February 2011
Bachelor of Science in Information Technology (BScIT) – Semester 2/
      Diploma in Information Technology (DIT) – Semester 2
   BT0068 – Computer Organization and Architecture – 4 Credits
                          (Book ID: B0952)
                  Assignment Set – 1 (60 Marks)


Answer all questions                                        10 x 6 = 60
   1. Convert the following decimal numbers to binary:
         a. 1231
         b. 673
         c. 1998
Ans:-
a. 10011001111
b. 1010100001
c. 11111001110
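
As a sketch only, the repeated division-by-2 method can be checked in a few lines of Python; the function below reproduces the three answers above.

```python
def to_binary(n: int) -> str:
    """Convert a non-negative decimal integer to a binary string
    by repeated division by 2 (remainders read bottom-up)."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))   # remainder is the next bit
        n //= 2                   # quotient carries on to the next step
    return "".join(reversed(bits))

for value in (1231, 673, 1998):
    print(value, "->", to_binary(value))
# 1231 -> 10011001111
# 673  -> 1010100001
# 1998 -> 11111001110
```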

   2. Give and explain one stage of logic circuit.

Ans:- Logic Microoperations:- Logic micro operations specify binary operations for
strings of bits stored in registers. These operations consider each bit of the register
separately and treat them as binary variables. For example, the exclusive-OR micro
operation with the contents of two registers R1 and R2 is symbolized by the statement

P: R1 ← R1 ⊕ R2

It specifies a logic micro operation to be executed on the individual bits of the registers
provided that the control variable P = 1. As a numerical example, assume that each
register has four bits. Let the content of R1 be 1010 and the content of R2 be 1100.
The exclusive-OR micro operation stated above symbolizes the following logic
computation:

1010 Content of R1

1100 Content of R2
0110 Content of R1 after P = 1

The content of R1, after the execution of the micro operation, is equal to the bit-by-bit
exclusive-OR operation on pairs of bits in R2 and previous values of R1. The logic
micro operations are seldom used in scientific computations, but they are very useful
for bit manipulation of binary data and for making logical decisions.
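
A minimal Python sketch of the micro operation P: R1 ← R1 ⊕ R2 from the example above (the 4-bit register width is taken from the example; the function name is illustrative):

```python
def xor_microoperation(r1: int, r2: int, p: int, width: int = 4) -> int:
    """Model of the logic micro operation P: R1 <- R1 XOR R2.
    The transfer happens only when the control variable P = 1."""
    mask = (1 << width) - 1          # keep the result to 'width' bits
    if p == 1:
        r1 = (r1 ^ r2) & mask        # bit-by-bit exclusive-OR
    return r1

r1, r2 = 0b1010, 0b1100
print(format(xor_microoperation(r1, r2, p=1), "04b"))  # 0110, as in the example
```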

Special symbols will be adopted for the logic micro operations OR, AND, and
complement, to distinguish them from the corresponding symbols used to express
Boolean functions. The symbol V will be used to denote an OR micro operation and
the symbol ∧ to denote an AND micro operation. The complement micro operation is
the same as the 1’s complement and uses a bar on top of the symbol that denotes the
register name. By using different symbols, it will be possible to differentiate between a
logic micro operation and a control (or Boolean) function. Another reason for adopting
two sets of symbols is to be able to distinguish the symbol +, when used to symbolize
an arithmetic plus, from a logic OR operation. Although the + symbol has two
meanings, it will be possible to distinguish between them by noting where the symbol
occurs. When the symbol + occurs in a micro operation, it will denote an arithmetic
plus. When it occurs in a control (or Boolean) function, it will denote an OR operation.
We will never use it to symbolize an OR micro operation.



   3. Explain Von Neumann Architecture.
Ans:-




The IAS computer was among the first digital computers to employ the von Neumann architecture.
The general structure of the IAS computer is as shown in figure 3.10:
• A main memory, which stores both instructions and data
• An arithmetic and logic unit (ALU) capable of operating on binary data
• A control unit, which interprets the instructions in memory and causes them to be executed
• Input and Output (I/O) equipment operated by the control unit

The von Neumann Architecture is based on three key concepts:

   1. Data and instructions are stored in a single read-write memory.
   2. The content of this memory is addressable by location, without regard to the
      type of data contained therein.
   3. Execution occurs in a sequential fashion (unless explicitly modified) from one
      instruction to the next.




   4. Compare the register organizations of 8085, Z8000 and MC68000.
Ans:- It is instructive to examine and compare the register organization of
comparable systems. In this section we discuss the register organization of 16-bit
microprocessors that were developed at about the same time.

Z8000 register organization:- Only the purely internal register structure is described
here; memory address registers are not shown. The Z8000 has sixteen 16-bit
general-purpose registers, which can be used for data, addresses and indexing. The
designers of this machine felt that it was more useful to provide a regularized, general
set of registers than to save instruction bits by using special-purpose registers. How
functions are assigned to these registers is the responsibility of the programmer, and
the functional breakdown may differ from application to application. A segmented
address space uses a 7-bit segment number and a 16-bit offset, so two registers are
needed to hold a single address. There are two additional registers, called stack
pointers, used for stack operations: one for system mode and one for normal mode.

MC68000 register organization:- This machine uses a structure that falls between the
Zilog Z8000 and the Intel 8086. The register organization of the MC68000 is shown in
figure 4.8. The MC68000 partitions its 32-bit registers into eight data registers and
nine address registers. The data registers are used for data manipulation and, in
addressing, only as index registers. The width of the data registers allows 8-bit, 16-bit
and 32-bit data operations, depending on the opcode. The address registers contain
32-bit addresses; segmentation is not supported. Two of the address registers are
used as stack pointers, one for the user and one for the operating system, depending
upon the current execution mode. Both stack pointers are numbered seven (A7), as
only one can be used at a time. The MC68000 also has a program counter and a
status register, as in the other two machines. The program counter is a 32-bit register
and the status register is 16 bits. Like the Zilog team, the Motorola team favoured a
regular instruction set with no special-purpose registers; for the sake of code
efficiency they divided the registers into functional components, saving one bit in each
register specifier.

   5. Give the advantages and disadvantages of physical and functional buses.

Ans:- Data bus: A bus which carries a word to or from memory is called a data bus.
Its width is equal to the word length of the memory, and it provides a means for
moving data between the different modules of a system. The data bus usually consists
of 8, 16 or 32 separate lines; the number of lines determines how many bits can be
transferred at a time, i.e., the width of the data bus.

Address bus: A bus that carries the address of data in the memory. Its width is equal
to the number of bits in the Memory Address Register (MAR) of the memory.
Example: If a computer memory has 64K, 32-bit words, then the data bus will be 32-
bits wide and the address bus will be 16-bits wide.
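
A minimal sketch of the width calculation used in this example (64K words of 32 bits each):

```python
import math

words = 64 * 1024        # 64K addressable words
word_length = 32         # bits per word

address_bus_width = int(math.log2(words))  # bits needed to address every word
data_bus_width = word_length               # one word transferred at a time

print(address_bus_width, data_bus_width)   # 16, 32 -- matching the example
```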

Control bus: A bus that carries the control signals between the various units of the
computer. The processor has to send READ and WRITE commands to the memory,
each of which requires a single line; a START command is needed for the I/O units.
All these signals are carried by the control bus.

Types of Control Lines

· Memory Write: Causes data on the bus (data bus) to be written into the addressed
location.

· Memory Read: Causes data from the addressed location to be placed on the bus
(data bus).

· I/O Write: Causes data on the data bus to be output to the addressed I/O port.

· I/O Read: Causes data from the addressed I/O port to be placed on the bus (data
bus).

· Transfer ACK: Indicates that data has been accepted from or placed on the bus.

· Bus Request: Indicates that a module needs to gain control of the bus.

· Bus Grant: Indicates that a requesting module has been granted control of the bus.

· Interrupt Request: Indicates that an interrupt is pending.

· Interrupt ACK: Acknowledges that the pending interrupt has been recognized.

· Clock: Used to synchronize operations.

· Reset: Initializes all modules.

  6. Explain different types of binary codes.
Ans:-
7. Give and explain the block diagram of the bus system with four registers.
Ans:-
  8. Explain the System Bus structure.
  9. Explain the register organization of 8086.

Ans:- 8086 register organization:- In this machine every register is a special-purpose
register, although some registers also serve as general-purpose registers. The 8086
contains four 16-bit data registers that are accessible on a byte or 16-bit basis, and
four 16-bit pointer and index registers. The data registers can be used as general-purpose
registers in some instructions; in other instructions the registers are used
implicitly.

Example: A multiply instruction always uses the accumulator. The four pointer and
index registers, each holding a segment offset, are also used implicitly in a number of
operations. There are also four segment registers, three of which are used in a
dedicated, implicit way to point to the segment of the current instruction, a segment
containing data, and a segment containing the stack. This structure is useful in branch
operations. The dedicated and implicit uses provide compact encoding at the cost of
reduced flexibility. The 8086 also includes an instruction pointer and a set of 1-bit
status and control flags.

   10. Explain the single bus structure.

Ans:- Single Bus System:- In this type of interconnection, the three units (processor,
memory and I/O) share a single bus, so information can be transferred between only
two units at a time. Here the I/O units use the same address space as the memory.
This simplifies the programming of I/O units, as no special I/O instructions are needed,
and is one of the advantages of the single-bus organization.
The transfer of information over a bus cannot be done at a speed comparable to the
operating speed of all the devices connected to the bus. Some electromechanical
devices such as keyboards and printers are very slow whereas disks and tapes are
considerably faster. Main memory and processors operate at electronic speeds. Since
all the devices must communicate over the bus, it is necessary to smooth out the
differences in timings among all the devices.

A common approach is to include a buffer register with each device to hold the
information during transfers. To illustrate this, consider the transfer of an encoded
character from the processor to a character printer, where it is to be printed. The
processor sends the character to the printer's output buffer register over the bus.
Since the buffer is an electronic register, this transfer requires relatively little time. The
printer then starts printing; at this point the bus and the processor are no longer
needed and can be released for other activities. The buffer register is not available for
other transfers until the printing is completed. Thus the buffer register smooths out the
timing differences between the processor, memory and I/O devices, and allows the
processor to switch rapidly from one device to another, interleaving its processing
activity with data transfers involving several I/O devices.
February 2011
Bachelor of Science in Information Technology (BScIT) – Semester 2/
      Diploma in Information Technology (DIT) – Semester 2
   BT0068 – Computer Organization and Architecture – 4 Credits
                          (Book ID: B0952)
                  Assignment Set – 2 (60 Marks)


Answer all questions                                      10 x 6 = 60
  1. Give the details of data types specified for VAX & IBM 370 machines.

Ans:- Logical data:- Logical values, also called Boolean values, take only the values
1 = true and 0 = false, so memory can be used very efficiently for their storage.

IBM 370 Data types:- The IBM S/370 architecture provides the following data types.

· Binary integer: Binary integers may be either unsigned or signed. Signed binary
integers are stored in 2's complement form. Allowable lengths are 16 and 32 bits.

· Floating point: Floating-point numbers of length 32, 64, and 128 bits are allowed.
All use a 7-bit exponent field.

· Decimal: Arithmetic on packed decimal integers is provided. The length is from 1 to
16 bytes. The rightmost 4 bits of the rightmost byte hold the sign. Hence signed
numbers from 1 to 31 decimal digits can be represented.

· Binary logical: Operations are defined for data units of length 8, 32, and 64 bits, and
for variable-length logical data of up to 256 bytes.

· Character: EBCDIC is used.

VAX data types:- The VAX provides an impressive array of data types. It is a byte-oriented
machine: all data types are expressed in terms of bytes, including the 16-bit word, the
32-bit longword, the 64-bit quadword and even the 128-bit octaword. The VAX
provides the following five types of data.

· Binary integer: Binary integers are usually treated as being in 2's complement form,
but they can also be treated and operated on as unsigned integers. Allowable lengths
are 8, 16, 32, 64, and 128 bits.

· Floating point: It provides four different types of representations. They are

· F: 32 bits with an 8-bit exponent

· D: 64 bits with an 8-bit exponent

· G: 64 bits with an 11-bit exponent

· H: 128 bits with a 15-bit exponent

The F type is the normal or default representation. D is the usual double-precision
representation. G and H are provided for a variety of applications, giving successively
increasing range and precision over F.

· Decimal: Arithmetic on packed decimal integers is provided. Two formats are
provided.

Packed decimal strings: The length is from 1 to 16 bytes with 4 bits holding the sign.

Unpacked numeric strings: Store one digit per byte in ASCII representation, with a
length of up to 31 bytes.

· Variable bit field: These are small integers packed together in a larger data unit. A bit
field is specified by three operands: the address of the byte containing the start of the
field, the starting bit position within that byte, and the length of the field in bits. This
data type is used to increase memory efficiency.

· Character: ASCII is used.
2. Discuss various number representations in a computer system.

Ans:- Number Representations:- Computers are built using logic circuits that operate
on information represented by two valued electrical signals. We label the two values
as 0 and 1; and we define the amount of information represented by such a signal as a
bit of information, where bit stands for binary digit. The most natural way to represent
a number in a computer system is by a string of bits, called a binary number. A text
character can also be represented by a string of bits called a character code. We will
first describe binary number representations and arithmetic operations on these
numbers, and then describe character representations.

Non-negative Integers:- The easiest numbers to represent are the non-negative
integers.

Negative Integers:-Things are easy as long as we stick to non-negative integers.
They become more complicated when we want to represent negative integers as well.
In binary arithmetic, we simply reserve one bit to determine the sign. In the circuitry for
addition, we would have one circuit for adding two numbers, and another for
subtracting two numbers. The combination of signs of the two inputs would determine
which circuit to use on the absolute values, as well as the sign of the output. While this
method works, it turns out that there is one that is much easier to deal with by
electronic circuits. This method is called the ‘two’s complement’ method. It turns out
that with this method, we do not need a special circuit for subtracting two numbers. In
order to explain this method, we first show how it would work in decimal arithmetic with
infinite precision. Then we show how it works with binary arithmetic, and finally how it
works with finite precision.

Infinite-Precision Ten’s Complement:- Imagine the odometer of an automobile. It
has a certain number of wheels, each with the ten digits on it. When one wheel goes
from 9 to 0, the wheel immediately to the left of it advances by one position.

Finite-Precision Ten’s Complement:- What we have said in the previous section
works almost as well with a fixed, bounded number of odometer wheels.
Finite-Precision Two’s Complement:- So far, we have studied the representation of
negative numbers using ten’s complement. In a computer, we prefer using base two
rather than base ten.
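
A small Python sketch of finite-precision two's complement (the 8-bit width here is an assumption chosen purely for illustration):

```python
def to_twos_complement(value: int, bits: int = 8) -> int:
    """Return the unsigned bit pattern representing 'value'
    in 'bits'-bit two's complement."""
    return value & ((1 << bits) - 1)

def from_twos_complement(pattern: int, bits: int = 8) -> int:
    """Interpret an unsigned bit pattern as a signed two's-complement value."""
    if pattern & (1 << (bits - 1)):          # sign bit set -> negative
        return pattern - (1 << bits)
    return pattern

print(format(to_twos_complement(-5), "08b"))   # 11111011
# No special subtraction circuit is needed: a - b == a + (-b) modulo 2**bits
a, b = 7, 5
diff = (to_twos_complement(a) + to_twos_complement(-b)) & 0xFF
print(from_twos_complement(diff))              # 2
```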

Rational Numbers:- Integers are useful, but sometimes we need to compute with
numbers that are not integers. An obvious idea is to use rational numbers. Many
algorithms, such as the simplex algorithm for linear optimization, use only rational
arithmetic whenever the input is rational. There is no particular difficulty in representing
rational numbers in a computer. It suffices to have a pair of integers, one for the
numerator and one for the denominator. To implement arithmetic on rational numbers,
we can use some additional restrictions on our representation. We may, for instance,
decide that:

· Positive rational numbers are always represented as two positive integers (the other
possibility is as two negative numbers),

· Negative rational numbers are always represented with a negative numerator and a
positive denominator (the other possibility is with a positive numerator and a negative
denominator),

· The numerator and the denominator are always relatively prime (they have no
common factors).

Such a set of rules makes sure that our representation is canonical, i.e., that the
representation for a value is unique, even though, a priori, many representations would
work.

Circuits for implementing rational arithmetic would have to take such rules into
account. In particular, the last rule would imply dividing the two integers resulting from
every arithmetic operation with their largest common factor to obtain the canonical
representation.
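
A short sketch of these canonicalization rules (the function name is illustrative; Python's math.gcd does the reduction):

```python
from math import gcd

def canonical_rational(num: int, den: int) -> tuple[int, int]:
    """Reduce a rational number to the canonical form described above:
    positive denominator, sign carried by the numerator, no common factors."""
    if den == 0:
        raise ZeroDivisionError("denominator must be non-zero")
    if den < 0:                      # keep the sign in the numerator
        num, den = -num, -den
    g = gcd(num, den)                # divide out the largest common factor
    return num // g, den // g

print(canonical_rational(4, -6))     # (-2, 3)
```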

Rational numbers and rational arithmetic is not very common in the hardware of a
computer. The reason is probably that rational numbers don’t behave very well with
respect to the size of the representation. For rational numbers to be truly useful, their
components, i.e., the numerator and the denominator both need to be arbitrary-
precision integers. As we have mentioned before, arbitrary precision anything does not
go very well with fixed-size circuits inside the CPU of a computer.

   3. Explain the addition of two floating point numbers with examples.

Ans:- The steps of a floating-point addition:

1. The exponents of the two floating-point numbers to be added are compared to find
the number with the smaller exponent.

2. The significand of that number is shifted so that the exponents of the two numbers
agree.

3. The significands are added.

4. The result of the addition is normalized.

5. Checks are made to see if any floating-point exceptions occurred during the
addition, such as overflow.

6. Rounding occurs.

Floating-Point Addition Example

Example: s = x + y

· numbers to be added are x = 1234.00 and y = -567.8

· these are represented in decimal notation with a mantissa (significand) of four digits

· six stages (A – F) are required to complete the addition.
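
A hedged sketch of the addition using Python's Decimal to model a four-digit significand; the rounding mode is an assumption, since the text does not specify one.

```python
from decimal import Decimal, ROUND_HALF_EVEN

def fp_add(x: Decimal, y: Decimal, digits: int = 4) -> Decimal:
    """Decimal model of the six steps: compare exponents, align, add,
    normalize, check for exceptions, round to 'digits' significand digits."""
    # Steps 1-3: Decimal performs the alignment and addition exactly.
    s = x + y
    if s == 0:
        return s
    # Steps 4 and 6: normalize and round to a four-digit significand.
    exponent = s.adjusted()                      # position of the leading digit
    quantum = Decimal(1).scaleb(exponent - digits + 1)
    return s.quantize(quantum, rounding=ROUND_HALF_EVEN)

print(fp_add(Decimal("1234.00"), Decimal("-567.8")))   # 666.2
```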

   4. Discuss the organization of main memory.

Ans:- Main Memory:- The main memory stores data and instructions. Main memories
are usually built from dynamic IC’s known as dynamic RAMs. These semiconductor
ICs can also implement static memories referred to as static RAMs (SRAMs). SRAMs
are faster but cost per bit is higher. These are often used to build caches.

Types of Random-Access Semiconductor Memory:

Dynamic RAM (DRAM): Example: Charge in capacitor. It requires periodic refreshing.

Static RAM (SRAM): Example: Flip-flop logic-gates. Applying power is enough (no
need for refreshing). Dynamic RAM is simpler and hence smaller than the static RAM.
Therefore they are denser and less expensive. But it requires supporting refresh
circuitry. Static RAMs are faster than dynamic RAMs.

ROM: The data is actually wired in the factory. It can never be altered.

PROM: Programmable ROM. It can be programmed only once after its fabrication,
and requires a special device for programming.

EPROM: Erasable Programmable ROM. It can be programmed multiple times, but the
whole capacity must be erased by ultraviolet radiation before each new programming;
it cannot be partially reprogrammed.

EEPROM: Electrically Erasable Programmable ROM. It is erased and programmed
electrically and can be partially reprogrammed. A write operation takes considerably
longer than a read operation.

In general, the more functional a ROM is, the more expensive it is to build and the
smaller its capacity compared with less functional ROMs.

Organization

Basic element of semiconductor memory is the memory cell. All semiconductor
memory cells have certain properties:

· Have two stable states that represent binary 0 and 1.

· Capable of being written into (at least once), to set the state.
· Capable of being read to sense the state.

Individual cells can be selected for reading and writing operations.

The cell has three functional terminals, as shown in figure 9.1. The select terminal
selects a cell for a read or write operation, the control terminal indicates read or write,
and the third terminal is used to write into the cell, that is, to set the state of the cell to
0 or 1. Similarly, for a read operation the third terminal is used to output the state of
the cell.




   5. Explain various replacement algorithms.

Ans:- Replacement algorithms:- For any set-associative mapping a replacement
algorithm is needed; the most common algorithms are discussed here. When a new
block is to be brought into the cache and all the positions it may occupy are full, the
cache controller must decide which of the old blocks to overwrite. Because programs
usually stay in localized areas for a reasonable period of time, there is a high
probability that blocks that have been referenced recently will be referenced again
soon. Therefore, when a block is to be overwritten, the block that has not been
referenced for the longest time is the one overwritten. This block is called the
least-recently-used (LRU) block, and the technique is called the LRU replacement
algorithm. To use the LRU algorithm, the cache controller must track the LRU block as
computation proceeds.

There are several replacement algorithms that require less overhead than the LRU
method. One method is to remove the oldest block from a full set when a new block
must be brought in; this is referred to as FIFO. In this technique no updating is needed
when a hit occurs, but because the algorithm does not consider recent patterns of
access to blocks in the cache, it is not as effective as the LRU approach in choosing
the best block to remove. Another method, called least frequently used (LFU),
replaces the block in the set that has experienced the fewest references; it is
implemented by associating a counter with each slot. The simplest algorithm, called
random replacement, is to choose the block to be overwritten at random.
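
A minimal sketch of LRU replacement for a single cache set, tracking recency with an ordered dictionary (the two-way set size in the usage example is an assumption):

```python
from collections import OrderedDict

class LRUSet:
    """One set of a set-associative cache with LRU replacement."""
    def __init__(self, ways: int = 4):
        self.ways = ways
        self.blocks = OrderedDict()          # block tag -> data, oldest first

    def access(self, tag, data=None):
        if tag in self.blocks:               # hit: mark block most recently used
            self.blocks.move_to_end(tag)
            return "hit"
        if len(self.blocks) >= self.ways:    # miss on a full set: evict LRU block
            self.blocks.popitem(last=False)
        self.blocks[tag] = data
        return "miss"

cache_set = LRUSet(ways=2)
print([cache_set.access(t) for t in ("A", "B", "A", "C", "B")])
# ['miss', 'miss', 'hit', 'miss', 'miss'] -- 'B' was evicted when 'C' arrived
```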

    6. Discuss the different categories of instructions.
Ans:- In most modern CPUs the first byte of an instruction contains the opcode, which
in some instructions also includes a register reference; the operand references are in
the following bytes.
The different categories of instructions are given below:

Example: High level language statement: X = X + Y

If we assume a simple set of machine instructions, this operation could be
accomplished with three instructions: (assume X is stored in memory location 624, and
Y in memory loc. 625.)

1. Load a register with the contents of memory location 624.

2. Add the contents of memory location 625 to the register,

3. Store the contents of the register in memory location 624.

As can be seen, a simple "C" (or BASIC) statement may require three machine instructions.
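
A toy sketch of the three-instruction sequence; the memory locations 624 and 625 come from the example above, while the initial values are made up:

```python
memory = {624: 10, 625: 32}    # X at location 624, Y at location 625
register = 0

# 1. Load a register with the contents of memory location 624.
register = memory[624]
# 2. Add the contents of memory location 625 to the register.
register = register + memory[625]
# 3. Store the contents of the register in memory location 624.
memory[624] = register

print(memory[624])             # 42 -- the effect of the single statement X = X + Y
```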

As we have seen before, instructions fall into one of the following four categories:

· Data processing: Arithmetic and logic instructions.

· Data storage: Memory instructions.

· Data movement: I/O instructions.

· Control: Test and branch instructions.
Number of Addresses:- Virtually all arithmetic and logic operations are either unary
(one operand) or binary (two operands). The result of an operation must be stored,
suggesting a third address. Finally, after the completion of an instruction, the next
instruction must be fetched, and its address is needed.

This line of reasoning suggests that an instruction could be required to contain 4
address references: two operands, one result, and the address of the next instruction.
In practice, the address of the next instruction is handled by the Program Counter
(PC); therefore most instructions have one, two or three operand addresses. Three-
address instruction formats are not common, because they require a relatively long
instruction format to hold three address references.


   7. Explain various operations of ALU.

Ans:- The ALU is the part of the CPU that actually performs arithmetic and logical operations
on data. All of the other elements of the computer system – control unit, registers, memory, I/O
– are there mainly to bring data into the ALU for it to process and then to take the results back out.




ALU Inputs and Outputs

The inputs and outputs of the ALU are shown in figure 7.1. The inputs to the ALU are the
control signals generated by the control unit of the CPU and the CPU registers holding the
operands to be manipulated. The outputs are a register called the status word or flag register,
which reflects the result of the operation, and the CPU registers where the result is stored.
Thus data are presented to the ALU in registers, and the results of an operation are also stored
in registers; these registers are connected to the ALU by signal paths. The ALU does not
interact directly with memory or other parts of the system (e.g. I/O modules); it interacts
directly only with registers. Like all other electronic components of a computer, an ALU is
based on simple digital devices that store binary digits and perform Boolean logic operations.

The control unit is responsible for moving data to memory or I/O modules. Also, it is the
control unit that signals all the operations that happen in the CPU. The operations, functions
and implementation of Control Unit will be discussed in the tenth unit.
In this unit we will concentrate on the ALU. An important part of the use of logic circuits is for
computing various mathematical operations such as addition, multiplication, trigonometric
operations, etc. Hence we will be discussing the arithmetic involved in using ALU.

First, before discussing the computer arithmetic we must have a way of representing numbers
as binary data.


   8. Discuss the different formats of floating point numbers.

Ans:- Floating Point Numbers:- Instead of using the obvious representation of rational
numbers presented in the previous section, most computers use a different representation of a
subset of the rational numbers. We call these numbers floating-point numbers.

Floating-point numbers use inexact arithmetic, and in return require only a fixed-size
representation. For many computations (so-called scientific computations, as if other
computations weren’t scientific) such a representation has the great advantage that it is fast,
while at the same time usually giving adequate precision.

There are some (sometimes spectacular) exceptions to the “adequate precision” statement in the
previous paragraph, though. As a result, an entire discipline of applied mathematics, called
numerical analysis, has been created for the purpose of analyzing how algorithms behave with
respect to maintaining adequate precision, and of inventing new algorithms with better
properties in this respect.

The basic idea behind floating-point numbers is to represent a number as a mantissa and an
exponent, each with a fixed number of bits of precision. If we denote the mantissa by m and
the exponent by e, then the number thus represented is m × 2^e.

Again, we have a problem that a number can have several representations. To obtain a
canonical form, we simply add a rule that m must be greater than or equal to 1/2 and strictly
less than 1. If we write such a mantissa in binal (analogous to decimal) form, we always get a
number that starts with 0.1. This initial information therefore does not have to be represented,
and we represent only the remaining “binals”.

The reason floating-point representations work well for so-called scientific applications is that
we more often need to multiply or divide two numbers. Multiplication of two floating-point
numbers is easy to obtain. It suffices to multiply the mantissas and add the exponents. The
resulting mantissa might be smaller than 1/2, in fact, it can be as small as 1/4. In this case,
the result needs to be canonicalized. We do this by shifting the mantissa left by one position
and subtracting one from the exponent. Division is only slightly more complicated. Notice that
the imprecision in the result of a multiplication or a division is only due to the imprecision in
the original operands. No additional imprecision is introduced by the operation itself (except
possibly 1 unit in the least significant digit). Floating-point addition and subtraction do not
have this property.
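
A small sketch of the multiplication rule just described, holding the mantissa as a Python float in [1/2, 1) purely for illustration (a real ALU works on fixed-width bit fields):

```python
def fp_multiply(m1: float, e1: int, m2: float, e2: int) -> tuple[float, int]:
    """Multiply two floating-point numbers given as (mantissa, exponent),
    with 1/2 <= mantissa < 1; the value represented is m * 2**e."""
    m, e = m1 * m2, e1 + e2          # multiply mantissas, add exponents
    while m < 0.5:                   # result can be as small as 1/4:
        m, e = m * 2, e - 1          # shift left, decrement the exponent
    return m, e

# 0.75 * 2^3 (= 6) times 0.5 * 2^2 (= 2) -> 0.75 * 2^4 (= 12)
print(fp_multiply(0.75, 3, 0.5, 2))  # (0.75, 4)
```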

To add two floating-point numbers, the one with the smallest exponent must first have its
mantissa shifted right by n steps, where n is the difference of the exponents. If n is greater than
the number of bits in the representation of the mantissa, the second number will be treated as 0
as far as addition is concerned. The situation is even worse for subtraction (or addition of one
positive and one negative number). If the numbers have roughly the same absolute value, the
result of the operation is roughly zero, and the resulting representation may have no correct
significant digits.

The two’s complement representation that we have mentioned above is mostly useful for
addition and subtraction. It only complicates things for multiplication and division. For
multiplication and division, it is better to use a representation with sign + absolute value. Since
multiplication and division is more common with floating-point numbers, and since they result
in multiplication and division of the mantissa, it is more advantageous to have the mantissa
represented as sign + absolute value. The exponents are added, so it is more common to use
two’s complement (or some related representation) for the exponent.

Usually, computers manipulate data in chunks of 8, 16, 32, 64, or 128 bits. It is therefore
useful to fit a single floating-point number with both mantissa and exponent in such a chunk. In
such a chunk, we need to have room for the sign (1 bit), the mantissa, and the exponent. While
there are many different ways of dividing the remaining bits between the mantissa and the
exponent, in practice most computers now use the IEEE 754 standard, which mandates the
formats shown in the figure.
9. Explain the characteristics of memory system.

Ans:- Characteristics of Memory Systems:- Memory systems are classified according to
their key characteristics. The most important are listed below:

Location

The classification of memory is done according to the location of the memory as:

· CPU: The CPU requires its own local memory in the form of registers and also the control
unit requires local memories which are fast accessible. We have already studied this in detail in
our earlier discussions.

· Internal (main): Internal memory is often equated with main memory, but there are other
forms of internal memory as well. We will discuss internal memory in the coming sections of
this unit.

· External (secondary): It consists of peripheral storage devices such as hard disks, magnetic
disks, magnetic tapes, CDs, etc.

Capacity

Capacity is one of the important aspects of the memory.
Word size: Word size is the natural unit of organization of memory. The size of the word is
typically equal to the number of bits used to represent a number and is equal to the instruction
length. But there are many exceptions. Common word lengths are 8, 16 and 32 bits.

Number of words: The addressable unit is the word in many systems. However external
memory capacity is generally expressed in terms of bytes.

Unit of Transfer

Unit of transfer for internal memory is equal to the number of data lines into and out of
memory module.

· Word: For internal memory, the unit of transfer is equal to the number of data lines into and
out of the memory module; it need not be equal to a word or an addressable unit.

· Block: For external memory, data are often transferred in much larger units than a word, and
these are referred to as blocks.

Access Method

· Sequential: Tape units have sequential access. Data are generally stored in units called
"records". Data is accessed sequentially; the records may be passed (or rejected) until the
record that is searched for is found. The access time to a certain record is highly variable.

· Direct: Individual blocks or records have a unique address based on physical location. A
block may contain a group of data. Access is accomplished by direct address to reach general
vicinity, plus sequential searching, counting or waiting to reach the final location. Disk units
have direct access.

· Random: Each addressable location in memory has a unique, physically wired-in addressing
mechanism. The time to access a given location is independent of the sequence of prior
accesses and constant. Any location can be selected at random and directly addressed and
accessed. Main memory and some cache systems are random access.

· Associative: This is a random-access type of memory that enables one to make a comparison
of desired bit locations within a word for a specified match, and to do this for all words
simultaneously. Thus, a word is retrieved based on a portion of its contents rather than its
address. Some cache memories may employ associative access.

Performance
· Access time: For random-access memory, this is the time it takes to perform a read or write
operation. That is, the time from the instant that an address is presented to the memory to the
instant that data have been stored or made available for use. For non-random-access memory,
access time is the time it takes to position the read-write mechanism at the desired location.

· Cycle time: Applied to random-access memory. It consists of the access time plus any
additional time required before a second access can commence.

· Transfer rate: This is the rate at which data can be transferred into or out of a memory unit.
For random-access memory, it is equal to 1/(cycle time).

For non-random-access memory, the following relationship holds:

Tn = Ta + N/R

where Tn = average time to read or write N bits, Ta = average access time, N = number of bits,
and R = transfer rate in bits per second (bps).
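
A small sketch of the relation Tn = Ta + N/R; the numeric values are made up for illustration:

```python
def average_transfer_time(ta: float, n_bits: int, rate_bps: float) -> float:
    """Tn = Ta + N / R for a non-random-access memory."""
    return ta + n_bits / rate_bps

# e.g. 10 ms average access time, 4096 bits to transfer, 1 Mbit/s transfer rate
print(average_transfer_time(ta=0.010, n_bits=4096, rate_bps=1_000_000))
# 0.014096 s, i.e. about 14.1 ms
```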

Physical Type

· Semiconductor: Main memory, cache. RAM, ROM.

· Magnetic: Magnetic disks (hard disks), magnetic tape units.

· Optical: CD-ROM, CD-RW.

· Magneto-Optical: The recording technology is fundamentally magnetic. However an optical
laser is used. The read operation is purely optical.

Physical Characteristics

· Volatile/Non-volatile: In a volatile memory, information decays naturally or is lost when
electrical power is switched off. In a non-volatile memory, information once recorded remains
without deterioration until deliberately changed; no electrical power is needed to retain
information. Magnetic-surface memories are nonvolatile. Semiconductor memories may be
either volatile or non-volatile.

· Erasable/Non-erasable: Non-erasable memory cannot be altered (except by destroying the
storage unit). ROMs are non-erasable.

Memory Hierarchy

Design constraints: How much? How fast? How expensive?

· Faster access time, greater cost per bit

· Greater capacity, smaller cost per bit,

· Greater capacity, slower access time.


    10. Discuss the physical characteristics of DISK.

Ans:- External Memory

Magnetic Disk:- A disk is a circular platter constructed of metal or of plastic coated with a
magnetic material. Data are recorded on and later retrieved from the disk via a conducting coil
named the head. During a read or write operation, the head is stationary while the platter rotates
beneath it. Writing is achieved by producing a magnetic field which records a magnetic pattern
on the magnetic surface.

Data Organization and Formatting:- The figure depicts the data layout of a disk. The head is
capable of reading from or writing to a portion of the platter rotating beneath it. This gives rise
to the organization of data on the platter in a concentric set of rings called tracks. Each track is
the same width as the head, and adjacent tracks are separated by gaps that minimize errors due
to misalignment of the head. Data are transferred to and from the disk in blocks, and a block is
smaller than the capacity of a track. Data are stored in block-sized regions, each an angular part
of a track, referred to as a sector. There are typically 10–100 sectors per track, and these may
be of either fixed or variable length.
Physical Characteristics

Head motion: Fixed-head disk (one head per track) or movable-head disk (one head per surface).

Disk portability: Nonremovable disk vs. removable disk.

Sides: Double-sided vs. single-sided.

Platters: Single-platter vs. multiple-platter disks.

Head mechanism: Contact (floppy), fixed gap, or aerodynamic gap (Winchester [= hard disk]).

Disk Performance Parameters

1. Seek time: Time required to move the disk arm (head) to the required track. It can be
estimated as Ts = m × n + s, where Ts = estimated seek time, n = number of tracks traversed,
m = a constant that depends on the disk drive, and s = startup time.
2. Rotational delay: Time required for the disk to rotate until the wanted sector lies beneath the head.

3. Transfer time: T = b / (r N), where T = transfer time, b = number of bytes to be transferred,
N = number of bytes on a track, and r = rotation speed in revolutions per second.

4. Access time: Ta = total average access time, given by Ta = Ts + (1 / 2r) + (b / rN),
where Ts = average seek time.
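
A short sketch of the timing formulas above; the drive parameters in the usage line are illustrative, not taken from the text:

```python
def disk_access_time(ts: float, r: float, b: int, n: int) -> float:
    """Ta = Ts + 1/(2r) + b/(rN): average seek + rotational delay + transfer."""
    rotational_delay = 1 / (2 * r)    # half a revolution on average
    transfer_time = b / (r * n)       # fraction of a revolution spent reading b bytes
    return ts + rotational_delay + transfer_time

# e.g. 4 ms average seek, 7500 rpm = 125 rev/s, 4096-byte read, 512 KiB per track
print(disk_access_time(ts=0.004, r=125, b=4096, n=512 * 1024))
# ~0.008 s total
```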

RAID

1. RAID is a set of physical disk drives viewed by the operating system as a single logical
drive.

2. Data are distributed across the physical drives of an array.

3. Redundant disk capacity is used to store parity information, which guarantees data
recoverability in case of a disk failure.

Optical Memory & Magnetic Tape are other two external memories.

Mais conteúdo relacionado

Mais procurados

Mais procurados (17)

Architecture OF 8085
Architecture OF 8085Architecture OF 8085
Architecture OF 8085
 
Architecture of 8086 Microprocessor
Architecture of 8086 Microprocessor  Architecture of 8086 Microprocessor
Architecture of 8086 Microprocessor
 
INTEL 80386 MICROPROCESSOR
INTEL  80386  MICROPROCESSORINTEL  80386  MICROPROCESSOR
INTEL 80386 MICROPROCESSOR
 
8086 microprocessor
8086 microprocessor8086 microprocessor
8086 microprocessor
 
Module 4 advanced microprocessors
Module 4 advanced microprocessorsModule 4 advanced microprocessors
Module 4 advanced microprocessors
 
Module 2 instruction set
Module 2 instruction set Module 2 instruction set
Module 2 instruction set
 
8086ppt
8086ppt8086ppt
8086ppt
 
fco-lecture-8086
fco-lecture-8086fco-lecture-8086
fco-lecture-8086
 
Bt0068 computer organization and architecture
Bt0068 computer organization and architecture Bt0068 computer organization and architecture
Bt0068 computer organization and architecture
 
8086
80868086
8086
 
Introduction to 8086 microprocessor
Introduction to 8086 microprocessorIntroduction to 8086 microprocessor
Introduction to 8086 microprocessor
 
8085 archi
8085 archi8085 archi
8085 archi
 
Unit 1
Unit 1Unit 1
Unit 1
 
MPMC Microprocessor
MPMC MicroprocessorMPMC Microprocessor
MPMC Microprocessor
 
8086 conti
8086 conti8086 conti
8086 conti
 
8086 Architecture
8086 Architecture8086 Architecture
8086 Architecture
 
8086 architecture
8086 architecture8086 architecture
8086 architecture
 

Semelhante a Bt0068

8085 microprocessor
8085 microprocessor8085 microprocessor
8085 microprocessorgohanraw
 
Computer engineering - overview of microprocessors
Computer engineering - overview of microprocessorsComputer engineering - overview of microprocessors
Computer engineering - overview of microprocessorsEkeedaPvtLtd
 
MPMC UNIT-1. Microprocessor 8085 pdf Microprocessor and Microcontroller
MPMC UNIT-1. Microprocessor 8085 pdf Microprocessor and MicrocontrollerMPMC UNIT-1. Microprocessor 8085 pdf Microprocessor and Microcontroller
MPMC UNIT-1. Microprocessor 8085 pdf Microprocessor and MicrocontrollerRAHUL RANJAN
 
U proc ovw
U proc ovwU proc ovw
U proc ovwBrit4
 
8085 microprocessor
8085 microprocessor8085 microprocessor
8085 microprocessorAnuja Gunale
 
MICROPROCESSOR 8085 WITH PROGRAMS
MICROPROCESSOR 8085 WITH PROGRAMSMICROPROCESSOR 8085 WITH PROGRAMS
MICROPROCESSOR 8085 WITH PROGRAMSSabin Gautam
 
Microprocessor and microcontroller (MPMC).pdf
Microprocessor and microcontroller (MPMC).pdfMicroprocessor and microcontroller (MPMC).pdf
Microprocessor and microcontroller (MPMC).pdfXyzjakhaAbhuvs
 
An introduction to microprocessor architecture using INTEL 8085 as a classic...
An introduction to microprocessor  architecture using INTEL 8085 as a classic...An introduction to microprocessor  architecture using INTEL 8085 as a classic...
An introduction to microprocessor architecture using INTEL 8085 as a classic...Prasad Deshpande
 
8085 Architecture
8085 Architecture8085 Architecture
8085 Architecturedeval patel
 
Microprocessor 8085
Microprocessor 8085Microprocessor 8085
Microprocessor 8085Dhaval Barot
 

Semelhante a Bt0068 (20)

8085
80858085
8085
 
8085 microprocessor
8085 microprocessor8085 microprocessor
8085 microprocessor
 
8085
80858085
8085
 
8085
80858085
8085
 
EE8551 MPMC
EE8551  MPMCEE8551  MPMC
EE8551 MPMC
 
Computer engineering - overview of microprocessors
Computer engineering - overview of microprocessorsComputer engineering - overview of microprocessors
Computer engineering - overview of microprocessors
 
MPMC UNIT-1. Microprocessor 8085 pdf Microprocessor and Microcontroller
MPMC UNIT-1. Microprocessor 8085 pdf Microprocessor and MicrocontrollerMPMC UNIT-1. Microprocessor 8085 pdf Microprocessor and Microcontroller
MPMC UNIT-1. Microprocessor 8085 pdf Microprocessor and Microcontroller
 
U proc ovw
U proc ovwU proc ovw
U proc ovw
 
8085 microprocessor
8085 microprocessor8085 microprocessor
8085 microprocessor
 
8085 (1)
8085 (1)8085 (1)
8085 (1)
 
8085 intro
8085 intro8085 intro
8085 intro
 
MICROPROCESSOR 8085 WITH PROGRAMS
MICROPROCESSOR 8085 WITH PROGRAMSMICROPROCESSOR 8085 WITH PROGRAMS
MICROPROCESSOR 8085 WITH PROGRAMS
 
microprocessor
 microprocessor microprocessor
microprocessor
 
Microprocessor and microcontroller (MPMC).pdf
Microprocessor and microcontroller (MPMC).pdfMicroprocessor and microcontroller (MPMC).pdf
Microprocessor and microcontroller (MPMC).pdf
 
lecture1423813120.pdf
lecture1423813120.pdflecture1423813120.pdf
lecture1423813120.pdf
 
An introduction to microprocessor architecture using INTEL 8085 as a classic...
An introduction to microprocessor  architecture using INTEL 8085 as a classic...An introduction to microprocessor  architecture using INTEL 8085 as a classic...
An introduction to microprocessor architecture using INTEL 8085 as a classic...
 
8085.ppt
8085.ppt8085.ppt
8085.ppt
 
Microprocessor 8086
Microprocessor 8086Microprocessor 8086
Microprocessor 8086
 
8085 Architecture
8085 Architecture8085 Architecture
8085 Architecture
 
Microprocessor 8085
Microprocessor 8085Microprocessor 8085
Microprocessor 8085
 

Mais de Simpaly Jha (13)

Bt0071
Bt0071Bt0071
Bt0071
 
Bt0070
Bt0070Bt0070
Bt0070
 
Bt0072
Bt0072Bt0072
Bt0072
 
Shree Ganesh
Shree GaneshShree Ganesh
Shree Ganesh
 
B T0066
B T0066B T0066
B T0066
 
B T0065
B T0065B T0065
B T0065
 
B T0064
B T0064B T0064
B T0064
 
B T0062
B T0062B T0062
B T0062
 
Bt0064
Bt0064Bt0064
Bt0064
 
Bt0062
Bt0062Bt0062
Bt0062
 
Bt0066
Bt0066Bt0066
Bt0066
 
Bt0065
Bt0065Bt0065
Bt0065
 
Shree Ganesha!!!!!!!!!!!!!!!!!
Shree Ganesha!!!!!!!!!!!!!!!!!Shree Ganesha!!!!!!!!!!!!!!!!!
Shree Ganesha!!!!!!!!!!!!!!!!!
 

Último

"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek SchlawackFwdays
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Mattias Andersson
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationBeyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationSafe Software
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupFlorian Wilhelm
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piececharlottematthew16
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsSergiu Bodiu
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsMemoori
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024Lorenzo Miniero
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsMark Billinghurst
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Commit University
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebUiPathCommunity
 
Search Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfSearch Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfRankYa
 
Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxhariprasad279825
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfAlex Barbosa Coqueiro
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenHervé Boutemy
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticscarlostorres15106
 

Último (20)

"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationBeyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project Setup
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piece
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial Buildings
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio Web
 
Search Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfSearch Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdf
 
Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptx
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdf
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache Maven
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
 

Bt0068

  • 1. February 2011 Bachelor of Science in Information Technology (BScIT) – Semester 2/ Diploma in Information Technology (DIT) – Semester 2 BT0068 – Computer Organization and Architecture – 4 Credits (Book ID: B0952) Assignment Set – 1 (60 Marks) Answer all questions 10 x 6 = 60 1. Convert the following decimal numbers to binary: a. 1231 b. 673 c. 1998 Ans:- a. 10011001111 b. 1010100001 c. 11111001110 2. Give and explain one stage of logic circuit. Ans:- Logic Microoperations:- Logic micro operations specify binary operations for strings of bits stored in registers. These operations consider each bit of the register separately and treat them as binary variables. For example, the exclusive-OR micro operation with the contents of two registers R1 and R2 is symbolized by the statement It specifies a logic micro operation to be executed on the individual bits of the registers provided that the control variable P = 1. As a numerical example, assume that each register has four bits. Let the content of R1 be 1010 and the content of R2 be 1100. The exclusive-OR micro operation stated above symbolizes the following logic computation: 1010 Content of R1 1100 Content of R2
  • 2. 0110 Content of R1 after P = 1 The content of R1, after the execution of the micro operation, is equal to the bit-by-bit exclusive-OR operation on pairs of bits in R2 and previous values of R1. The logic micro operations are seldom used in scientific computations, but they are very useful for bit manipulation of binary data and for making logical decisions. Special symbols will be adopted for the logic micro operations OR, AND, and complement, to distinguish them from the corresponding symbols used to express Boolean functions. The symbol V will be used to denote an OR micro operation and the symbol ʌ to denote an AND micro operation. The complement micro operation is the same as the 1’s complement and uses a bar on top of the symbol that denotes the register name. By using different symbols, it will be possible to differentiate between a logic micro operation and a control (or Boolean) function. Another reason for adopting two sets of symbols is to be able to distinguish the symbol +, when used to symbolize an arithmetic plus, from a logic OR operation. Although the + symbol has two meanings, it will be possible to distinguish between them by noting where the symbol occurs. When the symbol + occurs in a micro operation, it will denote an arithmetic plus. When it occurs in a control (or Boolean) function, it will denote an OR operation. We will never use it to symbolize an OR micro operation. For example, in the statement.. 3. Explain Von Neumann Architecture. Ans:- IAS is the first digital computer in which the von Neumann Architecture was employed. The general structure of the IAS computer is as shown in figure 3.10:
  • 3. A main memory, which stores both instructions and data • An arithmetic and logic unit (ALU) capable of operating on binary data • A control unit, which interprets the instructions in memory and causes them to be executed • Input and Output (I/O) equipment operated by the control unit The von Neumann Architecture is based on three key concepts: 1. Data and instructions are stored in a single read-write memory. 2. The content of this memory is addressable by location, without regard to the type of data contained therein. 3. Execution occurs in a sequential fashion unless explicitly modified from one instruction to the next 4. Compare the register organizations of 8085, Z8000 and MC68000. Ans:- It is very much instructive to examine and compare the register organization of comparable systems. In this section we will discuss the register organization of 16-bit microprocessors that were developed more or less at the same time. The register organization of Z8000 machine. Here only purely internal registers structure is given and memory address registers are not shown. Z8000 consists of
sixteen 16-bit general-purpose registers, which can be used for data, addresses and indexing. The designers of this machine felt that it was more useful to provide a regularized, general set of registers than to save instruction bits by using special-purpose registers. Further, the way functions are assigned to these registers is the responsibility of the programmer; there might be a different functional breakdown for different applications. A segmented address space uses a 7-bit segment number and a 16-bit offset, so two registers are needed to hold a single address. There are two other registers, called stack pointers, that are needed for the stack mechanism: one is used in system mode and one in normal mode.

MC68000 register organization:- This machine uses a structure that falls between the Zilog Z8000 and the Intel 8086. The register organization of the MC68000 is as shown in figure 4.8. The MC68000 partitions its 32-bit registers into eight data registers and nine address registers. The data registers are used for data manipulation; in addressing they are used only as index registers. The width of the data registers allows 8-bit, 16-bit and 32-bit data operations, depending upon the opcode. The address registers contain 32-bit addresses; segmentation is not supported. Two of the address registers are used as stack pointers, one for users and one for the operating system, depending upon the current execution mode. Both stack pointers are numbered seven (A7), as only one can be used at a time. The MC68000 also has a program counter and a status register, as in the other two machines: the program counter is a 32-bit register and the status register is 16 bits wide. Like Zilog, the Motorola team supports a regular instruction set with no special-purpose registers. For code efficiency, the registers are divided into functional components (data and address), saving one bit in each register specifier.

   5. Give the advantages and disadvantages of physical and functional buses.

Ans:- Data bus: A bus which carries a word to or from memory is called a data bus. Its width is equal to the word length of the memory. It provides a means for moving data between the different modules of a system. The data bus usually consists of 8, 16 or 32 separate lines; the number of lines determines how many bits can be transferred at a time and is referred to as the width of the data bus.

Address bus: A bus used to carry the address of the data in memory; its width is equal to the number of bits in the Memory Address Register (MAR) of the memory.

Example: If a computer memory has 64K 32-bit words, then the data bus will be 32 bits wide and the address bus will be 16 bits wide.
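The widths in the example above follow directly from the word length and the number of addressable words. A minimal Python sketch (the helper name is my own, not part of the original text) that reproduces the calculation:

    import math

    def bus_widths(num_words, word_length_bits):
        # Data bus width equals the memory word length.
        data_bus = word_length_bits
        # Address bus width is the number of bits needed to address every word.
        address_bus = math.ceil(math.log2(num_words))
        return data_bus, address_bus

    # 64K words of 32 bits each -> 32-bit data bus, 16-bit address bus.
    print(bus_widths(64 * 1024, 32))   # (32, 16)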
Control bus: A bus that carries the control signals between the various units of the computer. The processor has to send READ and WRITE commands to the memory, each of which requires a single wire. A START command is necessary for the I/O units. All these signals are carried by the control bus.

Types of Control Lines
· Memory Write: Causes data on the data bus to be written into the addressed location.
· Memory Read: Causes data from the addressed location to be placed on the data bus.
· I/O Write: Causes data on the data bus to be output to the addressed I/O port.
· I/O Read: Causes data from the addressed I/O port to be placed on the data bus.
· Transfer ACK: Indicates that data has been accepted from or placed on the bus.
· Bus Request: Indicates that a module needs to gain control of the bus.
· Bus Grant: Indicates that a requesting module has been granted control of the bus.
· Interrupt Request: Indicates that an interrupt is pending.
· Interrupt ACK: Acknowledges that the pending interrupt has been recognized.
· Clock: Used to synchronize operations.
· Reset: Initializes all modules.

   6. Explain different types of binary codes.

Ans:-
   7. Give and explain the block diagram of the bus system with four registers.

Ans:-

   8. Explain the System Bus structure.

   9. Explain the register organization of 8086.

Ans:- 8086 register organization:- In this machine every register is a special-purpose register, although some registers also serve as general-purpose registers. The 8086 contains four 16-bit data registers that are accessible on a byte or 16-bit basis, and four 16-bit pointer and index registers. The data registers can be used as general-purpose registers in some instructions; in others they are used implicitly. Example: a multiply instruction always uses the accumulator. The four pointer and index registers, each holding a segment offset, are also used implicitly in a number of operations. There are also four segment registers. Three of these are used in a dedicated, implicit way to point to the segment of the current instruction, a segment containing data, and a segment containing the stack. This type of structure is useful in branch operations. The dedicated and implicit uses provide compact encoding at the cost of reduced flexibility. The 8086 also includes an instruction pointer and a set of 1-bit status and control flags.

   10. Explain the single bus structure.

Ans:- Single Bus System:- In this type of interconnection, the three units (processor, memory and I/O) share a single bus, so information can be transferred only between two units at a time. The I/O units use the same address space as memory; this simplifies programming of the I/O units, since no special I/O instructions are needed, and is one of the advantages of the single bus organization.
The transfer of information over a bus cannot be done at a speed comparable to the operating speed of all the devices connected to it. Some electromechanical devices, such as keyboards and printers, are very slow, whereas disks and tapes are considerably faster, and main memory and processors operate at electronic speeds. Since all the devices must communicate over the bus, it is necessary to smooth out the differences in timing among them. A common approach is to include buffer registers with the devices to hold the information during transfers.

To illustrate this, consider the transfer of an encoded character from the processor to a character printer, where it is to be printed. The processor sends the character to the printer's output buffer register over the bus. Since the buffer is an electronic register, this transfer requires relatively little time. The printer then starts printing; at this point the bus and the processor are no longer needed and can be released for other activities. The buffer register is not available for other transfers until the printing is completed. Thus the buffer register smooths out the timing differences between the processor, memory and I/O devices. This allows the processor to switch rapidly from one device to another, interweaving its processing activity with data transfers involving several I/O devices.
February 2011
Bachelor of Science in Information Technology (BScIT) – Semester 2/
      Diploma in Information Technology (DIT) – Semester 2
   BT0068 – Computer Organization and Architecture – 4 Credits
                          (Book ID: B0952)
                  Assignment Set – 2 (60 Marks)

Answer all questions                                        10 x 6 = 60

   1. Give the details of data types specified for VAX & IBM 370 machines.

Ans:- Logical Data:- With logical data, memory can be used most efficiently for storage. Logical values are also called Boolean values: 1 = true, 0 = false.

IBM 370 Data types:- The IBM S/370 architecture provides the following data types.
· Binary integer: Binary integers may be either unsigned or signed. Signed binary integers are stored in 2's complement form. Allowable lengths are 16 and 32 bits.
· Floating point: Floating-point numbers of length 32, 64 and 128 bits are allowed. All use a 7-bit exponent field.
· Decimal: Arithmetic on packed decimal integers is provided. The length is from 1 to 16 bytes; the rightmost 4 bits of the rightmost byte hold the sign. Hence signed numbers of 1 to 31 decimal digits can be represented.
· Binary logical: Operations are defined for data units of length 8, 32 and 64 bits, and for variable-length logical data of up to 256 bytes.
· Character: EBCDIC is used.

VAX Data types:- The VAX provides an impressive array of data types. It is a byte-oriented machine.
All data types are expressed in terms of bytes, including the 16-bit word, the 32-bit longword, the 64-bit quadword, and even the 128-bit octaword. The VAX provides the following five types of data.
· Binary integer: Binary integers are usually considered to be in 2's complement form; however, they can also be treated and operated on as unsigned integers. Allowable lengths are 8, 16, 32, 64 and 128 bits.
· Floating point: Four different representations are provided:
  · F: 32 bits with an 8-bit exponent
  · D: 64 bits with an 8-bit exponent
  · G: 64 bits with an 11-bit exponent
  · H: 128 bits with a 15-bit exponent
  The F type is the normal or default representation, and D is the usual double-precision representation. G and H are provided for a variety of applications, giving successively increasing range and precision over F.
· Decimal: Arithmetic on packed decimal integers is provided, in two formats (a small sketch of packed decimal follows this list). Packed decimal strings: the length is from 1 to 16 bytes, with 4 bits holding the sign. Unpacked numeric strings: one digit is stored per byte in ASCII representation, with a length of up to 31 bytes.
· Variable bit field: These are small integers packed together in a larger data unit. A bit field is specified by three operands: the address of the byte containing the start of the field, the starting bit position within that byte, and the length of the field in bits. This data type is used to increase memory efficiency.
· Character: ASCII is used.
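Packed decimal stores two decimal digits per byte, with the sign in the rightmost nibble of the rightmost byte, as described above. A small illustrative sketch (the function name and the sign codes 0xC for plus and 0xD for minus are the conventional S/370 choices, stated here as an assumption rather than taken from the text):

    def pack_decimal(value):
        # Encode a signed integer as packed BCD: one digit per nibble,
        # sign code in the low nibble of the last byte (0xC = +, 0xD = -).
        sign = 0xC if value >= 0 else 0xD
        digits = [int(d) for d in str(abs(value))]
        nibbles = digits + [sign]
        if len(nibbles) % 2:            # pad with a leading zero nibble if needed
            nibbles = [0] + nibbles
        return bytes(hi << 4 | lo for hi, lo in zip(nibbles[::2], nibbles[1::2]))

    print(pack_decimal(-1998).hex())    # '01998d' -> digits 1 9 9 8, sign nibble D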
   2. Discuss various number representations in a computer system.

Ans:- Number Representations:- Computers are built using logic circuits that operate on information represented by two-valued electrical signals. We label the two values 0 and 1, and we define the amount of information represented by such a signal as a bit of information, where bit stands for binary digit. The most natural way to represent a number in a computer system is by a string of bits, called a binary number. A text character can also be represented by a string of bits, called a character code. We first describe binary number representations and arithmetic operations on these numbers, and then describe character representations.

Non-negative Integers:- The easiest numbers to represent are the non-negative integers.

Negative Integers:- Things are easy as long as we stick to non-negative integers; they become more complicated when we want to represent negative integers as well. In binary arithmetic, we could simply reserve one bit to determine the sign. In the circuitry for addition, we would then need one circuit for adding two numbers and another for subtracting two numbers; the combination of the signs of the two inputs would determine which circuit to use on the absolute values, as well as the sign of the output. While this method works, there is one that is much easier for electronic circuits to deal with: the two's complement method. With this method, we do not need a special circuit for subtracting two numbers. In order to explain it, we first show how it would work in decimal arithmetic with infinite precision, then how it works with binary arithmetic, and finally how it works with finite precision.

Infinite-Precision Ten's Complement:- Imagine the odometer of an automobile. It has a certain number of wheels, each with the ten digits on it. When one wheel goes from 9 to 0, the wheel immediately to its left advances by one position.

Finite-Precision Ten's Complement:- What we have said in the previous section works almost as well with a fixed, bounded number of odometer wheels.
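To make the odometer picture concrete, here is a small sketch (my own illustration, not from the text) of a four-wheel decimal odometer: a negative value -x is stored as 10^4 - x, so subtraction becomes addition of the complement. The same idea with base 2 gives two's complement.

    WHEELS = 4
    MODULUS = 10 ** WHEELS            # 10000 positions on a 4-wheel odometer

    def tens_complement(x):
        # Represent x (positive or negative) as a 4-digit odometer reading.
        return x % MODULUS

    # -1 is stored as 9999, -567 as 9433.
    print(tens_complement(-1), tens_complement(-567))                  # 9999 9433

    # Subtraction becomes addition of the complement: 1234 - 567 = 667.
    print((tens_complement(1234) + tens_complement(-567)) % MODULUS)   # 667

    # With base 2 instead of base 10 this is exactly two's complement:
    print(format(-5 % (1 << 8), '08b'))                                # 11111011 = -5 in 8 bits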
Finite-Precision Two's Complement:- So far, we have studied the representation of negative numbers using ten's complement. In a computer, we prefer using base two rather than base ten.

Rational Numbers:- Integers are useful, but sometimes we need to compute with numbers that are not integers. An obvious idea is to use rational numbers. Many algorithms, such as the simplex algorithm for linear optimization, use only rational arithmetic whenever the input is rational. There is no particular difficulty in representing rational numbers in a computer: it suffices to have a pair of integers, one for the numerator and one for the denominator. To implement arithmetic on rational numbers, we can impose some additional restrictions on our representation. We may, for instance, decide that:
· Positive rational numbers are always represented as two positive integers (the other possibility is as two negative numbers),
· Negative rational numbers are always represented with a negative numerator and a positive denominator (the other possibility is with a positive numerator and a negative denominator),
· The numerator and the denominator are always relatively prime (they have no common factors).

Such a set of rules makes sure that our representation is canonical, i.e., that the representation of a value is unique, even though, a priori, many representations would work. Circuits for implementing rational arithmetic would have to take such rules into account. In particular, the last rule would imply dividing the two integers resulting from every arithmetic operation by their greatest common factor to obtain the canonical representation.
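A short sketch of the canonical form described by these rules (the helper is my own, not from the text): the sign is kept in the numerator, the denominator stays positive, and both are divided by their greatest common factor.

    from math import gcd

    def canonical(num, den):
        # Keep the sign in the numerator and make the denominator positive.
        if den < 0:
            num, den = -num, -den
        # Divide out the greatest common factor so numerator and denominator
        # are relatively prime.
        g = gcd(abs(num), den)
        return num // g, den // g

    print(canonical(4, -6))     # (-2, 3)
    print(canonical(10, 4))     # (5, 2)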
Rational numbers and rational arithmetic are not very common in the hardware of a computer. The reason is probably that rational numbers do not behave very well with respect to the size of the representation. For rational numbers to be truly useful, their components, i.e. the numerator and the denominator, both need to be arbitrary-precision integers, and as we have mentioned before, arbitrary precision does not go very well with the fixed-size circuits inside the CPU of a computer.

   3. Explain the addition of two floating point numbers with examples.

Ans:- The steps of a floating-point addition:
1. The exponents of the two floating-point numbers to be added are compared to find the number with the smaller magnitude.
2. The significand of the number with the smaller magnitude is shifted so that the exponents of the two numbers agree.
3. The significands are added.
4. The result of the addition is normalized.
5. Checks are made to see whether any floating-point exceptions occurred during the addition, such as overflow.
6. Rounding occurs.

Floating-Point Addition Example: s = x + y
· The numbers to be added are x = 1234.00 and y = -567.8.
· They are represented in decimal notation with a mantissa (significand) of four digits.
· Six stages (A – F) are required to complete the addition; a worked sketch follows.
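The following sketch (my own illustration of the steps above, not from the text) walks through the decimal example x = 1234.00, y = -567.8 with a four-digit significand: the smaller operand's significand is shifted so the exponents agree, the significands are added, and the result is normalized and rounded.

    # Operands as (significand, exponent): value = significand * 10**exponent,
    # with the significand kept in [0.1, 1) and limited to four digits.
    x = (0.1234, 4)     # 1234.00
    y = (-0.5678, 3)    # -567.8

    # Steps 1-2: align exponents by shifting the significand of the smaller number.
    (sx, ex), (sy, ey) = x, y
    if ex < ey:
        (sx, ex), (sy, ey) = (sy, ey), (sx, ex)
    sy = sy / 10 ** (ex - ey)          # shift right by the exponent difference

    # Step 3: add the significands.
    s = sx + sy                        # 0.1234 - 0.05678 = 0.06662

    # Step 4: normalize so the significand is at least 0.1.
    e = ex
    while s != 0 and abs(s) < 0.1:
        s *= 10
        e -= 1

    # Step 6: round back to four significant digits.
    s = round(s, 4)
    print(s, e, format(s * 10 ** e, 'g'))   # 0.6662 3 666.2  (1234.00 - 567.8 = 666.2)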
   4. Discuss the organization of main memory.

Ans:- Main Memory:- The main memory stores data and instructions. Main memories are usually built from dynamic ICs known as dynamic RAMs. Semiconductor ICs can also implement static memories, referred to as static RAMs (SRAMs). SRAMs are faster, but their cost per bit is higher; they are often used to build caches.

Types of Random-Access Semiconductor Memory
Dynamic RAM (DRAM): Example: charge stored in a capacitor. It requires periodic refreshing.
Static RAM (SRAM): Example: flip-flop logic gates. Applying power is enough (no need for refreshing).
The dynamic RAM cell is simpler and hence smaller than the static RAM cell; dynamic RAMs are therefore denser and less expensive, but they require supporting refresh circuitry. Static RAMs are faster than dynamic RAMs.
ROM: The data is wired in at the factory. It can never be altered.
PROM: Programmable ROM. It can be programmed only once, after its fabrication, and requires a special device to program.
EPROM: Erasable Programmable ROM. It can be programmed multiple times. The whole capacity must be erased by ultraviolet radiation before a new programming activity; it cannot be partially reprogrammed.
EEPROM: Electrically Erasable Programmable ROM. Erased and programmed electrically, and can be partially reprogrammed. A write operation takes considerably longer than a read operation.
Each more functional ROM is more expensive to build and has a smaller capacity than less functional ROMs.

Organization
The basic element of semiconductor memory is the memory cell. All semiconductor memory cells have certain properties:
· They have two stable states that represent binary 0 and 1.
· They are capable of being written into (at least once), to set the state.
· They are capable of being read to sense the state.

Individual cells can be selected for reading and writing operations. The cell has three functional terminals, which are shown in figure 9.1. The select terminal selects a cell for a read or write operation, the control terminal indicates read or write, and the third terminal is used to write into the cell, that is, to set the state of the cell to 0 or 1; for a read operation the same terminal is used to output the state of the cell.

   5. Explain various replacement algorithms.

Ans:- Replacement algorithms:- For any set-associative mapping a replacement algorithm is needed. The most common algorithms are discussed here. When a new block is to be brought into the cache and all the positions that it may occupy are full, the cache controller must decide which of the old blocks to overwrite. Because programs usually stay in localized areas for a reasonable period of time, there is a high probability that blocks that have been referenced recently will be referenced again soon. Therefore, when a block is to be overwritten, the block that has not been referenced for the longest time is overwritten. This block is called the least recently used (LRU) block, and the technique is called the LRU replacement algorithm. To use the LRU algorithm, the cache controller must track the LRU block as computation proceeds.

There are several replacement algorithms that require less overhead than the LRU method. One is to remove the oldest block from a full set when a new block must be brought in; this method is referred to as FIFO. In this technique no updating is needed when a hit occurs. However, because the algorithm does not consider the
recent patterns of access to blocks in the cache, it is not as effective as the LRU approach in choosing the best block to remove. Another method, called least frequently used (LFU), replaces the block in the set that has experienced the fewest references; it is implemented by associating a counter with each slot. Yet another, the simplest algorithm, called random replacement, is to choose the block to be overwritten at random.
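A minimal sketch (my own, not from the text) of the LRU and FIFO policies for one set of a set-associative cache; each policy decides which resident block to overwrite when the set is full:

    from collections import OrderedDict, deque

    def simulate_lru(capacity, references):
        # On a hit the block becomes most recently used; on a miss with a
        # full set, the least recently used block is evicted.
        set_ = OrderedDict()
        for block in references:
            if block in set_:
                set_.move_to_end(block)
            else:
                if len(set_) == capacity:
                    set_.popitem(last=False)       # evict the LRU block
                set_[block] = True
        return list(set_)

    def simulate_fifo(capacity, references):
        # The oldest resident block is evicted; hits need no bookkeeping.
        set_ = deque()
        for block in references:
            if block not in set_:
                if len(set_) == capacity:
                    set_.popleft()                  # evict the oldest block
                set_.append(block)
        return list(set_)

    refs = ['A', 'B', 'C', 'A', 'D']                # 3-way set, 5 block references
    print(simulate_lru(3, refs))                    # ['C', 'A', 'D']  (B evicted)
    print(simulate_fifo(3, refs))                   # ['B', 'C', 'D']  (A evicted)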
   6. Discuss the different categories of instructions.

Ans:- In most modern CPUs, the first byte of an instruction contains the opcode, sometimes including a register reference in some of the instructions; the operand references are in the following bytes. The different categories of instructions are given below.

Example: High-level language statement: X = X + Y
If we assume a simple set of machine instructions, this operation could be accomplished with three instructions (assume X is stored in memory location 624, and Y in memory location 625):
1. Load a register with the contents of memory location 624.
2. Add the contents of memory location 625 to the register.
3. Store the contents of the register in memory location 624.
As seen, a simple "C" (or BASIC) statement may require 3 machine instructions. As we have seen before, the instructions fall into one of the following four categories:
· Data processing: Arithmetic and logic instructions.
· Data storage: Memory instructions.
· Data movement: I/O instructions.
· Control: Test and branch instructions.

Number of Addresses:- Virtually all arithmetic and logic operations are either unary (one operand) or binary (two operands). The result of an operation must be stored, suggesting a third address. Finally, after the completion of an instruction, the next instruction must be fetched, and its address is needed. This line of reasoning suggests that an instruction could be required to contain four address references: two operands, one result, and the address of the next instruction. In practice, the address of the next instruction is handled by the Program Counter (PC); therefore most instructions have one, two or three operand addresses. Three-address instruction formats are not common, because they require a relatively long instruction format to hold three address references.
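As an illustration of how the number of addresses per instruction changes the code for X = X + Y, here is a small sketch (hypothetical mnemonics and memory values of my own; the addresses 624 and 625 are the ones assumed in the example above):

    # A toy simulation of the three-instruction, one-address (accumulator)
    # sequence for X = X + Y described above.
    memory = {624: 7, 625: 5}          # X = 7, Y = 5

    acc = memory[624]                  # 1. LOAD  624  (load X into the accumulator)
    acc = acc + memory[625]            # 2. ADD   625  (add Y to the accumulator)
    memory[624] = acc                  # 3. STORE 624  (store the result back into X)

    print(memory[624])                 # 12

    # With a three-address format the same work is a single instruction,
    # e.g. ADD 624, 624, 625 (result, operand1, operand2), at the cost of a
    # longer instruction word.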
   7. Explain various operations of ALU.

Ans:- The ALU is the part of the CPU that actually performs arithmetic and logical operations on data. All of the other elements of the computer system – control unit, registers, memory, I/O – are there mainly to bring data into the ALU for it to process and then take the results back out.

ALU Inputs and Outputs
The inputs and outputs of the ALU are shown in figure 7.1. The inputs to the ALU are the control signals generated by the control unit of the CPU and the CPU registers holding the operands. The outputs are a register called the status word or flag register, which reflects the result, and the CPU registers where the result can be stored. Thus data are presented to the ALU in registers, and the results of an operation are also stored in registers; these registers are connected to the ALU by signal paths. The ALU does not interact directly with memory or other parts of the system (e.g. I/O modules); it interacts directly only with registers. An ALU, like all other electronic components of a computer, is based on the use of simple digital devices that store binary digits and perform Boolean logic operations. The control unit is responsible for moving data to memory or I/O modules, and it is the control unit that signals all the operations that happen in the CPU. The operations, functions and implementation of the control unit will be discussed in the tenth unit.

In this unit we concentrate on the ALU. An important use of logic circuits is computing various mathematical operations such as addition, multiplication, trigonometric operations, etc. Hence we discuss the arithmetic involved in using the ALU. First, before discussing computer arithmetic, we must have a way of representing numbers as binary data.
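A compact sketch (my own, not from the text) of the register-to-register view described above: operands come from registers, a control signal selects the operation, and the result plus a few status flags (zero, sign, carry) go back to registers.

    def alu(op, a, b, width=8):
        # Perform the selected operation on two register operands and report
        # the result together with zero, sign and carry flags.
        mask = (1 << width) - 1
        if op == 'ADD':
            raw = a + b
        elif op == 'SUB':
            raw = a + ((~b + 1) & mask)     # subtraction via two's complement
        elif op == 'AND':
            raw = a & b
        elif op == 'OR':
            raw = a | b
        else:
            raise ValueError('unknown operation')
        result = raw & mask
        flags = {
            'zero': result == 0,
            'sign': bool(result >> (width - 1)),   # most significant bit
            'carry': bool(raw >> width),           # carry out of the top bit
        }
        return result, flags

    print(alu('ADD', 200, 120))   # (64, {'zero': False, 'sign': False, 'carry': True})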
   8. Discuss the different formats of floating point numbers.

Ans:- Floating Point Numbers:- Instead of using the obvious representation of rational numbers presented in the previous section, most computers use a different representation of a subset of the rational numbers. We call these numbers floating-point numbers. Floating-point numbers use inexact arithmetic, and in return require only a fixed-size representation. For many computations (so-called scientific computations, as if other computations weren't scientific) such a representation has the great advantage that it is fast, while at the same time usually giving adequate precision. There are some (sometimes spectacular) exceptions to the "adequate precision" statement, though. As a result, an entire discipline of applied mathematics, called numerical analysis, has been created for the purpose of analyzing how algorithms behave with respect to maintaining adequate precision, and of inventing new algorithms with better properties in this respect.

The basic idea behind floating-point numbers is to represent a number as a mantissa and an exponent, each with a fixed number of bits of precision. If we denote the mantissa by m and the exponent by e, then the number represented is m × 2^e. Again, a number can have several representations. To obtain a canonical form, we simply add the rule that m must be greater than or equal to 1/2 and strictly less than 1. If we write such a mantissa in binal (analogous to decimal) form, we always get a number that starts with 0.1; this initial information therefore does not have to be represented, and we represent only the remaining "binals".

The reason floating-point representations work well for so-called scientific applications is that we more often need to multiply or divide two numbers. Multiplication of two floating-point numbers is easy: it suffices to multiply the mantissas and add the exponents. The resulting mantissa might be smaller than 1/2 (in fact, it can be as small as 1/4), in which case the result needs to be canonicalized. We do this by shifting the mantissa left by one position and subtracting one from the exponent. Division is only slightly more complicated. Notice that the imprecision in the result of a multiplication or a division is only due to the imprecision in the original operands; no additional imprecision is introduced by the operation itself (except possibly 1 unit in the least significant digit).

Floating-point addition and subtraction do not have this property. To add two floating-point numbers, the one with the smaller exponent must first have its mantissa shifted right by n steps, where n is the difference of the exponents. If n is greater than the number of bits in the representation of the mantissa, the second number will be treated as 0 as far as the addition is concerned. The situation is even worse for subtraction (or addition of one positive and one negative number): if the numbers have roughly the same absolute value, the result of the operation is roughly zero, and the resulting representation may have no correct significant digits.

The two's complement representation mentioned above is mostly useful for addition and subtraction; it only complicates things for multiplication and division, for which it is better to use a sign + absolute value representation. Since multiplication and division are more common with floating-point numbers, and since they result in multiplication and division of the mantissas, it is more advantageous to represent the mantissa as sign + absolute value. The exponents are added, so it is more common to use two's complement (or some related representation) for the exponent.

Usually, computers manipulate data in chunks of 8, 16, 32, 64, or 128 bits. It is therefore useful to fit a single floating-point number, with both mantissa and exponent, into such a chunk. In such a chunk, we need room for the sign (1 bit), the mantissa, and the exponent. While there are many different ways of dividing the remaining bits between the mantissa and the exponent, in practice most computers now use the IEEE standard, which mandates the formats shown in the figure.
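The mantissa-and-exponent idea, with the canonical rule 1/2 ≤ m < 1, can be illustrated with Python's math.frexp, which happens to use exactly that normalization (the example values are my own):

    import math

    # Decompose a value into mantissa m and exponent e with value = m * 2**e
    # and 0.5 <= |m| < 1, matching the canonical form described above.
    m, e = math.frexp(10.0)
    print(m, e)                      # 0.625 4     (10.0 = 0.625 * 2**4)

    # Multiplication: multiply mantissas, add exponents, then renormalize
    # if the product of the mantissas drops below 1/2.
    m1, e1 = math.frexp(3.0)         # 0.75, 2
    m2, e2 = math.frexp(5.0)         # 0.625, 3
    m, e = m1 * m2, e1 + e2          # 0.46875, 5  -> below 1/2, so renormalize
    if m < 0.5:
        m, e = m * 2, e - 1          # 0.9375, 4
    print(m * 2 ** e)                # 15.0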
   9. Explain the characteristics of memory systems.

Ans:- Characteristics of Memory Systems:- Memory systems are classified according to their key characteristics. The most important are listed below.

Location
Memory is classified according to its location as:
· CPU: The CPU requires its own local memory in the form of registers, and the control unit also requires fast-accessible local memory. We have already studied this in detail in earlier discussions.
· Internal (main): It is often equated with main memory, though there are other forms of internal memory. We will discuss internal memory in the coming sections of this unit.
· External (secondary): It consists of peripheral storage devices such as hard disks, magnetic disks, magnetic tapes, CDs, etc.

Capacity
Capacity is one of the important aspects of memory.
Word size: The word is the natural unit of organization of memory. The size of the word is typically equal to the number of bits used to represent a number and to the instruction length, though there are many exceptions. Common word lengths are 8, 16 and 32 bits.
Number of words: The addressable unit is the word in many systems. However, external memory capacity is generally expressed in terms of bytes.

Unit of Transfer
· Word: For internal memory, the unit of transfer is equal to the number of data lines into and out of the memory module. This need not be equal to a word or to the addressable unit.
· Block: For external memory, data are often transferred in much larger units than a word; these are referred to as blocks.

Access Method
· Sequential: Tape units have sequential access. Data are generally stored in units called records. Data is accessed sequentially; records may be passed (or rejected) until the record being searched for is found. The access time to a given record is highly variable.
· Direct: Individual blocks or records have a unique address based on physical location. A block may contain a group of data. Access is accomplished by direct addressing to reach the general vicinity, plus sequential searching, counting or waiting to reach the final location. Disk units have direct access.
· Random: Each addressable location in memory has a unique, physically wired-in addressing mechanism. The time to access a given location is independent of the sequence of prior accesses and is constant. Any location can be selected at random and directly addressed and accessed. Main memory and some cache systems are random access.
· Associative: This is a random-access type of memory that enables one to compare desired bit locations within a word for a specified match, and to do this for all words simultaneously. Thus, a word is retrieved based on a portion of its contents rather than its address. Some cache memories may employ associative access.

Performance
· Access time: For random-access memory, this is the time it takes to perform a read or write operation, that is, the time from the instant an address is presented to the memory to the instant data have been stored or made available for use. For non-random-access memory, access time is the time it takes to position the read-write mechanism at the desired location.
· Cycle time: Applies to random-access memory. It consists of the access time plus any additional time required before a second access can commence.
· Transfer rate: This is the rate at which data can be transferred into or out of a memory unit. For random-access memory, it is equal to 1/(cycle time). For non-random-access memory, the following relationship holds:

Tn = Ta + N/R

where Tn = average time to read or write N bits, Ta = average access time, N = number of bits, and R = transfer rate in bits per second (bps).
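A quick numerical check of the relation Tn = Ta + N/R (the device figures below are made-up illustrative values, not from the text):

    def transfer_time(access_time_s, n_bits, rate_bps):
        # Tn = Ta + N / R for a non-random-access memory.
        return access_time_s + n_bits / rate_bps

    # E.g. a device with 10 ms average access time and a 40 Mbit/s transfer rate
    # moving a 32 KB (262144-bit) block:
    print(transfer_time(0.010, 32 * 1024 * 8, 40e6))   # about 0.0166 s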
Physical Type
· Semiconductor: Main memory, cache; RAM and ROM.
· Magnetic: Magnetic disks (hard disks), magnetic tape units.
· Optical: CD-ROM, CD-RW.
· Magneto-Optical: The recording technology is fundamentally magnetic; however, an optical laser is used, and the read operation is purely optical.

Physical Characteristics
· Volatile/Non-volatile: In a volatile memory, information decays naturally or is lost when electrical power is switched off. In a non-volatile memory, information once recorded remains without deterioration until deliberately changed; no electrical power is needed to retain the information. Magnetic-surface memories are non-volatile. Semiconductor memories may be either volatile or non-volatile.
· Erasable/Non-erasable: Non-erasable memory cannot be altered (except by destroying the storage unit). ROMs are non-erasable.

Memory Hierarchy
Design constraints: How much? How fast? How expensive?
· Faster access time, greater cost per bit.
· Greater capacity, smaller cost per bit.
· Greater capacity, slower access time.

   10. Discuss the physical characteristics of DISK.

Ans:- External Memory – Magnetic Disk:- A disk is a circular platter constructed of metal, or of plastic coated with a magnetic material. Data are recorded on and later retrieved from the disk via a conducting coil called the head. During a read or write operation, the head is stationary while the platter rotates beneath it. Writing is achieved by producing a magnetic field which records a magnetic pattern on the magnetic surface.

Data Organization and Formatting:- The figure depicts the data layout of a disk. The head is capable of reading or writing only the portion of the platter rotating beneath it. This gives rise to the organization of data on the platter in a concentric set of rings called tracks. Each track is the same width as the head, and adjacent tracks are separated by gaps that minimize errors due to misalignment of the head. Data is transferred to and from the disk in blocks, and a block is smaller than the capacity of a track. Data is stored in block-sized regions, each an angular part of a track, referred to as sectors. Typically there are 10-100 sectors per track, and these may be of either fixed or variable length.
Physical Characteristics
Head motion: Fixed-head disk (one head per track) or movable-head disk (one head per surface).
Disk portability: Non-removable disk vs. removable disk.
Sides: Double-sided vs. single-sided.
Platters: Single-platter vs. multiple-platter disks.
Head mechanism: Contact (floppy), fixed gap, or aerodynamic gap (Winchester, i.e. hard disk).

Disk Performance Parameters
1. Seek time: The time required to move the disk arm (head) to the required track. It can be estimated as

Ts = m × n + s

where Ts = estimated seek time, n = number of tracks traversed, m = a constant that depends on the disk drive, and s = startup time.

2. Rotational delay: The time required to rotate the disk until the wanted sector is beneath the head; on average it is 1/(2r).

3. Transfer time: T = b / (r N), where T = transfer time, b = number of bytes to be transferred, N = number of bytes on a track, and r = rotation speed in revolutions per second.

4. Access time: The total average access time is Ta = Ts + 1/(2r) + b/(r N), where Ts = average seek time.
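Putting the four parameters together, a small sketch (illustrative drive figures of my own, not from the text) that evaluates Ta = Ts + 1/(2r) + b/(r N):

    def disk_access_time(m, n, s, r, b, N):
        # Ts = m*n + s            (estimated seek time)
        # rotational delay = 1/(2r)
        # transfer time   = b/(r*N)
        seek = m * n + s
        rotational = 1 / (2 * r)
        transfer = b / (r * N)
        return seek + rotational + transfer

    # Example: 0.1 ms per track over 100 tracks, 2 ms startup, 7200 rpm (120 rev/s),
    # reading 4096 bytes from a track that holds 512000 bytes.
    print(disk_access_time(m=0.0001, n=100, s=0.002, r=120, b=4096, N=512000))
    # about 0.0163 s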
RAID
1. RAID is a set of physical disk drives viewed by the operating system as a single logical drive.
2. Data are distributed across the physical drives of the array.
3. Redundant disk capacity is used to store parity information, which guarantees data recoverability in case of a disk failure.

Optical memory and magnetic tape are the other two types of external memory.