A TRAINING REPORT
ON
EMBEDDED SYSTEMS
Submitted by
KULDEEP KAUSHIK
Under the Supervision of
PRAKUL RAJVANSHI
EMBEDDED CONSULTANT
(DUCAT)
in partial fulfillment for the award of the degree of
BACHELOR OF TECHNOLOGY
IN
ELECTRONICS & COMMUNICATION ENGINEERING
MANAV BHARTI UNIVERSITY
JUNE, 2013
Acknowledgement
I would like to express my sincere gratitude to my training supervisor, Prakul Rajvanshi,
for giving me the opportunity to work on this topic. It would never have been possible for me to
take this training to this level without his innovative ideas and his relentless support and
encouragement.
Kuldeep Kaushik
Modules Covered in Training:
Chapter 1: C Programming language
Chapter 2: Introduction to microcontrollers and the 8051 microcontroller
Chapter 3: Linux Internals.
Chapter 4: Project (Automatic Cab Service).
CHAPTER 1
C Language Programming
Introduction
C language is widely used in the development of operating systems. An
Operating System (OS) is software (collection of programs) that controls the various functions of
a computer. It also makes other programs on your computer work. For example, you cannot
work with a word processor program, such as Microsoft Word, if there is no operating system
installed on your computer. Windows, Unix, Linux, Solaris, and MacOS are some of the popular
operating systems.
Applications
C’s ability to communicate directly with hardware makes it a powerful choice for system
programmers. In fact, popular operating systems such as Unix and Linux are written largely in
C. Additionally, even compilers and interpreters for other languages such as FORTRAN, Pascal,
and BASIC are written in C. However, C’s scope is not just limited to developing system
programs. It is also used to develop any kind of application, including complex business ones.
The following is a partial list of areas where C language is used:
• Embedded Systems
• Systems Programming
• Artificial Intelligence
• Industrial Automation
• Computer Graphics
• Space Research
• Image Processing
• Game Programming
What kind of language is C?
C is a structured programming language, which means that it allows you to develop
programs using well-defined control structures (you will learn about control structures in the
chapters to come) and provides modularity (breaking a task into multiple sub-tasks that are
simple enough to understand and to reuse). C is often called a middle-level language because it
combines the best elements of low-level (machine) languages with those of high-level languages.
Control Flow
In computer science, control flow (or alternatively, flow of control) refers to the order in
which the individual statements, instructions, or function calls of an imperative or a declarative
program are executed or evaluated.
Within an imperative programming language, a control flow statement is a statement whose
execution results in a choice being made as to which of two or more paths should be followed.
For non-strict functional languages, functions and language constructs exist to achieve the same
result, but they are not necessarily called control flow statements.
The kinds of control flow statements supported by different languages vary, but can be
categorized by their effect:
• continuation at a different statement (unconditional branch or jump),
• executing a set of statements only if some condition is met (choice - i.e. conditional
branch),
• executing a set of statements zero or more times, until some condition is met (i.e. loop -
the same as conditional branch),
• executing a set of distant statements, after which the flow of control usually returns
(subroutines, co routines, and continuations),
• stopping the program, preventing any further execution (unconditional halt).
Conditional & decision statement
Conditional statements, conditional expressions and conditional constructs are features of
a programming language which perform different computations or actions depending on whether
a programmer-specified Boolean condition evaluates to true or false. Apart from the case
of branch predication, this is always achieved by selectively altering the control flow based on
some condition.
In imperative programming languages, the term "conditional statement" is usually used, whereas
in functional programming, the terms "conditional expression" or "conditional construct" are
preferred, because these terms all have distinct meanings.
Although dynamic dispatch is not usually classified as a conditional construct, it is another way
to select between alternatives at runtime.
If-else
The if-else construct (sometimes called if-then-else) is common across many
programming languages. Although the syntax varies quite a bit from language to language, the
basic structure (in pseudo-code form) is: if a condition holds, execute one block of statements;
otherwise, execute an alternative block.
Else if
By using Else If, it is possible to combine several conditions. Only the statements
following the first condition that is found to be true will be executed. All other statements will be
skipped. The statements of the final Else will be executed if none of the conditions are true.
If expressions
Many languages support if expressions, which are similar to if statements, but return a
value as a result. Thus, they are true expressions (which evaluate to a value), not statements
(which just perform an action).
In C and C-like languages conditional expressions take the form of a ternary operator called the
conditional expression operator, ?:, which follows this template:
(condition)?(evaluate if condition was true):(evaluate if condition was false)
Case and switch statements
Switch statements (in some languages, case statements) compare a given value with
specified constants and take action according to the first constant to match.
Functions
This section covers function definition, types of functions (functions with no arguments
and no return values, functions with arguments but no return values, and functions with
arguments and return values), the return value data type of a function, and void functions.
A function is a complete and self-contained unit of code that is used (or invoked) by the main
program or by other subprograms. A subprogram receives values called arguments from a calling
program, performs calculations and returns the results to the calling program.
There are many advantages in using functions in a program:
1. It facilitates top-down modular programming. In this programming style, the high-level logic
of the overall problem is solved first, while the details of each lower-level function are addressed
later.
2. The length of the source program can be reduced by using functions at appropriate places.
This factor is critical with microcomputers, where memory space is limited.
3. It is easy to locate and isolate a faulty function for further investigation.
4. A function may be used by many other programs; this means that a C programmer can build
on what others have already done, instead of starting over from scratch.
5. A function can be used to avoid rewriting the same sequence of code at two or more locations
in a program. This is especially useful if the code involved is long or complicated.
6. A large percentage of programming is done by teams. If the program is divided into
subprograms, each subprogram can be written by one or two team members rather than having
the whole team work on one complex program.
Types of functions
A function may belong to any one of the following categories:
1. Functions with no arguments and no return values.
2. Functions with arguments and no return values.
3. Functions with arguments and return values.
MACROS
Preprocessing expands macros in all lines that are not preprocessor directives (lines that
do not have a # as the first non-white-space character) and in parts of some directives that are not
skipped as part of a conditional compilation. "Conditional compilation" directives allow you to
suppress compilation of parts of a source file by testing a constant expression or identifier to
determine which text blocks are passed on to the compiler and which text blocks are removed
from the source file during preprocessing.
The #define directive is typically used to associate meaningful identifiers with constants,
keywords, and commonly used statements or expressions. Identifiers that represent constants are
sometimes called "symbolic constants" or "manifest constants." Identifiers that represent
statements or expressions are called "macros." In this preprocessor documentation, only the term
"macro" is used.
When the name of the macro is recognized in the program source text or in the arguments of
certain other preprocessor commands, it is treated as a call to that macro. The macro name is
replaced by a copy of the macro body. If the macro accepts arguments, the actual arguments
following the macro name are substituted for formal parameters in the macro body. The process
of replacing a macro call with the processed copy of the body is called "expansion" of the macro
call.
Pointers
In C language, a pointer is a variable that points to or references a memory location in
which data is stored. Each memory cell in the computer has an address which can be used to
access its location. A pointer variable points to a memory location. By making use of pointer,
we can access and change the contents of the memory location.
Pointer declaration
A pointer variable contains the memory location of another variable. You begin the
declaration of a pointer by specifying the type of data stored in the location identified by the
pointer. The asterisk tells the compiler that you are creating a pointer variable. Finally you give
the name of the pointer variable. The pointer declaration syntax is as shown below.
type *variable_name;
Example:
int *ptr;
float *string;
Address operator
Once we declare a pointer variable, we point the variable to another variable. We can do
this by assigning the address of the variable to the pointer as in the following example:
ptr = #
The above assignment places the memory address of the num variable into the pointer variable
ptr. If num is stored at memory address 21260, then the pointer variable ptr will contain the
value 21260.
Pointers and function
Pointers are widely used in function declarations. Sometimes a complex function can be
represented easily and concisely only with the help of pointers. The usage of pointers in a
function definition may be classified into two groups:
1. Call by value
2. Call by reference
Call by value
When a function is invoked, a link is established between the formal and actual
parameters. Temporary storage is created in which the value of each actual parameter is stored,
and the formal parameter picks up its value from this storage area. This mechanism of data
transfer between actual and formal parameters is referred to as call by value. The formal
parameter acts as a local variable in the called function, and the current value of the
corresponding actual parameter becomes its initial value. The value of the formal parameter may
be changed in the body of the subprogram by assignment or input statements, but this will not
change the value of the actual parameter.
Call by Reference
When we pass addresses to a function, the parameters receiving the addresses should be
pointers. The process of calling a function by using pointers to pass the addresses of variables is
known as call by reference. A function which is called by reference can change the values of the
variables used in the call.
Pointer to arrays
An array is actually very much like a pointer. We can refer to the array's first element as
a[0] or as *a, because a[0] denotes the contents stored at an address and *a dereferences that
same address; the two forms are equivalent. The difference is that a pointer is a variable and can
appear on the left of the assignment operator (it is an lvalue), while an array name is a constant
and cannot appear on the left side of the assignment operator.
Strings are character arrays whose last element is the null character '\0', and pointers to char
arrays can be used to perform a number of string operations.
Pointers and structures
We know that the name of an array stands for the address of its zeroth element; the same
concept applies to names of arrays of structures. Suppose item is an array variable of struct type.
Consider the following declaration:
struct products
{
char name[30];
int manufac;
float net;
} item[2], *ptr;
This statement declares item as an array of two elements, each of type struct products, and ptr
as a pointer to data objects of type struct products. The
assignment ptr = item;
would assign the address of the zeroth element of item to ptr. The members can then be accessed
using the following notation:
ptr->name;
ptr->manufac;
ptr->net;
The symbol -> is called the arrow operator and is made up of a minus sign and a greater-than
sign. Note that ptr->name is simply another way of writing item[0].name. When the pointer is
incremented by one, it is made to point to the next record, i.e. item[1]. A loop over ptr can then
print the values of the members of all the elements of the item array.
Pointers on pointer
While pointers provide enormous power and flexibility to programmers, they may
cause disaster if not properly handled. Consider the following precautions when using pointers
to prevent errors. We should make sure that we know where each pointer is pointing in a
program. Here are some general observations and common errors that might be useful to
remember.
A pointer contains garbage until it is initialized. Since compilers cannot detect uninitialized
or wrongly initialized pointers, the errors may not be known until we execute the program.
Remember that even if we are able to locate a wrong result, it may not provide any evidence for
us to suspect problems in the pointers.
The abundance of C operators is another cause of confusion that leads to errors. For example,
expressions such as
*ptr++, *p[10], (*ptr).member
should be used carefully; a proper understanding of the precedence and associativity rules
is essential.
Structures and Unions
In this tutorial you will learn about C structures and unions: giving values to members,
initializing structures, functions and structures, passing structure elements to functions, passing
entire structures to functions, arrays of structures, structures within structures, and unions.
Arrays are used to store large sets of data and manipulate them, but the disadvantage is that all
the elements stored in an array must be of the same data type. If we need a collection of items of
different data types, this is not possible using an array; in that case we can use a structure.
Structure is a method of packing data of different types. A structure is a convenient method of
handling a group of related data items of different data types.
A structure template does not occupy any memory until it is associated with a structure variable
such as book1. The template is terminated with a semicolon. While the entire declaration is
considered a single statement, each member is declared independently for its name and type in a
separate statement inside the template. A tag name such as lib_books can be used to declare
structure variables of its data type later in the program.
A structure is usually defined before main along with macro definitions. In such cases
the structure assumes global status and all the functions can access it.
Functions and structures
We can pass structures as arguments to functions. Unlike array names, however, which
always point to the start of the array, structure names are not pointers. As a result, when we
change a structure parameter inside a function, we do not affect its corresponding argument.
Arrays of structure
It is possible to define an array of structures, for example if we are maintaining information
about all the students in a college and 100 students are studying there; we then need an array
rather than single variables.
An array of structures can be assigned initial values just as any other array can. Remember that
each element is a structure that must be assigned corresponding initial values.
Union
Unions, like structures, contain members whose individual data types may differ from one
another. However, the members that compose a union all share the same storage area within the
computer's memory, whereas each member within a structure is assigned its own unique storage
area. Thus unions are used to conserve memory. They are useful for applications involving
multiple members where values need not be assigned to all the members at any one time. Like
structures, a union can be declared using the keyword union. A declaration of a variable of a
union type with three members of different data types lets us use only one of them at a time,
because only one storage location is allocated for the union variable irrespective of its size. The
compiler allocates a piece of storage that is large enough to hold the largest member. To access
a union member we can use the same syntax that we use to access structure members. During
accessing, we should make sure that we are accessing the member whose value is currently
stored.
File handling
In any programming language it is vital to learn file handling techniques. Many
applications will at some point involve accessing folders and files on the hard drive. In C, a
stream is associated with a file. Special functions have been designed for handling file
operations. Some of them will be discussed in this chapter. The header file stdio.h is required for
using these functions.
Opening a file
Before we perform any operations on a file, we need to identify the file to the system and
open it. We do this by using a file pointer. The type FILE defined in stdio.h allows us to define a
file pointer. Then you use the function fopen() for opening a file. Once this is done one can read
or write to the file using the fread() or fwrite() functions, respectively. The fclose() function is
used to explicitly close any opened file.
Stack & queue
In this section, we introduce two closely-related data types for manipulating arbitrarily
large collections of objects: the stack and the queue. Each is defined by two basic
operations: insert a new item, and remove an item. When we insert an item, our intent is clear.
But when we remove an item, which one do we choose? The rule used for a queue is to always
remove the item that has been in the collection the most amount of time. This policy is known
as first-in-first-out or FIFO. The rule used for a stack is to always remove the item that has
been in the collection the least amount of time. This policy is known as last-in first-out or
LIFO.
Pushdown stacks.
A pushdown stack (or just a stack) is a collection that is based on the last-in-first-out
(LIFO) policy. When you click a hyperlink, your browser displays the new page (and inserts it
onto a stack). You can keep clicking on hyperlinks to visit new pages. You can always revisit the
previous page by clicking the back button (remove it from a stack). The last-in-first-out policy
offered by a pushdown stack provides just the behavior that you expect.
By tradition, we name the stack insert method push() and the stack remove operation pop(). We
also include a method to test whether the stack is empty. The API thus consists of three
operations: push() to insert an item, pop() to remove the most recently inserted item, and a test
for emptiness.
Linked lists
For classes such as stacks that implement collections of objects, an important objective is
to ensure that the amount of space used is always proportional to the number of items in
the collection. Now we consider the use of a fundamental data structure known as a linked
list that can provide implementations of collections (and, in particular, stacks) that achieves this
important objective.
A linked list is a recursive data structure defined as follows: a linked list is either empty
(null) or a reference to a node having a reference to a linked list. The node in this
definition is an abstract entity that might hold any kind of data in addition to the node reference
that characterizes its role in building linked lists. With object-oriented programming,
implementing linked lists is not difficult. We start with a simple example of a class for the node
abstraction:
A Node has two instance variables: a String and a Node. The String is a placeholder in this
example for any data that we might want to structure with a linked list (we can use any set of
instance variables); the instance variable of type Node characterizes the linked nature of the data
structure. Now, from the recursive definition, we can represent a linked list by a variable of
type Node just by ensuring that its value is either null or a reference to a Node whose next field
is a reference to a linked list.
Queue
A queue supports the insert and remove operations using a FIFO discipline. By
convention, we name the queue insert operation enqueue and the remove operation dequeue.
Everyday analogies include cars entering the Lincoln Tunnel or a student's task list: items are
processed in the same order in which they arrive.
• Linked list implementation. Program Queue.java implements a FIFO queue of strings
using a linked list. As with Stack, we maintain a reference first to the least-recently
added Node on the queue. For efficiency, we also maintain a reference last to the most-
recently added Node on the queue.
• Array implementation. Similar to the array implementation of a stack, but a little trickier
since we need to wrap around. Program DoublingQueue.java implements the queue interface.
The array is dynamically resized using repeated doubling.
Trees
Tree structures support various basic dynamic-set operations, including Search,
Predecessor, Successor, Minimum, Maximum, Insert, and Delete, in time proportional to the
height of the tree. Ideally, a tree will be balanced and the height will be log n where n is the
number of nodes in the tree. To ensure that the height of the tree is as small as possible and
therefore provide the best running time, a balanced tree structure like a red-black tree, AVL tree,
or b-tree must be used. When working with large sets of data, it is often not possible or desirable
to maintain the entire structure in primary storage (RAM). Instead, a relatively small portion of
the data structure is maintained in primary storage, and additional data is read from secondary
storage as needed. Unfortunately, a magnetic disk, the most common form of secondary storage,
is significantly slower than random access memory (RAM). In fact, the system often spends
more time retrieving data than actually processing data.
B-trees are balanced trees that are optimized for situations when part or all of the tree must be
maintained in secondary storage such as a magnetic disk. Since disk accesses are expensive (time
consuming) operations, a b-tree tries to minimize the number of disk accesses. For example, a b-
tree with a height of 2 and a branching factor of 1001 can store over one billion keys but requires
at most two disk accesses to search for any node.
The Structure of B-Trees
Unlike a binary-tree, each node of a b-tree may have a variable number of keys and
children. The keys are stored in non-decreasing order. Each key has an associated child that is
the root of a sub tree containing all nodes with keys less than or equal to the key but greater than
the preceding key. A node also has an additional rightmost child that is the root for a sub tree
containing all keys greater than any keys in the node.
A b-tree has a minimum number of allowable children for each node known as the minimization
factor. If t is this minimization factor, every node must have at least t - 1 keys. Under certain
circumstances, the root node is allowed to violate this property by having fewer than t - 1 keys.
Every node may have at most 2t - 1 keys or, equivalently, 2t children.
Since each node tends to have a large branching factor (a large number of children), it is
typically necessary to traverse relatively few nodes before locating the desired key. If access to
each node requires a disk access, then a b-tree will minimize the number of disk accesses
required. The minimization factor is usually chosen so that the total size of each node
corresponds to a multiple of the block size of the underlying storage device. This choice
simplifies and optimizes disk access. Consequently, a b-tree is an ideal data structure for
situations where all data cannot reside in primary storage and accesses to secondary storage are
comparatively expensive (or time consuming).
Height of B-Trees
For n greater than or equal to one, the height h of an n-key b-tree with minimum degree t
greater than or equal to 2 satisfies h <= log_t((n + 1) / 2).
CHAPTER 2:
MICROCONTROLLER AND ITS
INTERFACING
Introduction
8051 Architecture:
Block Diagram and Pin Diagram:
Timers, Interrupts & Interrupt Handling
What is an Interrupt?
An interrupt is a notification, communicated to the controller by a hardware
device or by software, on receipt of which the controller momentarily stops and responds to the
interrupt. Whenever an interrupt occurs, the controller completes the execution of the current
instruction and starts the execution of an Interrupt Service Routine (ISR) or Interrupt
Handler. An ISR is a piece of code that tells the processor or controller what to do when the
interrupt occurs. After the execution of the ISR, the controller returns to the instruction it
jumped from (before the interrupt was received).
Why are interrupts needed?
An application built around microcontrollers generally has the following structure. It
takes input from devices like keypad, ADC etc; processes the input using certain algorithm; and
generates an output which is either displayed using devices like seven segment, LCD or used
further to operate other devices like motors etc. In such designs, controllers interact with the
inbuilt devices like timers and other interfaced peripherals like sensors, serial port etc. The
programmer needs to monitor their status regularly like whether the sensor is giving output,
whether a signal has been received or transmitted, whether timer has finished counting, or if an
interfaced device needs service from the controller, and so on. This state of continuous
monitoring is known as polling.
In polling, the microcontroller keeps checking the status of other devices; and while doing so it
does no other operation and consumes all its processing time for monitoring. This problem can
be addressed by using interrupts. In the interrupt method, the controller responds only when an
interrupt occurs. Thus the controller is not required to regularly monitor the
status (flags, signals etc.) of interfaced and inbuilt devices.
Hardware and Software Interrupt
The interrupts in a controller can be either hardware or software. If the interrupts are
generated by the controller’s inbuilt devices, like timer interrupts; or by the interfaced devices,
they are called the hardware interrupts. If the interrupts are generated by a piece of code, they are
termed as software interrupts.
Multiple Interrupts
What would happen if multiple interrupts are received by a microcontroller at the same
instant? In such a case, the controller assigns priorities to the interrupts. Thus the interrupt with
the highest priority is served first. However, the priority of interrupts can be changed by
configuring the appropriate registers in the code.
8051 Interrupts
The 8051 controller has six hardware interrupts of which five are available to the
programmer. These are as follows:
1. RESET Interrupt - This is also known as Power on Reset (POR). When the RESET interrupt
is received, the controller restarts executing code from 0000H location. This is an interrupt which
is not available to or, better to say, need not be available to the programmer.
2. Timer Interrupts - Each Timer is associated with a Timer interrupt. A timer interrupt notifies
the microcontroller that the corresponding Timer has finished counting.
3. External Interrupts - There are two external interrupts EX0 and EX1 to serve external
devices. Both these interrupts are active low. In AT89C51, P3.2 (INT0) and P3.3 (INT1) pins are
available for external interrupts 0 and 1 respectively. An external interrupt notifies the
microcontroller that an external device needs its service.
4. Serial Interrupt - This interrupt is used for serial communication. When enabled, it notifies
the controller whether a byte has been received or transmitted.
How is an interrupt serviced?
Every interrupt is assigned a fixed memory area inside the processor/controller. The
Interrupt Vector Table (IVT) holds the starting address of the memory area assigned to it
(corresponding to every interrupt).
When an interrupt is received, the controller stops after executing the current instruction. It
transfers the content of program counter into stack. It also stores the current status of the
interrupts internally but not on stack. After this, it jumps to the memory location specified
by Interrupt Vector Table (IVT). After that the code written on that memory area gets executed.
This code is known as the Interrupt Service Routine (ISR) or interrupt handler. ISR is a code
written by the programmer to handle or service the interrupt.
Programming Interrupts
While programming interrupts, first thing to do is to specify the microcontroller which
interrupts must be served. This is done by configuring the Interrupt Enable (IE) register which
enables or disables the various available interrupts. The Interrupt Enable register has following
bits to enable/disable the hardware interrupts of the 8051 controller.
To enable any of the interrupts, first the EA bit must be set to 1. After that the bits corresponding
to the desired interrupts are enabled. ET0, ET1 and ET2 bits are used to enable the Timer
Interrupts 0, 1 and 2, respectively. In AT89C51, there are only two timers, so ET2 is not used.
EX0 and EX1 are used to enable the external interrupts 0 and 1. ES is used for serial interrupt.
EA bit acts as a lock bit. If any of the interrupt bits are enabled but EA bit is not set, the interrupt
will not function. By default all the interrupts are in disabled mode.
Setting the bits of IE register is necessary and sufficient to enable the interrupts. Next step is to
specify the controller what to do when an interrupt occurs. This is done by writing a subroutine
or function for the interrupt. This is the ISR and gets automatically called when an interrupt
occurs. It is not required to call the Interrupt Subroutine explicitly in the code.
1. Programming Timer Interrupts
The timer interrupt flags TF0 and TF1 are set when Timers 0 and 1 overflow, respectively.
(Please refer to 8051 Timers for details on Timer registers and modes.) The interrupt programming for
timers involves following steps:
1. Configure TMOD register to select timer(s) and its/their mode.
2. Load initial values in THx and TLx for mode 0 and 1; or in THx only for mode 2.
3. Enable Timer Interrupt by configuring bits of IE register.
4. Start timer by setting timer run bit TRx.
5. Write the subroutine for the Timer Interrupt. The interrupt number is 1 for Timer 0 and 3 for
Timer 1.
Note that it is not required to clear the timer flag TFx.
6. To stop the timer, clear TRx at the end of the subroutine. Otherwise it will restart from
0000H in modes 0 and 1, or from the initial values in mode 2.
7. If the timer has to run again and again, the initial values must be reloaded within the
routine itself (in modes 0 and 1). Otherwise, after one cycle, the timer will start counting from
0000H.
2. Programming External Interrupts
External interrupts are the interrupts received from the (external) devices interfaced
with the microcontroller. They arrive at the INTx pins of the controller and can be level
triggered or edge triggered. In level triggering, the interrupt is generated by a low level at the
INTx pin, while in edge triggering it is generated by a high-to-low transition at the INTx pin.
Edge or level triggering is selected by the TCON register, whose low nibble holds the relevant
bits (IE1, IT1, IE0, IT0).
Setting the IT0 and IT1 bits makes external interrupts 0 and 1 edge triggered, respectively. By
default these bits are cleared, so the external interrupts are level triggered.
Note: For a level-triggered interrupt, the INTx pin must remain low until the start of the ISR and
should return high before the end of the ISR. If the low at the INTx pin goes high before the ISR
starts, the interrupt will not be generated; if the pin is still low after the ISR ends, the interrupt
will be generated again. This is why a level-trigger (low) pulse at the INTx pin should be about
four machine cycles long, neither longer nor shorter.
Following are the steps for using an external interrupt:
1. Enable the external interrupt by configuring the IE register.
2. Write the routine for the external interrupt. The interrupt number is 0 for EX0 and 2 for
EX1, respectively.
3. Programming Serial Interrupt
To use the serial interrupt, the ES bit is set along with the EA bit. Whenever one byte of
data is sent or received, the serial interrupt is generated and the TI or RI flag goes high. Here, the
TI or RI flag must be cleared explicitly in the interrupt routine (written for the Serial
Interrupt).
The programming of the Serial Interrupt involves the following steps:
1. Enable the Serial Interrupt (configure the IE register).
2. Configure SCON register.
3. Write routine or function for the Serial Interrupt. The interrupt number is 4.
4. Clear the RI or TI flag within the routine.
Programming Multiple Interrupts
Multiple interrupts can be enabled by setting more than one interrupt bit in the IE register.
If two or more interrupts occur at the same time, they are serviced in order of their priority. By
default the interrupts have the following priorities in descending order: External Interrupt 0,
Timer 0, External Interrupt 1, Timer 1, Serial.
The priority of the interrupts can be changed by programming the bits of the Interrupt Priority
(IP) register. The two MSBs of the IP register are reserved; the remaining bits (PT2, PS, PT1,
PX1, PT0, PX0) are the priority bits for the available interrupts. Setting a particular bit in the IP
register gives the corresponding interrupt the higher priority.
For example, IP = 0x08; raises the priority of Timer 1, so the interrupt priority order becomes
(in descending order): Timer 1, External Interrupt 0, Timer 0, External Interrupt 1, Serial.
More than one bit in the IP register can also be set. In that case, the higher-priority interrupts
follow the same relative sequence as in the default case.
For example, IP = 0x0A; raises the priorities of Timer 0 and Timer 1, so the interrupt priority
order becomes (in descending order): Timer 0, Timer 1, External Interrupt 0, External
Interrupt 1, Serial.
Serial Communication Protocols
Distributed systems require protocols for communication between microcontrollers.
Controller Area Networks (CAN) and Serial Peripheral Interfaces (SPI) are two of the most
common such protocols.
The beauty of using multiple processors in a single system is that the timing requirements of one
processor can be divorced from the timing requirements of the other. In a real-time system, this
quality can make the programming a lot easier and reduce the potential for race conditions. The
price you pay is that you then have to get information from one processor to the other.
If you use one fast processor instead of two slow ones, passing information from one part of the
software to another may be as simple as passing parameters to a function or storing the data in a
global location. However, when the pieces of software that need to communicate are located on
different processors, you have to figure out how to bundle the information into a packet and pass
it across some sort of link. In this article, we'll look at two standard protocols, SPI and CAN, that
can be used to communicate between processors, and also at some of the issues that arise in
designing ad hoc protocols for small systems.
Controller Area Network (CAN)
Controller Area Network (CAN) is a multi-drop bus protocol, so it can support many
communicating nodes. The advantages are obvious. The disadvantage of moving to more than
two nodes is that you now require some addressing mechanism to indicate who sent a message,
and who should receive it. The CAN protocol is based on two signals shared by all nodes on the
network. The CAN_High and CAN_Low signals provide a differential signal and allow collision
detection. If both lines go high, two different nodes must be trying to drive two different signals,
and one will then back off and allow the other to continue.
CAN is used in almost every automobile manufactured in Europe. In the U.S., CAN is popular in
factory automation, where the DeviceNet protocol uses CAN as its lower layer.
The biggest difference between CAN and SPI is that the CAN protocol defines packets. In SPI
(and serial interfaces in general), only the transmission of a byte is fully defined. Given a
mechanism for byte transfer, software can provide a packet layer, but no standard size or type
exists for a serial packet. Since packet transfer is standardized for CAN, it's usually implemented
in hardware. Implementing packets, including checksums and backoff-and-retry mechanisms, in
hardware hides a whole family of low-level design issues from the software engineer.
The program can place a packet in a CAN controller's buffer and not worry about interacting
with the CAN hardware until the packet is sent or an entire packet has been received. The same
level of control could be built into a serial controller, but unless it was standardized, that
controller could only communicate with peers of the same type.
A CAN packet consists of an identifier of either 11 or 29 bits and up to 8 bytes of data, along
with a few other pieces of housekeeping such as the checksum. The meaning of the identifier is
not defined by the CAN protocol itself, but higher-level protocols describe how the identifier can
be divided into source, destination, priority, and type information. You could also define these
bits yourself if you do not have to share the bus with devices outside your control.
When controlling transmission byte by byte, you usually have to combine a number of bytes to
say anything meaningful, except in cases as trivial as the thermostat example discussed earlier.
However, in eight bytes you can express commands, report on parameter values, or pass
calibration results.
For debugging purposes, communicating from a microcontroller to a PC is straightforward. By
snooping the CAN bus from the PC, you can monitor the communications between the
microcontrollers in the system, or you can imitate one side of the conversation by inserting test
messages.
A product called USBcan from Kvaser provides an interface to the CAN bus through the PC's
USB port. A number of other companies offer similar products, but what I found impressive
about Kvaser was the quality of the software libraries available. The CANlib library provides an
API for building and receiving CAN packets. The company also provides a version of the library
compiled for my favorite PC development environment, Borland C++ Builder, which enabled
me to build a nice GUI that showed all bus activity. The same program can be used for
calibration, inserting text messages, and even downloading a new version of software to the
device.
Each Kvaser product, whether ISA, PCI, PCMCIA or USB-based, has a driver. Once the driver
is installed, the applications built using Kvaser's libraries will work directly with that device. So,
if I develop on a PC with a PCI card, I can still deploy my test software to a field engineer with a
laptop and a PCMCIA card. Since the application I was working on was automotive, it was ideal
to be able to send someone into a vehicle with a laptop. One of my few gripes with the supplied
software is that it only supports the mainstream versions of Windows. Linux drivers would have
been welcome, but Kvaser does not supply them. (Open source drivers are available for some of
the Kvaser ISA boards at the Linux CAN Project homepage.)
One of the most useful drivers from Kvaser is a virtual driver that doesn't require a CAN
hardware interface. This allows one PC application to communicate with other PC applications
running CAN software without any CAN hardware. You can therefore develop and test a PC
program to communicate over the CAN bus without requiring any CAN hardware, as long as you
write another PC test program to listen to whatever the first program is saying. This is useful if
there isn't enough hardware to provide a system to each developer or if the prototype target is not
yet available.
Higher layer protocols
There are a number of higher layer protocols that have been layered on top of the basic
CAN specifications. These include SAE J1939, DeviceNet, and CANopen. The emphasis of
these protocols is to define the meaning of the identifier and to encourage interoperability
between CAN-based solutions from different vendors. Each standard has established a foothold
in a different application domain.
If your system is closed, that is, if all nodes on the bus will be products from your company, then
implementing one of the standard higher level protocols is probably unnecessary. However,
examining these standards may give you ideas for some of the features that you might want to
implement. For example, SAE J1939 includes a connection-oriented mechanism, which is
suitable when transferring blocks of data larger than eight bytes. The standard defines a
handshaking message to set up the connection and then a system of counting segments to ensure
that the receiver will detect any missing packets.
Some higher level protocols define messages for particular application domains, such as a
message that is sent when a car's brakes are engaged. In theory, this means that you can develop
a device that integrates with your in-car electronics. In practice, the exact workings of the engine
management CAN bus on any vehicle are a closely guarded secret. The CAN standards are not a
ticket in; you still need the manufacturer's cooperation.
Introduction to KEIL µVision programming software:
Keil PK51 is a complete software development environment for classic and extended
8051 microcontrollers. Like all Keil tools, it is easy to learn and use. Third-party utilities extend
the functions and capabilities of µVision, and the RTX real-time kernel enables the development
of real-time software.
The Keil 8051 Development Tools are designed to solve the complex problems facing embedded
software developers.
• When starting a new project, simply select the microcontroller you use from the Device
Database and the µVision IDE sets all compiler, assembler, linker, and memory options for
you.
• Numerous example programs are included to help you get started with the most popular
embedded 8051 devices.
• The Keil µVision Debugger accurately simulates on-chip peripherals (I²C, CAN, UART, SPI,
Interrupts, I/O Ports, A/D Converter, D/A Converter, and PWM Modules) of your 8051
device. Simulation helps you understand hardware configurations and avoids time wasted on
setup problems. Additionally, with simulation, you can write and test applications before
target hardware is available.
• When you are ready to begin testing your software application with target hardware, use the
MON51, MON390, MONADI, or FlashMON51 Target Monitors, the ISD51 In-System
Debugger, or the ULINK USB-JTAG Adapter to download and test program code on your
target system.
TopView Simulator:
TopView Simulator gives an excellent simulation environment for the industry's most
popular 8-bit microcontroller family, MCS-51. It gives system designers the required facilities to
start projects from scratch and finish them with ease and confidence.
It is a total simulation solution with many state-of-the-art features, meeting the needs of
designers with different levels of expertise. If you are a beginner, you can learn about
8051-based embedded solutions without any hardware. If you are an experienced designer, you
may find most of the required facilities built into the simulator, enabling you to complete your
next project without waiting for the target hardware.
The features of the simulator are briefly listed here for your reference:
Finished Real-Time Projects
• Project - Channel Sequential Controller with LED Displays
• Project - Channel Sequential Controller with LCD Display
• Project - Programmable Timer with 2X16 LCD Display
Device Selection
A wide range of devices can be selected, including generic 8031 devices and Atmel's
AT89CXX series 8051 microcontrollers.
Program Editing
Powerful editing features for writing your programs at both C and assembly level,
and the facility to call an external compiler/assembler (Keil / SDCC) to process the
input programs.
ClearView
The ClearView facility gives all the internal architectural details in multiple windows.
Information about the program, data memory, registers, peripherals and SFR bits is clearly
presented in many windows to help you understand the program flow easily.
Program Execution
A variety of program execution options, including single-stroke full-speed execution,
single step, step over and breakpoint execution modes, give you total control over the target
program. ClearView updates all the windows with the correct and latest data, which is a
convenient help during debugging. You may find that TopView Simulator turns the most
difficult part of program development, debugging, into a very simple task.
Simulation Facilities
Powerful simulation facilities are incorporated for I/O lines, Interrupt lines, Clocks meant
for Timers / Counters.
Many external interfacing possibilities can be simulated:
• A range of plain point LEDs and seven-segment LED options.
• LCD modules in many configurations.
• Momentary-on keys.
• A variety of keypads up to a 4 x 8 key matrix.
• Toggle switches.
• All modes of the on-chip serial port communication facility.
• I2C components including RTCs and EEPROMs.
• SPI bus based EEPROM devices.
Code Generation Facilities
A powerful and versatile code generation facility enables you to generate exact and
compact assembly code / C source code for many application-oriented interfacing options.
You can simply define your exact needs and get the target assembly code / C source code at the
press of a button, anywhere in your program flow. The code is embedded into your application
program automatically, and you are assured of trouble-free operation of the final code in real
time. Code can be generated for:
• All modes of the serial port.
• Interfacing I2C/SPI Bus devices.
• Range of keypads.
• Many LED/LCD interfacing possibilities.
CHAPTER 3
LINUX INTERNALS
INTRODUCTION
Linux Operating System
Linux is a free open-source operating system based on Unix. Linux was originally created
by Linus Torvalds with the assistance of developers from around the globe. Linux is free to
download, edit and distribute. Linux is a very powerful operating system and it is gradually
becoming popular throughout the world.
Advantages of Linux
Low cost
There is no need to spend time and a huge amount of money to obtain licenses, since
Linux and much of its software come with the GNU General Public License. There is no need to
worry about licensing for the software you use on Linux.
Stability
Linux has high stability compared with other operating systems. There is no need to
reboot a Linux system to maintain performance levels; it rarely freezes up or slows down, and
continuous uptimes of hundreds of days or more are common.
Performance
Linux provides high performance on various networks. It has the ability to handle large
numbers of users simultaneously.
Networking
Linux provides strong support for network functionality; client and server systems can
be easily set up on any computer running Linux. It can perform tasks such as network backups
faster than other operating systems.
Flexibility
Linux is very flexible. Linux can be used for high performance server applications,
desktop applications, and embedded systems. You can install only the needed components for a
particular use. You can also restrict the use of specific computers.
Compatibility
It runs all common Unix software packages and can process all common file formats.
Wider choice
There is a large number of Linux distributions, which gives you a wider choice. Each
organization develops and supports a different distribution. You can pick the one you like best;
the core functions are the same.
Fast and easy installation
Linux distributions come with user-friendly installation.
Better use of hard disk
Linux uses its resources well enough even when the hard disk is almost full.
Multitasking
Linux is a multitasking operating system. It can handle many things at the same time.
Security
Linux is one of the most secure operating systems. File ownership and permissions make
linux more secure.
Open source
Linux is an open-source operating system. You can easily get the source code for Linux
and edit it to develop your own operating system.
Today, Linux is widely used for both basic home and office uses. It is the main operating system
used for high performance business and in web servers. Linux has made a high impact in this
world.
Inter-process communication (IPC)
Processes communicate with each other and with the kernel to coordinate their activities.
Linux supports a number of Inter-Process Communication (IPC) mechanisms. Signals and pipes
are two of them, but Linux also supports the System V IPC mechanisms, named after the
Unix™ release in which they first appeared.
Signals
Signals are one of the oldest inter-process communication methods used by
Unix systems. They are used to signal asynchronous events to one or more processes. A
signal could be generated by a keyboard interrupt or an error condition, such as a process
attempting to access a non-existent location in its virtual memory. Signals are also used by
shells to signal job control commands to their child processes.
There is a set of defined signals that the kernel can generate, or that can be generated by other
processes in the system provided that they have the correct privileges. You can list a system's set
of signals using the kill command (kill -l); on my Intel Linux box this gives:
1) SIGHUP 2) SIGINT 3) SIGQUIT 4) SIGILL
5) SIGTRAP 6) SIGIOT 7) SIGBUS 8) SIGFPE
9) SIGKILL 10) SIGUSR1 11) SIGSEGV 12) SIGUSR2
13) SIGPIPE 14) SIGALRM 15) SIGTERM 17) SIGCHLD
18) SIGCONT 19) SIGSTOP 20) SIGTSTP 21) SIGTTIN
22) SIGTTOU 23) SIGURG 24) SIGXCPU 25) SIGXFSZ
26) SIGVTALRM 27) SIGPROF 28) SIGWINCH 29) SIGIO
30) SIGPWR
Pipes
The common Linux shells all allow redirection. For example
$ ls | pr | lpr
pipes the output from the ls command listing the directory's files into the standard input
of the pr command which paginates them. Finally the standard output from the pr command is
piped into the standard input of the lpr command which prints the results on the default printer.
Pipes then are unidirectional byte streams which connect the standard output from one process
into the standard input of another process. Neither process is aware of this redirection and
behaves just as it would normally. It is the shell which sets up these temporary pipes between the
processes.
In Linux, a pipe is implemented using two file data structures which both point at the same
temporary VFS inode, which itself points at a physical page within memory. Each file data
structure contains pointers to different file operation routine vectors: one for writing
to the pipe, the other for reading from the pipe.
This hides the underlying differences from the generic system calls which read and write to
ordinary files. As the writing process writes to the pipe, bytes are copied into the shared data
page and when the reading process reads from the pipe, bytes are copied from the shared data
page. Linux must synchronize access to the pipe. It must make sure that the reader and the writer
of the pipe are in step and to do this it uses locks, wait queues and signals.
When the writer wants to write to the pipe it uses the standard write library functions. These all
pass file descriptors that are indices into the process's set of file data structures, each one
representing an open file or, as in this case, an open pipe. The Linux system call uses the write
routine pointed at by the file data structure describing this pipe. That write routine uses
information held in the VFS inode representing the pipe to manage the write request.
If there is enough room to write all of the bytes into the pipe and, so long as the pipe is not
locked by its reader, Linux locks it for the writer and copies the bytes to be written from the
process's address space into the shared data page. If the pipe is locked by the reader or if there is
not enough room for the data then the current process is made to sleep on the pipe inode's wait
queue and the scheduler is called so that another process can run. It is interruptible, so it can
receive signals and it will be woken by the reader when there is enough room for the write data
or when the pipe is unlocked. When the data has been written, the pipe's VFS inode is unlocked
and any waiting readers sleeping on the inode's wait queue will themselves be woken up.
Reading data from the pipe is a very similar process to writing to it.
Processes are allowed to do non-blocking reads (it depends on the mode in which they opened
the file or pipe) and, in this case, if there is no data to be read or if the pipe is locked, an error
will be returned. This means that the process can continue to run. The alternative is to wait on the
pipe inode's wait queue until the write process has finished. When both processes have finished
with the pipe, the pipe inode is discarded along with the shared data page.
Sockets
Sockets are interfaces that can "plug into" each other over a network. Once so "plugged
in", the programs so connected communicate.
This article discusses only simple aspects of stream inet sockets (don't worry about exactly what
that is right now). For the purposes of this article, a "server" program is exposed via a socket
connected to a certain /etc/services port number. A "client" program can then connect its own
socket to the server's socket, at which point the client program's writes to the socket are read as
stdin by the server program, and the server program's stdout is read from the client's socket
reads. This is one subset of socket programming, but it is perhaps the easiest to master, so this is
where you should start.
[Diagram: client-server socket connection via xinetd. The client communicates by reading and
writing the socket, while the server program communicates via stdin and stdout.]
This tutorial requires a Linux box. It hasn't been tested on other types of UNIX, but I think it
might work. This tutorial is centered around a system using xinetd, but it would be simple
enough to adapt it to older inetd systems. This tutorial will not work under Windows. I think it's
important that this complex type of programming be learned on the most reliable, straightforward
system possible, so Windows is out.
For the purposes of this tutorial, the server application will be at port 3333. Note that you can
implement both the client and the server on a single computer, in which case the client is
connected to a port on the computer containing both the client and the server.
System V IPC Mechanisms
Linux supports three types of interprocess communication mechanisms that first appeared in
Unix TM System V (1983). These are message queues, semaphores and shared memory. These
System V IPC mechanisms all share common authentication methods. Processes may access
these resources only by passing a unique reference identifier to the kernel via system calls.
Access to these System V IPC objects is checked using access permissions, much like accesses
to files are checked. The access rights to a System V IPC object are set by the creator of the
object via system calls. The object's reference identifier is used by each mechanism as an index
into a table of resources. It is not a straightforward index but requires some manipulation to
generate.
All Linux data structures representing System V IPC objects in the system include an ipc_perm
structure, which contains the owner and creator process's user and group identifiers, the access
mode for the object (owner, group and other) and the IPC object's key. The key is used as a way
of locating the System V IPC object's reference identifier. Two sets of keys are supported: public
and private. If the key is public, then any process in the system, subject to rights checking, can
find the reference identifier for the System V IPC object. System V IPC objects can never be
referenced with a key, only by their reference identifier.
Message Queues
Message queues allow one or more processes to write messages, which will be read by
one or more reading processes. Linux maintains a list of message queues, the msgque vector;
each element of which points to a msqid_ds data structure that fully describes the message
queue. When message queues are created a new msqid_ds data structure is allocated from system
memory and inserted into the vector.
Each msqid_ds data structure contains an ipc_perm data structure and pointers to the messages
entered onto this queue. In addition, Linux keeps queue modification times, such as the last time
that the queue was written to. The msqid_ds also contains two wait queues: one for writers to the
queue and one for readers of the message queue.
Each time a process attempts to write a message to the write queue its effective user and group
identifiers are compared with the mode in this queue's ipc_perm data structure. If the process can
write to the queue then the message may be copied from the process's address space into a msg
data structure and put at the end of this message queue. Each message is tagged with an
application specific type, agreed between the cooperating processes. However, there may be no
room for the message as Linux restricts the number and length of messages that can be written.
In this case the process will be added to this message queue's write wait queue and the scheduler
will be called to select a new process to run. It will be woken up when one or more messages
have been read from this message queue.
Reading from the queue is a similar process. Again, the process's access rights to the queue
are checked. A reading process may choose either to get the first message in the queue
regardless of its type, or to select messages with particular types. If no messages match these
criteria, the reading process is added to the message queue's read wait queue and the scheduler
is run. When a new message is written to the queue, this process is woken up and run again.
Semaphores
In its simplest form a semaphore is a location in memory whose value can be tested and
set by more than one process. The test and set operation is, so far as each process is concerned,
uninterruptible or atomic; once started nothing can stop it. The result of the test and set operation
is the addition of the current value of the semaphore and the set value, which can be positive or
negative. Depending on the result of the test and set operation one process may have to sleep
until the semaphore’s value is changed by another process. Semaphores can be used to
implement critical regions, areas of critical code that only one process at a time should be
executing. Say you had many cooperating processes reading records from and writing records to
a single data file. You would want that file access to be strictly coordinated. You could use a
semaphore with an initial value of 1 and, around the file operating code, put two semaphore
operations, the first to test and decrement the semaphore's value and the second to test and
increment it. The first process to access the file would try to decrement the semaphore's value
and it would succeed, the semaphore's value now being 0. This process can now go ahead and
use the data file but if another process wishing to use it now tries to decrement the semaphore's
value it would fail as the result would be -1. That process will be suspended until the first
process has finished with the data file. When the first process has finished with the data file it
will increment the semaphore's value, making it 1 again. Now the waiting process can be woken
and this time its attempt to increment the semaphore will succeed.
System V IPC semaphore objects each describe a semaphore array, and Linux uses the semid_ds
data structure to represent this. All of the semid_ds data structures in the system are pointed at by
semary, a vector of pointers. There are sem_nsems in each semaphore array, each one described
by a sem data structure pointed at by sem_base. All of the processes that are allowed to manipulate
the semaphore array of a System V IPC semaphore object may make system calls that perform
operations on them. The system call can specify many operations and each operation is described
by three inputs; the semaphore index, the operation value and a set of flags. The semaphore
index is an index into the semaphore array and the operation value is a numerical value that will
be added to the current value of the semaphore. First Linux tests whether or not all of the
operations would succeed. An operation will succeed if the operation value added to the
semaphore's current value would be greater than zero or if both the operation value and the
semaphore's current value are zero. If any of the semaphore operations would fail Linux may
suspend the process but only if the operation flags have not requested that the system call is non-
blocking. If the process is to be suspended then Linux must save the state of the semaphore
operations to be performed and put the current process onto a wait queue. It does this by building
a sem_queue data structure on the stack and filling it out. The new sem_queue data structure is put at
the end of this semaphore object's wait queue (using
the sem_pending and sem_pending_last pointers). The current process is put on the wait queue in
the sem_queue data structure (sleeper) and the scheduler called to choose another process to run.
If all of the semaphore operations would have succeeded and the current process does not need to
be suspended, Linux goes ahead and applies the operations to the appropriate members of the
semaphore array. Now Linux must check that any waiting, suspended processes may now apply
their semaphore operations. It looks at each member of the operations pending queue
(sem_pending) in turn, testing to see if the semaphore operations will succeed this time. If they
will, it removes the sem_queue data structure from the operations pending list and applies the
semaphore operations to the semaphore array. It wakes up the sleeping process, making it
available to be restarted the next time the scheduler runs. Linux keeps looking through the
pending list from the start until there is a pass where no semaphore operations can be applied and
so no more processes can be woken.
There is a problem with semaphores: deadlocks. These occur when one process has altered a
semaphore's value as it enters a critical region but then fails to leave the critical region because it
crashed or was killed. Linux protects against this by maintaining lists of adjustments to the
semaphore arrays. The idea is that when these adjustments are applied, the semaphores will be
put back to the state they were in before the process's set of semaphore operations was
applied. These adjustments are kept in sem_undo data structures queued both on the semid_ds data
structure and on the task_struct data structure for the processes using these semaphore arrays.
Each individual semaphore operation may request that an adjustment be maintained. Linux will
maintain at most one sem_undo data structure per process for each semaphore array. If the
requesting process does not have one, then one is created when it is needed. The
new sem_undo data structure is queued both onto this process's task_struct data structure and onto
the semaphore array's semid_ds data structure. As operations are applied to the semaphores in the
semaphore array, the negation of the operation value is added to this semaphore's entry in the
adjustment array of this process's sem_undo data structure. So, if the operation value is 2, then -2
is added to the adjustment entry for this semaphore.
As processes exit, Linux works through their set of sem_undo data
structures, applying the adjustments to the semaphore arrays. If a semaphore set is deleted,
the sem_undo data structures are left queued on the process's task_struct, but the semaphore array
identifier is made invalid. In this case, the semaphore clean-up code simply discards
the sem_undo data structure.
Shared Memory
Shared memory allows one or more processes to communicate via memory that appears
in all of their virtual address spaces. The pages of this shared virtual memory are referenced by page table
entries in each of the sharing processes' page tables. It does not have to be at the same address in
all of the processes' virtual memory. As with all System V IPC objects, access to shared memory
areas is controlled via keys and access rights checking. Once the memory is being shared, there
are no checks on how the processes are using it. They must rely on other mechanisms, for
example System V semaphores, to synchronize access to the memory.
Each newly created shared memory area is represented by a shmid_ds data structure. These are
kept in the shm_segs vector.
The shmid_ds data structure describes how big the area of shared memory is, how many processes
are using it and information about how that shared memory is mapped into their address spaces.
It is the creator of the shared memory that controls the access permissions to that memory and
whether its key is public or private. If it has enough access rights it may also lock the shared
memory into physical memory.
Each process that wishes to share the memory must attach to that virtual memory via a system
call. This creates a new vm_area_struct data structure describing the shared memory for this
process. The process can choose where in its virtual address space the shared memory goes or it
can let Linux choose a free area large enough. The new vm_area_struct structure is put into the list
of vm_area_struct structures pointed at by the shmid_ds. The vm_next_shared and vm_prev_shared pointers are used
to link them together. The virtual memory is not actually created during the attach; it happens
when the first process attempts to access it.
The first time that a process accesses one of the pages of the shared virtual memory, a page fault
will occur. When Linux fixes up that page fault it finds the vm_area_struct data structure describing
it. This contains pointers to handler routines for this type of shared virtual memory. The shared
memory page fault handling code looks in the list of page table entries for this shmid_ds to see if
one exists for this page of the shared virtual memory. If it does not exist, it will allocate a
physical page and create a page table entry for it. As well as going into the current process's page
tables, this entry is saved in the shmid_ds. This means that when the next process that attempts to
access this memory gets a page fault, the shared memory fault handling code will use this newly
created physical page for that process too. So, the first process that accesses a page of the shared
memory causes it to be created, and thereafter accesses by the other processes cause that page to be
added into their virtual address spaces.
When processes no longer wish to share the virtual memory, they detach from it. So long as
other processes are still using the memory the detach only affects the current process.
Its vm_area_struct is removed from the shmid_ds data structure and deallocated. The current
process's page tables are updated to invalidate the area of virtual memory that it used to share.
When the last process sharing the memory detaches from it, the pages of the shared memory
currently in physical memory are freed, as is the shmid_ds data structure for this shared memory.
Further complications arise when shared virtual memory is not locked into physical memory. In
this case the pages of the shared memory may be swapped out to the system's swap disk during
periods of high memory usage.
CHAPTER 4
PROJECT: AUTOMATIC CAB SERVICE
Introduction
Nowadays the term "automatic" is used very commonly, referring to systems in which a single IC
performs several functions. The Automatic Cab Service works on the principle of a line-follower
robot: a predefined path is provided by the programmer and the cab follows the same
path.
COMPONENTS USED
RESISTOR
The flow of charge (or current) through any material encounters an opposing force
similar in effect to mechanical friction. This opposing force is called the resistance of the material.
It is measured in ohms. In some electric circuits, resistance is deliberately introduced in the form
of a resistor.
Resistors are of following types:
Wire wound resistors.
Carbon resistors.
Metal film resistors.
WIRE WOUND RESISTORS:
Wire wound resistors are made from a long wire (usually nichrome) wound on a ceramic
core. The longer the wire, the higher the resistance. So, depending on the value of resistor
required in a circuit, the wire is cut and wound on a ceramic core. Such resistors are available in
power ratings of 2 watts to several hundred watts and resistance values from 1 ohm to 100k ohms. Thus
wire wound resistors are used for high currents.
CARBON RESISTORS:
Carbon resistors are divided into two types:
Carbon composition resistors are made by mixing carbon grains with a binding material
(glue) and moulding it in the form of rods. Wire leads are inserted at the two ends. After this, an
insulating material seals the resistor. These resistors are available in power ratings of 1/10, 1/8, 1/4, 1/2,
1 and 2 watts and values from 1 ohm to 20 ohms.
Carbon film resistors are made by depositing carbon film on a ceramic rod. They are cheaper
than carbon composition resistors.
METAL FILM RESISTORS:
They are also called thin film resistors. They are made of a thin metal coating deposited
on a cylindrical insulating support. Such resistors are free of inductance effect that is common in
wire wound resistors at high frequency.
VARIABLE RESISTORS:
A potentiometer is a resistor whose value can be set depending on the requirement.
Potentiometers are widely used in electronic systems; examples are volume control and the brightness
and contrast controls of radio or TV sets.
Resistor architecture
COLOUR CODE:
CAPACITORS :
A Capacitor can store charge, and its capacity to store charge is called capacitance.
Capacitors consist of two conducting plates separated by an insulating material (known as a dielectric). The two
plates are joined with two leads. The dielectric could be air, mica, paper, ceramic, polyester, polystyrene,
etc. The types of capacitors are:
ELECTROLYTIC CAPACITOR:
Electrolytic capacitors have an electrolyte as a dielectric. When such a capacitor is
charged, chemical changes take place in the electrolyte. If one plate is charged positively, the
same plate must be charged positively in future also. We call such capacitors polarized.
Normally the leads of an electrolytic capacitor are marked positive and negative on the
can. Non-electrolytic capacitors have a dielectric material such as paper, mica or ceramic.
MICA CAPACITOR:
It is a sandwich of several thin metal plates separated by thin sheets of mica. Alternate
plates are connected together and leads attached for outside connections. The total assembly is
encased in a plastic capsule. Such capacitors have small capacitance values (50 to 500 pF) and high
working voltages (5000 V and above).
CERAMIC CAPACITOR:
Such capacitors have a disc- or hollow-tube-shaped dielectric made of a ceramic material
such as titanium oxide or barium titanate. A thin coating of silver is deposited on
both sides of the dielectric disc, which acts as the capacitor plates. These capacitors have very low
leakage current, and their breakdown voltage is very high.
DC Motor
A DC motor is an electromechanical device that converts electrical energy into mechanical
energy that can be used to do many useful kinds of work. It can produce mechanical movements such as
moving the tray of a CD/DVD drive in and out. DC motors come in various ratings, such as 6V and 12V. A DC motor
has two wires or pins of opposite polarity; when connected to a power supply, the shaft rotates.
You can reverse the direction of rotation by reversing the polarity of the input.
DC MOTOR
Motor Driver IC L293D
This chip is designed to control two DC motors. There are two INPUT and two OUTPUT
pins for each motor. The connections are as follows:
Motor Controller Using L293D
The behavior of the motor for various input conditions is as follows:
Direction      A     B
Stop           Low   Low
Clockwise      Low   High
Anticlockwise  High  Low
Stop           High  High
VOLTAGE REGULATOR
A power supply that provides a constant output voltage irrespective of
variations in the input voltage or load current is called a regulated power supply; the circuit that
achieves this is called a voltage regulator. The function of a voltage regulator is to provide a stable DC voltage for
powering other electronic circuits. A voltage regulator should also be capable of providing substantial
output current.
Thus in short, a voltage regulator is an electrical regulator designed to automatically
maintain a constant voltage level.
Features:
• Output current up to 1A.
• Output voltages of 5, 6, 8, 9, 10, 12, 15, 18 or 24V.
• Thermal overload protection.
• Short circuit protection.
• Output transistor safe operating area protection.
Three-terminal voltage regulators fall into two categories: positive voltage regulators and negative
voltage regulators.
The 78XX series consists of positive voltage regulators and the 79XX series consists of negative
voltage regulators.
78XX Series IC :
78XX series regulators are three-terminal positive voltage regulators. Here XX indicates the output
voltage. In our project the output voltage is regulated at 5 volts.
A 5V voltage regulator (7805) is used to ensure that no more than 5V is delivered to
the Digilab board regardless of the voltage present at the J12 connector (provided that
voltage is less than 12VDC). The regulator is a linear series regulator: an internal pass transistor
holds the output at 5VDC regardless of the input voltage, and the excess voltage is converted to heat and
dissipated through the body of the regulator. If a DC supply of greater than 12V is
used, excessive heat will be generated, and the board may be damaged. If a DC
supply of less than 5V is used, insufficient voltage will be present at the regulator's output.
Block diagram of voltage regulator
Circuit diagram of voltage regulator
Schematic of voltage regulator
If a power supply provides a voltage higher than 7 or 8 volts, the regulator must dissipate significant
heat. The "fin" on the regulator body (the side that protrudes upward beyond the main body of the part)
helps to dissipate excess heat more efficiently. If the board requires higher currents (due to the use
of peripheral devices or larger breadboard circuits), then the regulator may need to dissipate
more heat. In this case, the regulator can be secured to the circuit board by fastening it with a screw
and nut (see below). By securing the regulator tightly to the circuit board, excess heat can be passed to
the board and then radiated away.
CRYSTAL OSCILLATOR
A crystal oscillator is an electronic circuit that uses the mechanical resonance of a
vibrating crystal of piezoelectric material to create an electrical signal with a very precise
frequency. This frequency is commonly used to keep track of time (as in quartz wristwatches),
to provide a stable clock signal for digital integrated circuits, and to stabilize frequencies for
radio transmitters/receivers.
A crystal is a solid in which the constituent atoms, molecules, or ions are packed in a regularly
ordered, repeating pattern extending in all three spatial dimensions.
Almost any object made of an elastic material could be used like a crystal, with appropriate
transducers, since all objects have natural resonant frequencies of vibration.
The resonant frequency depends on size, shape, elasticity, and the speed of sound in the material.
When a crystal of quartz is properly cut and mounted, it can be made to distort in an electric
field by applying a voltage to an electrode near or on the crystal. This property is known as
piezoelectricity.
When the field is removed, the quartz will generate an electric field as it returns to its previous
shape, and this can generate a voltage.
The result is that a quartz crystal behaves like a circuit composed of an inductor, capacitor
and resistor, with a precise resonant frequency.
APPLICATIONS
Used in consumer devices such as wristwatches, clocks, radios, computers, and
cellphones. Quartz crystals are also found inside test and measurement equipment, such as
counters, signal generators, and oscilloscopes.
symbol of Crystal Oscillator
LCD Display
An LCD (Liquid Crystal Display) is a low-cost display. It is easy to interface with a
micro-controller because of an embedded controller. This controller is standard across many
displays.
16x2 LCD Display
8-BIT MICROCONTROLLER AT89S52
FEATURES
• 8K Bytes of In-System Programmable (ISP) Flash Memory
• 4.0V to 5.5V Operating Range
• Fully Static Operation: 0 Hz to 33 MHz
• Three-level Program Memory Lock
• 256 x 8-bit Internal RAM
• 32 Programmable I/O Lines
• Three 16-bit Timer/Counters
• Eight Interrupt Sources
• Full Duplex UART Serial Channel
• Low-power Idle and Power-down Modes
• Interrupt Recovery from Power-down Mode
• Watchdog Timer
• Dual Data Pointer
• Power-off Flag
• Fast Programming Time
• Flexible ISP Programming (Byte and Page Mode)
DESCRIPTION
The AT89S52 is a low-power, high-performance CMOS 8-bit microcontroller with 8K
bytes of in-system programmable Flash memory. The device is manufactured using Atmel’s
high-density nonvolatile memory technology and is compatible with the industry-standard
80C51 instruction set and pinout. The on-chip Flash allows the program memory to be
reprogrammed in-system or by a conventional nonvolatile memory programmer. By combining
a versatile 8-bit CPU with in-system programmable Flash on a monolithic chip, the Atmel
AT89S52 is a powerful microcontroller which provides a highly-flexible and cost-effective
solution to many embedded control applications.
The AT89S52 provides the following standard features: 8K bytes of Flash, 256 bytes of
RAM, 32 I/O lines, Watchdog timer, two data pointers, three 16-bit timer/counters, a six-vector
two-level interrupt architecture, a full duplex serial port, on-chip oscillator, and clock circuitry.
In addition, the AT89S52 is designed with static logic for operation down to zero frequency and
supports two software selectable power saving modes.
PIN CONFIGURATION
5.1 Pin configuration of the 89S52 microcontroller
BLOCK DIAGRAM
Block diagram of the 89S52 microcontroller
APPENDIX A
Instruction Set
• ACALL - Absolute Call
• ADD, ADDC - Add Accumulator (With Carry)
• AJMP - Absolute Jump
• ANL - Bitwise AND
• CJNE - Compare and Jump if Not Equal
• CLR - Clear Register
• CPL - Complement Register
• DA - Decimal Adjust
• DEC - Decrement Register
• DIV - Divide Accumulator by B
• DJNZ - Decrement Register and Jump if Not Zero
• INC - Increment Register
• JB - Jump if Bit Set
• JBC - Jump if Bit Set and Clear Bit
• JC - Jump if Carry Set
• JMP - Jump to Address
• JNB - Jump if Bit Not Set
• JNC - Jump if Carry Not Set
• JNZ - Jump if Accumulator Not Zero
• JZ - Jump if Accumulator Zero
• LCALL - Long Call
• LJMP - Long Jump
• MOV - Move Memory
• MOVC - Move Code Memory
• MOVX - Move Extended Memory
• MUL - Multiply Accumulator by B
• NOP - No Operation
• ORL - Bitwise OR
• POP - Pop Value From Stack
• PUSH - Push Value Onto Stack
• RET - Return From Subroutine
• RETI - Return From Interrupt
• RL - Rotate Accumulator Left
• RLC - Rotate Accumulator Left Through Carry
• RR - Rotate Accumulator Right
• RRC - Rotate Accumulator Right Through Carry
• SETB - Set Bit
• SJMP - Short Jump
• SUBB - Subtract From Accumulator With Borrow
• SWAP - Swap Accumulator Nibbles
• XCH - Exchange Bytes
• XCHD - Exchange Digits
• XRL - Bitwise Exclusive OR
• Undefined - Undefined Instruction
An "addressing mode" refers to how you are addressing a given memory location. In summary,
the addressing modes are as follows, with an example of each:
Immediate Addressing MOV A,#20h
Direct Addressing MOV A,30h
Indirect Addressing MOV A,@R0
External Direct MOVX A,@DPTR
Code Indirect MOVC A,@A+DPTR
APPENDIX B
SOFTWARE PROGRAMMING OF THE MICROCONTROLLER
#include<reg51.h>
#define lcd P0
sbit rs=P1^4;
sbit rw=P1^5;
sbit en=P1^6;
sbit ex0=P3^2;
sbit sen=P3^4;
sbit bs1_a=P2^0;
sbit bs1_b=P2^1;
sbit bs2_a=P2^2;
sbit bs2_b=P2^3;
sbit gt1_a=P2^4;
sbit gt1_b=P2^5;
sbit gt2_a=P2^6;
sbit gt2_b=P2^7;
void lcd_cmd(char);
void lcd_data(char);
void lcd_init();
void delay(unsigned int);
void lcd_msg(char*);
void forward(char);
void stop();
void gate_open();
void gate_close();
void gate_stop();
void laxmi_ngr(char);
void geeta_colny(char);
void krishna_ngr(char);
void noida15(char);
void greater_noida();
void cab_terminate();
void ext0_isr() interrupt 0   /* external interrupt 0: run one gate open/close cycle */
{
IE=0x00;   /* disable interrupts while the gate is moving */
gate_open();
delay(50);
gate_stop();
delay(100);
gate_close();
delay(50);
gate_stop();
IE=0x81;
}
void main()
{
lcd=0x00;
rs=0;
rw=0;
en=0;
ex0=1;
IE=0x81;
while(1)
{
lcd_init();
delay(5);
lcd_msg("AUTOMATIC CAB");
lcd_cmd(0xc0);
delay(5);
lcd_msg("SERVICE");
delay(100);
laxmi_ngr(0);
geeta_colny(0);
krishna_ngr(0);
noida15(0);
greater_noida();
cab_terminate();
noida15(1);
krishna_ngr(1);
geeta_colny(1);
laxmi_ngr(1);
cab_terminate();
}
}
void lcd_data(char d)
{
rs=1;
lcd=d;
en=1;
delay(5);
en=0;
}
void lcd_cmd(char c)
{
rs=0;
lcd=c;
en=1;
delay(5);
en=0;
}
void lcd_init()
{
lcd_cmd(0x01);   /* clear display */
delay(5);
lcd_cmd(0x06);   /* entry mode: increment cursor after each character */
delay(5);
lcd_cmd(0x0e);   /* display on, cursor on */
delay(5);
lcd_cmd(0x38);   /* function set: 8-bit bus, 2 lines, 5x7 font */
delay(5);
lcd_cmd(0x80);   /* move cursor to the start of the first line */
delay(5);
}
void lcd_msg(char *m)
{
while(*m!='\0')
{
lcd_data(*m);
delay(10);
m++;
}
}
void delay(unsigned int x)   /* busy-wait delay that also polls the obstacle sensor on sen */
{
char ch;
unsigned int y,z;
ch=P2;   /* remember the current motor and gate outputs so they can be restored */
for(y=0;y<=x;y++)
for(z=0;z<1000;z++)
{
if(sen==1)
P2=0x00;
else
P2=ch;
}
}
void forward(char a)
{
if(a==0)
{
bs1_a=1;
bs1_b=0;
bs2_a=1;
bs2_b=0;
}
else
{
bs1_a=0;
bs1_b=1;
bs2_a=0;
bs2_b=1;
}
}
void stop()
{
bs1_a=0;
bs1_b=0;
bs2_a=0;
bs2_b=0;
}
void gate_open()
{
gt1_a=0;
gt1_b=1;
gt2_a=0;
gt2_b=1;
}
void gate_close()
{
gt1_a=1;
gt1_b=0;
gt2_a=1;
gt2_b=0;
}
void gate_stop()
{
gt1_a=0;
gt1_b=0;
gt2_a=0;
gt2_b=0;
}
void laxmi_ngr(char a)
{
lcd_cmd(0x01);
delay(5);
lcd_msg("LAXMI NAGAR");
gate_open();
delay(50);
gate_stop();
delay(100);
gate_close();
delay(50);
gate_stop();
delay(10);
forward(a);
delay(200);
stop();
delay(200);
}
void geeta_colny(char a)
{
lcd_cmd(0x01);
delay(5);
lcd_msg("GEETA COLONY");
gate_open();
delay(50);
gate_stop();
delay(100);
gate_close();
delay(50);
gate_stop();
delay(10);
forward(a);
delay(200);
stop();
delay(200);
}
void krishna_ngr(char a)
{
lcd_cmd(0x01);
delay(5);
lcd_msg("KRISHNA NAGAR");
gate_open();
delay(50);
gate_stop();
delay(100);
gate_close();
delay(50);
gate_stop();
delay(10);
forward(a);
delay(200);
stop();
delay(200);
}
void noida15(char a)
{
lcd_cmd(0x01);
delay(5);
lcd_msg("NOIDA SEC. 15");
gate_open();
delay(50);
gate_stop();
delay(100);
gate_close();
delay(50);
gate_stop();
delay(10);
forward(a);
delay(200);
stop();
delay(200);
}
void greater_noida()
{
lcd_cmd(0x01);
delay(5);
lcd_msg("GREATER NOIDA");
}
void cab_terminate()
{
lcd_init();
delay(5);
lcd_msg("CAB TERMINATE");
lcd_cmd(0xc0);
delay(5);
lcd_msg("HERE");
delay(100);
gate_open();
delay(50);
gate_stop();
delay(100);
gate_close();
delay(50);
gate_stop();
delay(10);
}
REFERENCES
• C Programming by Yashavant Kanetkar
• The 8051 Microcontroller and Embedded Systems by Muhammed Ali Mazidi,
Janice Gillispie Mazidi and Rolin D. McKinlay
Mais conteúdo relacionado

Mais procurados

Unit1 principle of programming language
Unit1 principle of programming languageUnit1 principle of programming language
Unit1 principle of programming language
Vasavi College of Engg
 
Programming topics. syed arslan rizvi
Programming topics. syed arslan rizviProgramming topics. syed arslan rizvi
Programming topics. syed arslan rizvi
Syed Arslan Rizvi
 
Ppl for students unit 1,2 and 3
Ppl for students unit 1,2 and 3Ppl for students unit 1,2 and 3
Ppl for students unit 1,2 and 3
Akshay Nagpurkar
 
Chapter 5-programming
Chapter 5-programmingChapter 5-programming
Chapter 5-programming
Aten Kecik
 
La5 ict-topic-5-programming
La5 ict-topic-5-programmingLa5 ict-topic-5-programming
La5 ict-topic-5-programming
Kak Yong
 
265 ge8151 problem solving and python programming - 2 marks with answers
265   ge8151 problem solving and python programming - 2 marks with answers265   ge8151 problem solving and python programming - 2 marks with answers
265 ge8151 problem solving and python programming - 2 marks with answers
vithyanila
 
Computer programming all chapters
Computer programming all chaptersComputer programming all chapters
Computer programming all chapters
Ibrahim Elewah
 
Language processor
Language processorLanguage processor
Language processor
Abha Damani
 

Mais procurados (20)

Unit1 principle of programming language
Unit1 principle of programming languageUnit1 principle of programming language
Unit1 principle of programming language
 
SOFTWARE TOOL FOR TRANSLATING PSEUDOCODE TO A PROGRAMMING LANGUAGE
SOFTWARE TOOL FOR TRANSLATING PSEUDOCODE TO A PROGRAMMING LANGUAGESOFTWARE TOOL FOR TRANSLATING PSEUDOCODE TO A PROGRAMMING LANGUAGE
SOFTWARE TOOL FOR TRANSLATING PSEUDOCODE TO A PROGRAMMING LANGUAGE
 
Programming topics. syed arslan rizvi
Programming topics. syed arslan rizviProgramming topics. syed arslan rizvi
Programming topics. syed arslan rizvi
 
Compiler Design(Nanthu)
Compiler Design(Nanthu)Compiler Design(Nanthu)
Compiler Design(Nanthu)
 
Programming In C++
Programming In C++ Programming In C++
Programming In C++
 
Ppl for students unit 1,2 and 3
Ppl for students unit 1,2 and 3Ppl for students unit 1,2 and 3
Ppl for students unit 1,2 and 3
 
Chapter 5-programming
Chapter 5-programmingChapter 5-programming
Chapter 5-programming
 
La5 ict-topic-5-programming
La5 ict-topic-5-programmingLa5 ict-topic-5-programming
La5 ict-topic-5-programming
 
final pl paper
final pl paperfinal pl paper
final pl paper
 
Programming in c
Programming in cProgramming in c
Programming in c
 
265 ge8151 problem solving and python programming - 2 marks with answers
265   ge8151 problem solving and python programming - 2 marks with answers265   ge8151 problem solving and python programming - 2 marks with answers
265 ge8151 problem solving and python programming - 2 marks with answers
 
Imperative programming
Imperative programmingImperative programming
Imperative programming
 
Compiler
Compiler Compiler
Compiler
 
La5 programming
La5  programmingLa5  programming
La5 programming
 
Programming
ProgrammingProgramming
Programming
 
Computer programming all chapters
Computer programming all chaptersComputer programming all chapters
Computer programming all chapters
 
Language processor
Language processorLanguage processor
Language processor
 
Graphical programming
Graphical programmingGraphical programming
Graphical programming
 
C programming
C programmingC programming
C programming
 
Aspect Oriented Programming Through C#.NET
Aspect Oriented Programming Through C#.NETAspect Oriented Programming Through C#.NET
Aspect Oriented Programming Through C#.NET
 

Semelhante a Training 8051Report

Problem Solving Techniques
Problem Solving TechniquesProblem Solving Techniques
Problem Solving Techniques
Ashesh R
 
Switch case and looping statement
Switch case and looping statementSwitch case and looping statement
Switch case and looping statement
_jenica
 
Introduction To C++ programming and its basic concepts
Introduction To C++ programming and its basic conceptsIntroduction To C++ programming and its basic concepts
Introduction To C++ programming and its basic concepts
ssuserf86fba
 

Semelhante a Training 8051Report (20)

Book management system
Book management systemBook management system
Book management system
 
Problem Solving Techniques
Problem Solving TechniquesProblem Solving Techniques
Problem Solving Techniques
 
Contact management system
Contact management systemContact management system
Contact management system
 
C Language Presentation.pptx
C Language Presentation.pptxC Language Presentation.pptx
C Language Presentation.pptx
 
Introduction to systems programming
Introduction to systems programmingIntroduction to systems programming
Introduction to systems programming
 
Switch case and looping statement
Switch case and looping statementSwitch case and looping statement
Switch case and looping statement
 
Basic construction of c
Basic construction of cBasic construction of c
Basic construction of c
 
Introduction to problem solving in C
Introduction to problem solving in CIntroduction to problem solving in C
Introduction to problem solving in C
 
Stnotes doc 5
Stnotes doc 5Stnotes doc 5
Stnotes doc 5
 
SWE-401 - 9. Software Implementation
SWE-401 - 9. Software ImplementationSWE-401 - 9. Software Implementation
SWE-401 - 9. Software Implementation
 
9. Software Implementation
9. Software Implementation9. Software Implementation
9. Software Implementation
 
CS8251_QB_answers.pdf
CS8251_QB_answers.pdfCS8251_QB_answers.pdf
CS8251_QB_answers.pdf
 
What is algorithm
What is algorithmWhat is algorithm
What is algorithm
 
PROBLEM SOLVING
PROBLEM SOLVINGPROBLEM SOLVING
PROBLEM SOLVING
 
Introduction To C++ programming and its basic concepts
Introduction To C++ programming and its basic conceptsIntroduction To C++ programming and its basic concepts
Introduction To C++ programming and its basic concepts
 
Functional programming in TypeScript
Functional programming in TypeScriptFunctional programming in TypeScript
Functional programming in TypeScript
 
PROGRAMMING LANGUAGE AND TYPES
PROGRAMMING LANGUAGE AND TYPESPROGRAMMING LANGUAGE AND TYPES
PROGRAMMING LANGUAGE AND TYPES
 
design intoduction of_COMPILER_DESIGN.pdf
design intoduction of_COMPILER_DESIGN.pdfdesign intoduction of_COMPILER_DESIGN.pdf
design intoduction of_COMPILER_DESIGN.pdf
 
PCCF UNIT 2 CLASS.pptx
PCCF UNIT 2 CLASS.pptxPCCF UNIT 2 CLASS.pptx
PCCF UNIT 2 CLASS.pptx
 
Prgramming paradigms
Prgramming paradigmsPrgramming paradigms
Prgramming paradigms
 

Training 8051Report

  • 1. A TRAINING REPORT ON EBEDDED SYSTEM Submitted by KULDEEP KAUSHIK Under the Supervision of PRAKUL RAJVANSHI EMBEDDED CONSULTANT (DUCAT) in partial fulfillment for the award of the degree of BACHELOR OF TECHNOLOGY IN ELECTRONICS &COMMUNICATION ENGINEERING MANAV BHARTI UNIVERSITY JUNE, 2013 Acknowledgement
  • 2. I would like to express our sincere gratitude to my training supervisor “PrakulRajvanshi” for giving me the opportunity to work on this topic. It would never be possible for me to take this training to this level without his/her innovative ideas and her relentless support and encouragement. Kuldeep Kaushik Modules Covered in Training: Chapter 1: C Programming language Chapter 2: Introduction to microcontroller and8051 microcontroller Chapter 3: Linux Internals. Chapter 4: Project (Automatic Cab Service). CHAPTER 1 C Language Programming Introduction C language is widely used in the development of operating systems. An Operating System (OS) is software (collection of programs) that controls the various functions of a computer. Also it makes other programs on your computer work. For example, you cannot work with a word processor program, such as Microsoft Word, if there is no operating system installed on your computer. Windows, Unix, Linux, Solaris, and MacOS are some of the popular operating systems.
  • 3. Applications C’s ability to communicate directly with hardware makes it a powerful choice for system programmers. In fact, popular operating systems such as Unix and Linux are written entirely in C. Additionally, even compilers and interpreters for other languages such as FORTRAN, Pascal, and BASIC are written in C. However, C’s scope is not just limited to developing system programs. It is also used to develop any kind of application, including complex business ones. The following is a partial list of areas where C language is used: • Embedded Systems • Systems Programming • Artificial Intelligence • Industrial Automation • Computer Graphics • Space Research • Image Processing • Game Programming What kind of language is C? C is a structured programming language, which means that it allows you to develop programs using well-defined control structures (you will learn about control structures in the articles to come), and PProvides modularity (breaking the task into multiple sub tasks that are simple enough to understand and to reuse). C is often called a middle-level language because it combines the best elements of low-level or machine language with high-level languages. Control Flow In computer science, control flow (or alternatively, flow of control) refers to the order in which the individual statements, instructions, or function calls of an imperative or a declarative program are executed or evaluated. Within an imperative programming language, a control flow statement is a statement whose execution results in a choice being made as to which of two or more paths should be followed. For non-strict functional languages, functions and language constructs exist to achieve the same result, but they are not necessarily called control flow statements. The kinds of control flow statements supported by different languages vary, but can be categorized by their effect: • continuation at a different statement (unconditional branch or jump),
  • 4. • executing a set of statements only if some condition is met (choice - i.e. conditional branch), • executing a set of statements zero or more times, until some condition is met (i.e. loop - the same as conditional branch), • executing a set of distant statements, after which the flow of control usually returns (subroutines, co routines, and continuations), • Stopping the program, preventing any further execution (unconditional halt). Conditional & decision statement Conditional statements, conditional expressions and conditional constructs are features of a programming language which perform different computations or actions depending on whether a programmer-specified Boolean condition evaluates to true or false. Apart from the case of branch predication, this is always achieved by selectively altering the control flow based on some condition. In imperative programming languages, the term "conditional statement" is usually used, whereas in functional programming, the terms "conditional expression" or "conditional construct" are preferred, because these terms all have distinct meanings. Although dynamic dispatch is not usually classified as a conditional construct, it is another way to select between alternatives at runtime. If-else The if-else construct (sometimes called if-then-else) is common across many programming languages. Although the syntax varies quite a bit from language to language, the basic structure (in pseudo code form) looks like this: (The example is actually perfectly valid Visual Basic or QuickBASIC syntax.) Else if By using Else If, it is possible to combine several conditions. Only the statements following the first condition that is found to be true will be executed. All other statements will be skipped. The statements of the final Else will be executed if none of the conditions are true. This example is written in the Ada language: If expressions
• 5. Many languages support if expressions, which are similar to if statements, but return a value as a result. Thus, they are true expressions (which evaluate to a value), not statements (which just perform an action). In C and C-like languages conditional expressions take the form of a ternary operator called the conditional expression operator, ?:, which follows this template: (condition)?(evaluate if condition was true):(evaluate if condition was false) Case and switch statements Switch statements (in some languages, case statements) compare a given value with specified constants and take action according to the first constant to match; in C this is written with the switch keyword. Functions This section covers function definition, the types of functions (with no arguments and no return values, with arguments but no return values, and with arguments and return values), return value data types, and void functions. A function is a self-contained unit of code which is used (or invoked) by the main program or other subprograms. A subprogram receives values called arguments from a calling program, performs calculations and returns the results to the calling program. There are many advantages in using functions in a program: 1. It facilitates top-down modular programming. In this programming style, the high-level logic of the overall problem is solved first while the details of each lower-level function are addressed later. 2. The length of the source program can be reduced by using functions at appropriate places. This factor is critical with microcomputers where memory space is limited. 3. It is easy to locate and isolate a faulty function for further investigation. 4. A function may be used by many other programs. This means that a C programmer can build on what others have already done, instead of starting over from scratch.
• 6. 5. A function can be used to avoid rewriting the same sequence of code at two or more locations in a program. This is especially useful if the code involved is long or complicated. 6. Large programs are usually written by teams. If the program is divided into subprograms, each subprogram can be written by one or two team members rather than having the whole team work on the complex program. Types of functions A function may belong to any one of the following categories: 1. Functions with no arguments and no return values. 2. Functions with arguments and no return values. 3. Functions with arguments and return values. MACROS Preprocessing expands macros in all lines that are not preprocessor directives (lines that do not have a # as the first non-white-space character) and in parts of some directives that are not skipped as part of a conditional compilation. "Conditional compilation" directives allow you to suppress compilation of parts of a source file by testing a constant expression or identifier to determine which text blocks are passed on to the compiler and which text blocks are removed from the source file during preprocessing. The #define directive is typically used to associate meaningful identifiers with constants, keywords, and commonly used statements or expressions. Identifiers that represent constants are sometimes called "symbolic constants" or "manifest constants." Identifiers that represent statements or expressions are called "macros." In this preprocessor documentation, only the term "macro" is used. When the name of the macro is recognized in the program source text or in the arguments of certain other preprocessor commands, it is treated as a call to that macro. The macro name is replaced by a copy of the macro body. If the macro accepts arguments, the actual arguments following the macro name are substituted for formal parameters in the macro body.
The process of replacing a macro call with the processed copy of the body is called "expansion" of the macro call. Pointers In C language, a pointer is a variable that points to or references a memory location in which data is stored. Each memory cell in the computer has an address which can be used to access its location. A pointer variable points to a memory location, and by making use of pointers we can access and change the contents of that memory location. Pointer declaration
• 7. A pointer variable contains the memory location of another variable. You begin the declaration of a pointer by specifying the type of data stored in the location identified by the pointer. The asterisk tells the compiler that you are creating a pointer variable. Finally you give the name of the pointer variable. The pointer declaration syntax is as shown below. type *variable_name Example: int *ptr; float *fptr; Address operator Once we declare a pointer variable, we point the variable to another variable. We can do this by assigning the address of the variable to the pointer, as in the following example: ptr = &num; The above statement places the memory address of the num variable into the pointer variable ptr. If num is stored at memory address 21260, then the pointer variable ptr will contain the memory address value 21260. Pointers and functions Pointers are used heavily in function declarations; sometimes a complex function can be expressed easily only with a pointer. The usage of pointers in a function definition may be classified into two groups: 1. Call by value 2. Call by reference. Call by value When a function is invoked, a link is established between the formal and actual parameters. Temporary storage is created where the value of each actual parameter is stored, and the formal parameter picks up its value from this storage area. This mechanism of data transfer between actual and formal parameters is referred to as call by value. The corresponding formal parameter represents a local variable in the called function, and the current value of the corresponding actual parameter becomes its initial value. The value of the formal parameter may be changed in the body of the subprogram by assignment or input statements, but this will not change the value of the actual parameters.
• 8. Call by Reference When we pass an address to a function, the parameter receiving the address should be a pointer. The process of calling a function by using pointers to pass the addresses of variables is known as call by reference. A function which is called by reference can change the values of the variables used in the call. Pointers and arrays An array name is actually very much like a pointer. We can refer to the array's first element as a[0] or as *a; because a[0] is an address and *a is also an address, the two forms are equivalent. The difference is that a pointer is a variable and can appear on the left of the assignment operator (that is, as an lvalue), whereas the array name is a constant and cannot appear on the left side of the assignment operator. Strings are character arrays whose last element is the null character, and pointers to char arrays can be used to perform a number of string operations. Pointers and structures We know the name of an array stands for the address of its zeroth element; the same concept applies to names of arrays of structures. Suppose item is an array variable of struct type. Consider the following declaration: struct products { char name[30]; int manufac; float net; } item[2], *ptr; This statement declares item as an array of two elements, each of type struct products, and ptr as a pointer to data objects of type struct products. The assignment ptr = item; would assign the address of the zeroth element, item[0], to ptr. Its members can be accessed using the following notation: ptr->name; ptr->manufac; ptr->net; The symbol -> is called the arrow operator and is made up of a minus sign and a greater-than sign. Note that ptr-> is simply another way of writing item[0]. When the pointer is incremented by one, it is made to point to the next record, i.e. item[1]. Precautions with pointers
• 9. While pointers provide enormous power and flexibility to the programmer, they can cause serious errors if not properly handled. Consider the following precautions when using pointers. We should make sure that we know where each pointer is pointing in a program. Here are some general observations and common errors that might be useful to remember. A pointer contains garbage until it is initialized. Since compilers cannot detect uninitialized or wrongly initialized pointers, the errors may not be known until we execute the program; remember that even if we are able to locate a wrong result, it may not provide any evidence for us to suspect problems in the pointers. The abundance of C operators is another cause of confusion that leads to errors. For example, expressions such as *ptr++, *p[], and (*ptr).member should be used carefully, with a proper understanding of the precedence and associativity rules. Structures and Unions In this section you will learn about structures and unions: giving values to members, initializing structures, functions and structures, passing structure elements to functions, passing entire structures to functions, arrays of structures, structures within structures, and unions. Arrays are used to store large sets of data and manipulate them, but the disadvantage is that all the elements stored in an array have to be of the same data type. If we need to use a collection of items of different data types, it is not possible using an array. When we require a collection of data items of different data types we can use a structure. A structure is a method of packing data of different types, and a convenient way of handling a group of related data items. A structure template does not occupy any memory until it is associated with a structure variable such as book1. The template is terminated with a semicolon.
While the entire declaration is considered as a statement, each member is declared independently for its name and type in a separate statement inside the template. The tag name such as lib_books can be used to declare structure variables of its data type later in the program. A structure is usually defined before main along with macro definitions. In such cases the structure assumes global status and all the functions can access the structure. Functions and structures We can pass structures as arguments to functions. Unlike array names, however, which always point to the start of the array, structure names are not pointers. As a result, when we change a structure parameter inside a function, we do not affect its corresponding argument. Arrays of structures It is possible to define an array of structures; for example, if we are maintaining information on all the students in a college and 100 students are studying there, we need to use an array rather than single variables.
• 10. An array of structures can be assigned initial values just as any other array can. Remember that each element is a structure that must be assigned corresponding initial values. Union Unions, like structures, contain members whose individual data types may differ from one another. However, the members that compose a union all share the same storage area within the computer's memory, whereas each member within a structure is assigned its own unique storage area. Thus unions are used to conserve memory. They are useful for applications where values need not be assigned to all the members at any one time. Like a structure, a union can be declared using the keyword union; such a declaration creates a variable of the union type. A union may contain several members, each with a different data type, but we can use only one of them at a time, because only one location is allocated for a union variable irrespective of its size. The compiler allocates a piece of storage that is large enough to hold the largest member. To access a union member we can use the same syntax that we use to access structure members. During accessing, we should make sure that we are accessing the member whose value is currently stored. File handling In any programming language it is vital to learn file handling techniques. Many applications will at some point involve accessing folders and files on the hard drive. In C, a stream is associated with a file. Special functions have been designed for handling file operations; some of them will be discussed in this chapter. The header file stdio.h is required for using these functions. Opening a file Before we perform any operations on a file, we need to identify the file to the system and open it. We do this by using a file pointer. The type FILE defined in stdio.h allows us to define a file pointer. Then you use the function fopen() for opening a file.
Once this is done one can read or write to the file using the fread() or fwrite() functions, respectively. The fclose() function is used to explicitly close any opened file. Stack & queue In this section, we introduce two closely related data types for manipulating arbitrarily large collections of objects: the stack and the queue. Each is defined by two basic operations: insert a new item, and remove an item. When we insert an item, our intent is clear. But when we remove an item, which one do we choose? The rule used for a queue is to always remove the item that has been in the collection the longest. This policy is known as first-in, first-out or FIFO. The rule used for a stack is to always remove the item that has been in the collection the shortest time. This policy is known as last-in, first-out or LIFO.
• 11. Pushdown stacks. A pushdown stack (or just a stack) is a collection that is based on the last-in-first-out (LIFO) policy. When you click a hyperlink, your browser displays the new page (and inserts it onto a stack). You can keep clicking on hyperlinks to visit new pages, and you can always revisit the previous page by clicking the back button (removing it from the stack). The last-in-first-out policy offered by a pushdown stack provides just the behavior that you expect. By tradition, we name the stack insert method push() and the stack remove operation pop(). We also include a method to test whether the stack is empty. Linked lists For classes such as stacks that implement collections of objects, an important objective is to ensure that the amount of space used is always proportional to the number of items in the collection. Now we consider the use of a fundamental data structure known as a linked list that can provide implementations of collections (and, in particular, stacks) that achieve this important objective. A linked list is a recursive data structure defined as follows: a linked list is either empty (null) or a reference to a node having a reference to a linked list. The node in this definition is an abstract entity that might hold any kind of data in addition to the node reference that characterizes its role in building linked lists. With object-oriented programming, implementing linked lists is not difficult. We start with a simple example of a class for the node abstraction. A Node has two instance variables: a String and a Node. The String is a placeholder in this example for any data that we might want to structure with a linked list (we can use any set of instance variables); the instance variable of type Node characterizes the linked nature of the data structure.
Now, from the recursive definition, we can represent a linked list by a variable of
• 12. type Node just by ensuring that its value is either null or a reference to a Node whose next field is a reference to a linked list. Queue A queue supports the insert and remove operations using a FIFO discipline. By convention, we name the queue insert operation enqueue and the remove operation dequeue. Think of cars in the Lincoln Tunnel, or a student's list of tasks: items are put on a queue and handled in the same order that they arrive. • Linked list implementation. Program Queue.java implements a FIFO queue of strings using a linked list. As with the stack, we maintain a reference first to the least recently added Node on the queue. For efficiency, we also maintain a reference last to the most recently added Node on the queue. • Array implementation. This is similar to the array implementation of a stack, but a little trickier since we need to wrap around. Program DoublingQueue.java implements the queue interface. The array is dynamically resized using repeated doubling. Trees Tree structures support various basic dynamic-set operations, including Search, Predecessor, Successor, Minimum, Maximum, Insert, and Delete, in time proportional to the height of the tree. Ideally, a tree will be balanced and the height will be log n where n is the number of nodes in the tree. To ensure that the height of the tree is as small as possible and therefore provide the best running time, a balanced tree structure like a red-black tree, AVL tree,
• 13. or b-tree must be used. When working with large sets of data, it is often not possible or desirable to maintain the entire structure in primary storage (RAM). Instead, a relatively small portion of the data structure is maintained in primary storage, and additional data is read from secondary storage as needed. Unfortunately, a magnetic disk, the most common form of secondary storage, is significantly slower than random access memory (RAM). In fact, the system often spends more time retrieving data than actually processing data. B-trees are balanced trees that are optimized for situations when part or all of the tree must be maintained in secondary storage such as a magnetic disk. Since disk accesses are expensive (time consuming) operations, a b-tree tries to minimize the number of disk accesses. For example, a b-tree with a height of 2 and a branching factor of 1001 can store over one billion keys but requires at most two disk accesses to search for any node. The Structure of B-Trees Unlike a binary tree, each node of a b-tree may have a variable number of keys and children. The keys are stored in non-decreasing order. Each key has an associated child that is the root of a subtree containing all nodes with keys less than or equal to the key but greater than the preceding key. A node also has an additional rightmost child that is the root of a subtree containing all keys greater than any keys in the node. A b-tree has a minimum number of allowable children for each node known as the minimization factor. If t is this minimization factor, every node must have at least t - 1 keys. Under certain circumstances, the root node is allowed to violate this property by having fewer than t - 1 keys. Every node may have at most 2t - 1 keys or, equivalently, 2t children. Since each node tends to have a large branching factor (a large number of children), it is typically necessary to traverse relatively few nodes before locating the desired key.
If access to each node requires a disk access, then a b-tree will minimize the number of disk accesses required. The minimization factor is usually chosen so that the total size of each node corresponds to a multiple of the block size of the underlying storage device. This choice simplifies and optimizes disk access. Consequently, a b-tree is an ideal data structure for situations where all data cannot reside in primary storage and accesses to secondary storage are comparatively expensive (or time consuming). Height of B-Trees For n greater than or equal to one, the height h of an n-key b-tree T with a minimum degree t greater than or equal to 2 satisfies h <= log_t((n + 1) / 2).
• 14. CHAPTER 2: MICROCONTROLLER AND ITS INTERFACING Introduction 8051 Architecture: Block Diagram and Pin Diagram: Timers Interrupts & interrupt handling What is an Interrupt? An interrupt is a notification, communicated to the controller by a hardware device or software, on receipt of which the controller momentarily stops and responds. Whenever an interrupt occurs, the controller completes the execution of the current instruction and starts the execution of an Interrupt Service Routine (ISR) or Interrupt Handler. An ISR is a piece of code that tells the processor or controller what to do when the interrupt occurs. After the execution of the ISR, the controller returns to the instruction it jumped from (before the interrupt was received).
• 15. Why are interrupts needed? An application built around microcontrollers generally has the following structure. It takes input from devices like a keypad, ADC etc.; processes the input using a certain algorithm; and generates an output which is either displayed using devices like seven-segment displays or LCDs, or used further to operate other devices like motors. In such designs, controllers interact with inbuilt devices like timers and with interfaced peripherals like sensors, serial ports etc. The programmer needs to monitor their status regularly: whether the sensor is giving output, whether a signal has been received or transmitted, whether the timer has finished counting, whether an interfaced device needs service from the controller, and so on. This state of continuous monitoring is known as polling. In polling, the microcontroller keeps checking the status of other devices; while doing so it does no other operation and consumes all its processing time for monitoring. This problem can be addressed by using interrupts. In the interrupt method, the controller responds only when an interrupt occurs, so it is not required to regularly monitor the status (flags, signals etc.) of interfaced and inbuilt devices. Hardware and Software Interrupts The interrupts in a controller can be either hardware or software. If the interrupts are generated by the controller's inbuilt devices (like timer interrupts) or by interfaced devices, they are called hardware interrupts. If the interrupts are generated by a piece of code, they are termed software interrupts. Multiple Interrupts What would happen if multiple interrupts were received by a microcontroller at the same instant? In such a case, the controller assigns priorities to the interrupts, and the interrupt with the highest priority is served first. The priority of interrupts can be changed by configuring the appropriate registers in the code.
8051 Interrupts The 8051 controller has six hardware interrupts, of which five are available to the programmer. These are as follows: 1. RESET Interrupt - This is also known as Power-On Reset (POR). When the RESET interrupt is received, the controller restarts, executing code from location 0000H. This interrupt is not (and need not be) available to the programmer. 2. Timer Interrupts - Each Timer is associated with a Timer interrupt. A timer interrupt notifies the microcontroller that the corresponding Timer has finished counting.
• 16. 3. External Interrupts - There are two external interrupts, EX0 and EX1, to serve external devices. Both these interrupts are active low. In the AT89C51, pins P3.2 (INT0) and P3.3 (INT1) are available for external interrupts 0 and 1 respectively. An external interrupt notifies the microcontroller that an external device needs its service. 4. Serial Interrupt - This interrupt is used for serial communication. When enabled, it notifies the controller whether a byte has been received or transmitted. How is an interrupt serviced? Every interrupt is assigned a fixed memory area inside the processor/controller. The Interrupt Vector Table (IVT) holds the starting address of the memory area assigned to each interrupt. When an interrupt is received, the controller stops after executing the current instruction. It transfers the content of the program counter onto the stack. It also stores the current status of the interrupts internally, but not on the stack. After this, it jumps to the memory location specified by the IVT, and the code written in that memory area gets executed. This code is known as the Interrupt Service Routine (ISR) or interrupt handler. The ISR is code written by the programmer to handle or service the interrupt. Programming Interrupts When programming interrupts, the first step is to specify to the microcontroller which interrupts must be served. This is done by configuring the Interrupt Enable (IE) register, which enables or disables the various available interrupts. The IE register has a bit to enable or disable each of the hardware interrupts of the 8051 controller. To enable any of the interrupts, first the EA bit must be set to 1; after that, the bits corresponding to the desired interrupts are enabled. ET0, ET1 and ET2 are used to enable Timer Interrupts 0, 1 and 2, respectively. In the AT89C51 there are only two timers, so ET2 is not used.
EX0 and EX1 are used to enable the external interrupts 0 and 1, and ES is used for the serial interrupt. The EA bit acts as a lock bit: if any of the interrupt bits are enabled but EA is not set, the interrupt will not function. By default all the interrupts are disabled. Setting the bits of the IE register is necessary and sufficient to enable the interrupts. The next step is to specify to the controller what to do when an interrupt occurs. This is done by writing a subroutine or function for the interrupt. This is the ISR, and it is called automatically when the interrupt occurs; it is not required to call the interrupt subroutine explicitly in the code. 1. Programming Timer Interrupts The timer interrupts are raised when Timer 0 or Timer 1 overflows (flags TF0 and TF1). (Please refer to 8051 Timers for details on Timer registers and modes.) Interrupt programming for timers involves the following steps: 1. Configure the TMOD register to select the timer(s) and mode. 2. Load initial values in THx and TLx for mode 0 and 1, or in THx only for mode 2. 3. Enable the Timer Interrupt by configuring the bits of the IE register.
• 17. 4. Start the timer by setting the timer run bit TRx. 5. Write a subroutine for the Timer Interrupt. The interrupt number is 1 for Timer0 and 3 for Timer1. Note that it is not required to clear the timer flag TFx. 6. To stop the timer, clear TRx at the end of the subroutine. Otherwise it will restart from 0000H in the case of modes 0 and 1, and from the initial values in the case of mode 2. 7. If the Timer has to run again and again, it is required to reload the initial values within the routine itself (in the case of modes 0 and 1). Otherwise, after one cycle the timer will start counting from 0000H. 2. Programming External Interrupts The external interrupts are the interrupts received from the (external) devices interfaced with the microcontroller. They are received at the INTx pins of the controller. These can be level triggered or edge triggered. In level triggering, the interrupt is raised by a low at the INTx pin; in edge triggering, the interrupt is raised by a high-to-low transition at the INTx pin. Edge or level triggering is selected by the TCON register: setting the IT0 and IT1 bits makes external interrupts 0 and 1 edge triggered respectively. By default these bits are cleared, so the external interrupts are level triggered. Note: For a level-triggered interrupt, the INTx pin must remain low until the start of the ISR and should return to high before the end of the ISR. If the low at the INTx pin goes high before the start of the ISR, the interrupt will not be generated. Also, if the INTx pin remains low even after the end of the ISR, the interrupt will be generated once again. This is why a level-triggered (low) pulse at the INTx pin should be about four machine cycles long, neither much longer nor shorter. Following are the steps for using an external interrupt: 1. Enable the external interrupt by configuring the IE register. 2. Write a routine for the external interrupt. The interrupt number is 0 for EX0 and 2 for EX1. 3. Programming Serial Interrupt To use the serial interrupt, the ES bit along with the EA bit is set. Whenever one byte of data is sent or received, the serial interrupt is generated and the TI or RI flag goes high. Here, the TI or RI flag needs to be cleared explicitly in the interrupt routine (written for the Serial Interrupt). Programming the Serial Interrupt involves the following steps: 1. Enable the Serial Interrupt (configure the IE register). 2. Configure the SCON register. 3. Write a routine or function for the Serial Interrupt. The interrupt number is 4. 4. Clear the RI or TI flag within the routine. Programming Multiple Interrupts Multiple interrupts can be enabled by setting more than one interrupt bit in the IE register. If more than one interrupt occurs at the same time, the interrupts will be serviced in order of their priority. By default the interrupts have the following priorities in descending order: External Interrupt 0, Timer 0, External Interrupt 1, Timer 1, Serial.
• 18. The priority of the interrupts can be changed by programming the bits of the Interrupt Priority (IP) register. The first two MSBs of the IP register are reserved; the remaining bits are the priority bits for the available interrupts. Setting a particular bit in the IP register gives the corresponding interrupt higher priority. For example, IP = 0x08; gives Timer1 higher priority, so the interrupt priority order changes as follows (in descending order): Timer 1, External Interrupt 0, Timer 0, External Interrupt 1, Serial. More than one bit in the IP register can also be set. In such a case, the higher-priority interrupts follow the same sequence among themselves as in the default case. For example, IP = 0x0A; gives Timer0 and Timer1 higher priority, so the interrupt priority order changes as follows (in descending order): Timer 0, Timer 1, External Interrupt 0, External Interrupt 1, Serial. Serial Communication Protocols Distributed systems require protocols for communication between microcontrollers. Controller Area Network (CAN) and Serial Peripheral Interface (SPI) are two of the most common such protocols. The beauty of using multiple processors in a single system is that the timing requirements of one processor can be divorced from the timing requirements of the other. In a real-time system, this quality can make the programming a lot easier and reduce the potential for race conditions. The price you pay is that you then have to get information from one processor to the other. If you use one fast processor instead of two slow ones, passing information from one part of the software to another may be as simple as passing parameters to a function or storing the data in a global location. However, when the pieces of software that need to communicate are located on different processors, you have to figure out how to bundle the information into a packet and pass it across some sort of link.
In this article, we'll look at two standard protocols, SPI and CAN, that can be used to communicate between processors, and also at some of the issues that arise in designing ad hoc protocols for small systems. Controller Area Network (CAN) Controller Area Network (CAN) is a multi-drop bus protocol, so it can support many communicating nodes. The advantages are obvious. The disadvantage of moving to more than two nodes is that you now require some addressing mechanism to indicate who sent a message, and who should receive it. The CAN protocol is based on two signals shared by all nodes on the network. The CAN_High and CAN_Low signals provide a differential signal and allow collision detection. If both lines go high, two different nodes must be trying to drive two different signals, and one will then back off and allow the other to continue.
• 19. CAN is used in almost every automobile manufactured in Europe. In the U.S., CAN is popular in factory automation, where the DeviceNet protocol uses CAN as its lower layer. The biggest difference between CAN and SPI is that the CAN protocol defines packets. In SPI (and serial interfaces in general), only the transmission of a byte is fully defined. Given a mechanism for byte transfer, software can provide a packet layer, but no standard size or type exists for a serial packet. Since packet transfer is standardized for CAN, it's usually implemented in hardware. Implementing packets, including checksums and backoff-and-retry mechanisms, in hardware hides a whole family of low-level design issues from the software engineer. The program can place a packet in a CAN controller's buffer and not worry about interacting with the CAN hardware until the packet is sent or an entire packet has been received. The same level of control could be built into a serial controller, but unless it was standardized, that controller could only communicate with peers of the same type. A CAN packet consists of an identifier that comprises either 11 bits or 29 bits and up to 8 bytes of data, along with a few other pieces of housekeeping like the checksum. The identifier is not defined by the CAN protocol, but higher level protocols can describe how the identifier can be divided into source, destination, priority, and type information. You could also define these bits yourself if you don't have to share the bus with devices outside of your control. When controlling transmission byte by byte, you usually have to combine a number of bytes to say anything meaningful, except in cases as trivial as the thermostat example discussed earlier. However, in eight bytes you can express commands, report on parameter values, or pass calibration results. For debugging purposes, communicating from a microcontroller to a PC is straightforward.
By snooping the CAN bus from the PC, you can monitor the communications between the microcontrollers in the system, or you can imitate one side of the conversation by inserting test messages. A product called USBcan from Kvaser provides an interface to the CAN bus through the PC's USB port. A number of other companies offer similar products, but what I found impressive about Kvaser was the quality of the software libraries available. The CANlib library provides an API for building and receiving CAN packets. The company also provides a version of the library compiled for my favorite PC development environment, Borland C++ Builder, which enabled me to build a nice GUI that showed all bus activity. The same program can be used for calibration, inserting test messages, and even downloading a new version of software to the device. Each Kvaser product, whether ISA, PCI, PCMCIA or USB-based, has a driver. Once the driver is installed, the applications built using Kvaser's libraries will work directly with that device. So, if I develop on a PC with a PCI card, I can still deploy my test software to a field engineer with a
laptop and a PCMCIA card. Since the application I was working on was automotive, it was ideal to be able to send someone into a vehicle with a laptop. One of my few gripes with the supplied software is that it only supports the mainstream versions of Windows. Linux drivers would have been welcome, but Kvaser does not provide them. (Open source drivers are available for some of the Kvaser ISA boards at the Linux CAN Project homepage.) One of the most useful drivers from Kvaser is a virtual driver that doesn't require a CAN hardware interface. This allows one PC application to communicate with other PC applications running CAN software without any CAN hardware. You can therefore develop and test a PC program that communicates over the CAN bus before any CAN hardware is available, as long as you write another PC test program to listen to whatever the first program is saying. This is useful if there isn't enough hardware to provide a system to each developer or if the prototype target is not yet available.
Higher layer protocols
A number of higher layer protocols have been layered on top of the basic CAN specifications. These include SAE J1939, DeviceNet, and CANopen. The emphasis of these protocols is to define the meaning of the identifier and to encourage interoperability between CAN-based solutions from different vendors. Each standard has established a foothold in a different application domain. If your system is closed, that is, if all nodes on the bus will be products from your company, then implementing one of the standard higher level protocols is probably unnecessary. However, examining these standards may give you ideas for some of the features that you might want to implement. For example, SAE J1939 includes a connection-oriented mechanism, which is suitable when transferring blocks of data larger than eight bytes.
The standard defines a handshaking message to set up the connection and then a system of counting segments to ensure that the receiver will detect any missing packets. Some higher level protocols define messages for particular application domains, such as a message that is sent when a car's brakes are engaged. In theory, this means that you can develop a device that integrates with your in-car electronics. In practice, the exact workings of the engine management CAN bus on any vehicle are a closely guarded secret. The CAN standards are not a ticket in; you still need the manufacturer's cooperation.
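The segment-counting idea can be sketched in a few lines of C. This is an illustration of the general technique only, not the real J1939 transport protocol encoding; the packet layout and function name are my own assumptions:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative segmentation of a large block into 8-byte CAN payloads:
   byte 0 of each packet carries a segment counter so the receiver can
   detect missing packets. This mimics the idea behind J1939's
   connection-mode transfer but is NOT the real J1939 encoding. */
#define SEG_DATA 7  /* 7 data bytes per packet after the counter byte */

/* Fill out[] with 8-byte packets covering src[0..len); returns packet count. */
int segment_block(const uint8_t *src, int len, uint8_t out[][8])
{
    int n = 0;
    for (int off = 0; off < len; off += SEG_DATA, n++) {
        int chunk = len - off < SEG_DATA ? len - off : SEG_DATA;
        out[n][0] = (uint8_t)(n + 1);       /* 1-based segment counter */
        memset(&out[n][1], 0, SEG_DATA);    /* pad the final short segment */
        memcpy(&out[n][1], src + off, chunk);
    }
    return n;
}
```

The receiver simply checks that the counters arrive in sequence; a gap means a lost packet and triggers a retransmission request, much as the handshaking described above.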
Introduction to Keil µVision programming software:
Third-party utilities extend the functions and capabilities of µVision. Keil PK51 is a complete software development environment for classic and extended 8051 microcontrollers. Like all Keil tools, it is easy to learn and use. The RTX real-time kernel enables the development of real-time software. The Keil 8051 development tools are designed to solve the complex problems facing embedded software developers. • When starting a new project, simply select the microcontroller you use from the Device Database and the µVision IDE sets all compiler, assembler, linker, and memory options for you. • Numerous example programs are included to help you get started with the most popular embedded 8051 devices. • The Keil µVision Debugger accurately simulates on-chip peripherals (I²C, CAN, UART, SPI, interrupts, I/O ports, A/D converter, D/A converter, and PWM modules) of your 8051 device. Simulation helps you understand hardware configurations and avoids time wasted on setup problems. Additionally, with simulation, you can write and test applications before target hardware is available. • When you are ready to begin testing your software application with target hardware, use the MON51, MON390, MONADI, or FlashMON51 target monitors, the ISD51 in-system debugger, or the ULINK USB-JTAG adapter to download and test program code on your target system.
Topview Simulator:
Topview Simulator gives an excellent simulation environment for the industry's most popular 8-bit microcontroller family, the MCS-51. It gives the facilities required to enable system designers to start projects right from scratch and finish them with ease and confidence.
It is a total simulation solution, giving many state-of-the-art features that meet the needs of designers possessing different levels of expertise. If you are a beginner, you can learn about 8051-based embedded solutions without any hardware. If you are an experienced designer, you may find most of the required facilities built into the simulator, enabling you to complete your next project without waiting for the target hardware. The features of the simulator are briefly listed here for your reference:
Finished real-time projects:
Project - Channel Sequential Controller with LED Displays
Project - Channel Sequential Controller with LCD Display
Project - Programmable Timer with 2x16 LCD Display
Device Selection
A wide range of device selections, including generic 8031 devices and Atmel's AT89CXX series 8051 microcontrollers.
Program Editing
Powerful editing features for writing your programs at both C and assembly level, and the facility to call an external compiler/assembler (Keil/SDCC) to process input programs.
Clearview
The Clearview facility gives all the internal architectural details in multiple windows. Information about the program, data memory, registers, peripherals and SFR bits is clearly presented in many windows to make you understand the program flow very easily.
Program Execution
A variety of program execution options, including single-stroke full-speed execution, single step, step over and breakpoint execution modes, give you total control over the target program. Clearview updates all the windows with the correct and latest data, which is a convenient help during your debugging operations. You may find that Topview Simulator turns debugging, the most difficult part of program development, into a very simple task.
Simulation Facilities
Powerful simulation facilities are incorporated for I/O lines, interrupt lines and the clocks meant for timers/counters. Many external interfacing possibilities can be simulated:
• A range of plain point LEDs and seven segment LED options.
• LCD modules in many configurations.
• Momentary ON keys.
• A variety of keypads up to a 4 x 8 key matrix.
• Toggle switches.
• All modes of the on-chip serial port communication facility.
• I2C components including RTCs and EEPROMs.
• SPI bus based EEPROM devices.
Code Generation Facilities
A powerful and versatile code generation facility enables you to generate exact and compact assembly code / C source code for many possible application-oriented interfacing options. You simply define your exact needs and get the target assembly code / C source code at the press of a button, anywhere in your program flow. The code gets embedded into your application program automatically, and you are assured of trouble-free working of the final code in real time.
• All modes of the serial port.
• Interfacing I2C/SPI Bus devices. • Range of keypads. • Many LED/LCD interfacing possibilities.
CHAPTER 3
LINUX INTERNALS
INTRODUCTION
Linux Operating System
Linux is a free, open-source operating system based on Unix. Linux was originally created by Linus Torvalds with the assistance of developers from around the globe. Linux is free to download, edit and distribute. It is a very powerful operating system and is gradually becoming popular throughout the world.
Advantages of Linux
Low cost: There is no need to spend time and a huge amount of money to obtain licenses, since Linux and much of its software come with the GNU General Public License, so there is no need to worry about licensing for the software you use.
Stability: Linux has high stability compared with other operating systems. There is no need to reboot a Linux system to maintain performance levels; it rarely freezes up or slows down, and uptimes of hundreds of days or more are common.
Performance: Linux provides high performance on various networks. It has the ability to handle large numbers of users simultaneously.
Networking: Linux provides strong support for network functionality; client and server systems can be easily set up on any computer running Linux. It can perform tasks such as network backup faster than other operating systems.
Flexibility: Linux is very flexible. It can be used for high-performance server applications, desktop applications and embedded systems. You can install only the components needed for a particular use, and you can also restrict the use of specific computers.
Compatibility: Linux runs all common Unix software packages and can process all common file formats.
Wider choice: There is a large number of Linux distributions, which gives you a wider choice. Each organization develops and supports a different distribution. You can pick the one you like best; the core functions are the same.
Fast and easy installation: Linux distributions come with user-friendly installation.
Better use of hard disk: Linux uses its resources well even when the hard disk is almost full.
Multitasking: Linux is a multitasking operating system; it can handle many things at the same time.
Security: Linux is one of the most secure operating systems. File ownership and permissions make Linux more secure.
Open source: Linux is an open-source operating system. You can easily get the source code for Linux and edit it to develop your personal operating system.
Today, Linux is widely used for both basic home and office uses. It is the main operating system used for high-performance business systems and for web servers. Linux has made a high impact in this world.
Inter-process communication (IPC)
Processes communicate with each other and with the kernel to coordinate their activities. Linux supports a number of Inter-Process Communication (IPC) mechanisms. Signals and pipes are two of them, but Linux also supports the System V IPC mechanisms, named after the Unix release in which they first appeared.
Signals
Signals are one of the oldest inter-process communication methods used by Unix systems. They are used to signal asynchronous events to one or more processes. A signal could be generated by a keyboard interrupt or by an error condition such as the process attempting to access a non-existent location in its virtual memory. Signals are also used by shells to signal job control commands to their child processes. There is a set of defined signals that the kernel can generate or that can be generated by other processes in the system, provided that they have the correct privileges.
You can list a system's set of signals using the kill command (kill -l), on my Intel Linux box this gives: 1) SIGHUP 2) SIGINT 3) SIGQUIT 4) SIGILL 5) SIGTRAP 6) SIGIOT 7) SIGBUS 8) SIGFPE 9) SIGKILL 10) SIGUSR1 11) SIGSEGV 12) SIGUSR2 13) SIGPIPE 14) SIGALRM 15) SIGTERM 17) SIGCHLD 18) SIGCONT 19) SIGSTOP 20) SIGTSTP 21) SIGTTIN 22) SIGTTOU 23) SIGURG 24) SIGXCPU 25) SIGXFSZ 26) SIGVTALRM 27) SIGPROF 28) SIGWINCH 29) SIGIO 30) SIGPWR Pipes The common Linux shells all allow redirection. For example $ ls | pr | lpr
pipes the output from the ls command listing the directory's files into the standard input of the pr command, which paginates them. Finally the standard output from the pr command is piped into the standard input of the lpr command, which prints the results on the default printer. Pipes, then, are unidirectional byte streams which connect the standard output from one process to the standard input of another process. Neither process is aware of this redirection and behaves just as it would normally. It is the shell which sets up these temporary pipes between the processes. In Linux, a pipe is implemented using two file data structures which both point at the same temporary VFS inode, which itself points at a physical page within memory. Each file data structure contains pointers to a different file operation routine vector: one for writing to the pipe, the other for reading from the pipe. This hides the underlying differences from the generic system calls which read and write to ordinary files. As the writing process writes to the pipe, bytes are copied into the shared data page, and when the reading process reads from the pipe, bytes are copied from the shared data page. Linux must synchronize access to the pipe. It must make sure that the reader and the writer of the pipe are in step, and to do this it uses locks, wait queues and signals. When the writer wants to write to the pipe it uses the standard write library functions. These all pass file descriptors that are indices into the process's set of file data structures, each one representing an open file or, as in this case, an open pipe. The Linux system call uses the write
routine pointed at by the file data structure describing this pipe. That write routine uses information held in the VFS inode representing the pipe to manage the write request. If there is enough room to write all of the bytes into the pipe and, so long as the pipe is not locked by its reader, Linux locks it for the writer and copies the bytes to be written from the process's address space into the shared data page. If the pipe is locked by the reader or if there is not enough room for the data then the current process is made to sleep on the pipe inode's wait queue and the scheduler is called so that another process can run. The sleep is interruptible, so the process can receive signals, and it will be woken by the reader when there is enough room for the write data or when the pipe is unlocked. When the data has been written, the pipe's VFS inode is unlocked and any waiting readers sleeping on the inode's wait queue will themselves be woken up. Reading data from the pipe is a very similar process to writing to it. Processes are allowed to do non-blocking reads (it depends on the mode in which they opened the file or pipe) and, in this case, if there is no data to be read or if the pipe is locked, an error will be returned. This means that the process can continue to run. The alternative is to wait on the pipe inode's wait queue until the write process has finished. When both processes have finished with the pipe, the pipe inode is discarded along with the shared data page.
Sockets
Sockets are interfaces that can "plug into" each other over a network. Once so "plugged in", the programs so connected communicate. This article discusses only simple aspects of stream inet sockets (don't worry about exactly what that is right now). For the purposes of this article, a "server" program is exposed via a socket connected to a certain /etc/services port number.
A "client" program can then connect its own socket to the server's socket, at which time the client program's writes to the socket are read as stdin by the server program, and stdout from the server program is read back through the client's socket reads. This is one subset of socket programming, but it's perhaps the easiest to master, so this is where you should start. Diagram of client-server socket connection via xinetd: note that the client communicates by reading and writing the socket, but the server program communicates via stdin and stdout. This tutorial requires a Linux box. It hasn't been tested on other types of UNIX, but I think it might work. This tutorial is centered around a system using xinetd, but it would be simple enough to adapt it to older inetd systems. This tutorial will not work under Windows. I think it's
important that this complex type of programming be learned on the most reliable, straightforward system possible, so Windows is out. For the purposes of this tutorial, the server application will be at port 3333. Note that you can implement both the client and the server on a single computer, in which case the client is connected to a port on the computer containing both the client and the server.
System V IPC Mechanisms
Linux supports three types of interprocess communication mechanisms that first appeared in Unix System V (1983). These are message queues, semaphores and shared memory. These System V IPC mechanisms all share common authentication methods. Processes may access these resources only by passing a unique reference identifier to the kernel via system calls. Access to these System V IPC objects is checked using access permissions, much as accesses to files are checked. The access rights to a System V IPC object are set by the creator of the object via system calls. The object's reference identifier is used by each mechanism as an index into a table of resources. It is not a straightforward index but requires some manipulation to generate it. All Linux data structures representing System V IPC objects include an ipc_perm structure, which contains the owner and creator process's user and group identifiers, the access mode for the object (owner, group and other) and the IPC object's key. The key is used as a way of locating the System V IPC object's reference identifier. Two sets of keys are supported: public and private. If the key is public then any process in the system, subject to rights checking, can find the reference identifier for the System V IPC object. System V IPC objects can never be referenced with a key, only by their reference identifier.
Message Queues
Message queues allow one or more processes to write messages, which will be read by one or more reading processes.
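From user space, this mechanism is reached through the msgget(), msgsnd() and msgrcv() system calls. A minimal sketch that creates a private queue, writes one typed message and reads it back (the message type 42 is arbitrary):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

/* User-space view of a System V message queue: create a private queue,
   write one typed message, read it back, then remove the queue. */
struct my_msg {
    long mtype;          /* application-specific type; must be > 0 */
    char mtext[32];
};

int msgq_roundtrip(char *out, size_t outlen)
{
    struct my_msg m = { 42, "hello" };
    int qid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
    if (qid < 0)
        return -1;
    if (msgsnd(qid, &m, sizeof m.mtext, 0) != 0) {
        msgctl(qid, IPC_RMID, NULL);
        return -1;
    }
    memset(&m, 0, sizeof m);
    if (msgrcv(qid, &m, sizeof m.mtext, 42, 0) < 0) {  /* select type 42 */
        msgctl(qid, IPC_RMID, NULL);
        return -1;
    }
    snprintf(out, outlen, "%s", m.mtext);
    msgctl(qid, IPC_RMID, NULL);         /* remove the queue */
    return 0;
}
```

The type argument to msgrcv() corresponds to the reader's choice, described below, of taking the first message or selecting messages of a particular type.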
Linux maintains a list of message queues, the msgque vector; each element of which points to a msqid_ds data structure that fully describes the message queue. When message queues are created a new msqid_ds data structure is allocated from system memory and inserted into the vector.
Each msqid_ds data structure contains an ipc_perm data structure and pointers to the messages entered onto this queue. In addition, Linux keeps queue modification times, such as the last time that the queue was written to. The msqid_ds also contains two wait queues: one for writers to the queue and one for readers of the message queue. Each time a process attempts to write a message to the write queue, its effective user and group identifiers are compared with the mode in this queue's ipc_perm data structure. If the process can write to the queue then the message may be copied from the process's address space into a msg data structure and put at the end of this message queue. Each message is tagged with an application-specific type, agreed between the cooperating processes. However, there may be no room for the message, as Linux restricts the number and length of messages that can be written. In this case the process will be added to this message queue's write wait queue and the scheduler will be called to select a new process to run. It will be woken up when one or more messages have been read from this message queue. Reading from the queue is a similar process. Again, the process's access rights to the queue are checked. A reading process may choose either to get the first message in the queue regardless of its type or to select messages with particular types. If no messages match these criteria the reading process will be added to the message queue's read wait queue and the scheduler run. When a new message is written to the queue this process will be woken up and run again.
Semaphores
In its simplest form a semaphore is a location in memory whose value can be tested and set by more than one process. The test and set operation is, so far as each process is concerned, uninterruptible or atomic; once started nothing can stop it.
The result of the test and set operation is the addition of the current value of the semaphore and the set value, which can be positive or negative. Depending on the result of the test and set operation one process may have to sleep until the semaphore’s value is changed by another process. Semaphores can be used to implement critical regions, areas of critical code that only one process at a time should be
executing. Say you had many cooperating processes reading records from and writing records to a single data file. You would want that file access to be strictly coordinated. You could use a semaphore with an initial value of 1 and, around the file operating code, put two semaphore operations, the first to test and decrement the semaphore's value and the second to test and increment it. The first process to access the file would try to decrement the semaphore's value and it would succeed, the semaphore's value now being 0. This process can now go ahead and use the data file, but if another process wishing to use it now tries to decrement the semaphore's value it would fail as the result would be -1. That process will be suspended until the first process has finished with the data file. When the first process has finished with the data file it will increment the semaphore's value, making it 1 again. Now the waiting process can be woken and this time its attempt to increment the semaphore will succeed. System V IPC semaphore objects each describe a semaphore array and Linux uses the semid_ds data structure to represent this. All of the semid_ds data structures in the system are pointed at by the semary, a vector of pointers. There are sem_nsems in each semaphore array, each one described by a sem data structure pointed at by sem_base. All of the processes that are allowed to manipulate the semaphore array of a System V IPC semaphore object may make system calls that perform operations on them. The system call can specify many operations and each operation is described by three inputs: the semaphore index, the operation value and a set of flags. The semaphore index is an index into the semaphore array and the operation value is a numerical value that will be added to the current value of the semaphore. First Linux tests whether or not all of the operations would succeed.
An operation will succeed if the operation value added to the semaphore's current value would be greater than zero or if both the operation value and the semaphore's current value are zero. If any of the semaphore operations would fail Linux may suspend the process, but only if the operation flags have not requested that the system call be non-blocking. If the process is to be suspended then Linux must save the state of the semaphore operations to be performed and put the current process onto a wait queue. It does this by building
a sem_queue data structure on the stack and filling it out. The new sem_queue data structure is put at the end of this semaphore object's wait queue (using the sem_pending and sem_pending_last pointers). The current process is put on the wait queue in the sem_queue data structure (sleeper) and the scheduler is called to choose another process to run. If all of the semaphore operations would have succeeded and the current process does not need to be suspended, Linux goes ahead and applies the operations to the appropriate members of the semaphore array. Now Linux must check whether any waiting, suspended processes may now apply their semaphore operations. It looks at each member of the operations pending queue (sem_pending) in turn, testing to see if the semaphore operations will succeed this time. If they will, it removes the sem_queue data structure from the operations pending list and applies the semaphore operations to the semaphore array. It wakes up the sleeping process, making it available to be restarted the next time the scheduler runs. Linux keeps looking through the pending list from the start until there is a pass where no semaphore operations can be applied and so no more processes can be woken. There is a problem with semaphores: deadlocks. These occur when one process has altered a semaphore's value as it enters a critical region but then fails to leave the critical region because it crashed or was killed. Linux protects against this by maintaining lists of adjustments to the semaphore arrays. The idea is that when these adjustments are applied, the semaphores will be put back to the state that they were in before the process's set of semaphore operations was applied. These adjustments are kept in sem_undo data structures queued both on the semid_ds data structure and on the task_struct data structure for the processes using these semaphore arrays. Each individual semaphore operation may request that an adjustment be maintained.
Linux will maintain at most one sem_undo data structure per process for each semaphore array. If the requesting process does not have one, then one is created when it is needed. The new sem_undo data structure is queued both onto this process's task_struct data structure and onto the semaphore array's semid_ds data structure. As operations are applied to the semaphores in the semaphore array, the negation of the operation value is added to this semaphore's entry in the adjustment array of this process's sem_undo data structure. So, if the operation value is 2, then -2 is added to the adjustment entry for this semaphore. When processes are deleted, as they exit Linux works through their set of sem_undo data structures applying the adjustments to the semaphore arrays. If a semaphore set is deleted, the sem_undo data structures are left queued on the process's task_struct but the semaphore array identifier is made invalid. In this case the semaphore clean-up code simply discards the sem_undo data structure.
Shared Memory
Shared memory allows one or more processes to communicate via memory that appears in all of their virtual address spaces. The pages of the virtual memory are referenced by page table entries in each of the sharing processes' page tables. The memory does not have to be at the same address in all of the processes' virtual memory. As with all System V IPC objects, access to shared memory areas is controlled via keys and access rights checking. Once the memory is being shared, there are no checks on how the processes are using it. They must rely on other mechanisms, for example System V semaphores, to synchronize access to the memory.
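A user-space sketch of that combination: a shared memory segment guarded by a one-element System V semaphore set acting as the mutex described earlier. For brevity it runs in a single process; in a real system the peers would be separate processes sharing the same keys:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/sem.h>
#include <sys/shm.h>

union semun { int val; };   /* caller-defined, as semctl() requires on Linux */

/* semop() with a single operation on semaphore 0: -1 acquires, +1 releases. */
static int sem_change(int semid, int delta)
{
    struct sembuf op = { .sem_num = 0, .sem_op = (short)delta, .sem_flg = 0 };
    return semop(semid, &op, 1);
}

int shm_demo(char *out, size_t outlen)
{
    union semun arg = { .val = 1 };       /* initial value 1 = region free */
    int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    int semid = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);
    if (shmid < 0 || semid < 0 || semctl(semid, 0, SETVAL, arg) != 0)
        return -1;
    char *mem = shmat(shmid, NULL, 0);    /* let Linux pick the address */
    if (mem == (char *)-1)
        return -1;

    sem_change(semid, -1);                /* enter critical region */
    strcpy(mem, "shared data");
    sem_change(semid, +1);                /* leave critical region */

    snprintf(out, outlen, "%s", mem);
    shmdt(mem);                           /* detach, then remove both objects */
    shmctl(shmid, IPC_RMID, NULL);
    semctl(semid, 0, IPC_RMID, arg);
    return 0;
}
```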
Each newly created shared memory area is represented by a shmid_ds data structure. These are kept in the shm_segs vector. The shmid_ds data structure describes how big the area of shared memory is, how many processes are using it and information about how that shared memory is mapped into their address spaces. It is the creator of the shared memory that controls the access permissions to that memory and whether its key is public or private. If it has enough access rights it may also lock the shared memory into physical memory. Each process that wishes to share the memory must attach to that virtual memory via a system call. This creates a new vm_area_struct data structure describing the shared memory for this process. The process can choose where in its virtual address space the shared memory goes, or it can let Linux choose a free area large enough. The new vm_area_struct structure is put into the list of vm_area_struct structures pointed at by the shmid_ds. The vm_next_shared and vm_prev_shared pointers are used to link them together. The virtual memory is not actually created during the attach; it happens when the first process attempts to access it. The first time that a process accesses one of the pages of the shared virtual memory, a page fault will occur. When Linux fixes up that page fault it finds the vm_area_struct data structure describing it. This contains pointers to handler routines for this type of shared virtual memory. The shared memory page fault handling code looks in the list of page table entries for this shmid_ds to see if one exists for this page of the shared virtual memory. If it does not exist, it will allocate a physical page and create a page table entry for it. As well as going into the current process's page tables, this entry is saved in the shmid_ds.
This means that when the next process that attempts to access this memory gets a page fault, the shared memory fault handling code will use this newly created physical page for that process too. So, the first process that accesses a page of the shared memory causes it to be created and thereafter access by the other processes cause that page to be added into their virtual address spaces. When processes no longer wish to share the virtual memory, they detach from it. So long as other processes are still using the memory the detach only affects the current process.
Its vm_area_struct is removed from the shmid_ds data structure and deallocated. The current process's page tables are updated to invalidate the area of virtual memory that it used to share. When the last process sharing the memory detaches from it, the pages of the shared memory currently in physical memory are freed, as is the shmid_ds data structure for this shared memory. Further complications arise when shared virtual memory is not locked into physical memory. In this case the pages of the shared memory may be swapped out to the system's swap disk during periods of high memory usage.
CHAPTER 4
PROJECT: AUTOMATIC CAB SERVICE
Introduction
Nowadays the term "automatic" is used very commonly for systems in which a single IC performs different functions. An Automatic Cab Service works on the principle of a line follower robot, in which a predefined path is provided by the programmer and the cab follows that path.
COMPONENTS USED
RESISTOR
The flow of charge (or current) through any material encounters an opposing force, similar in some respects to mechanical friction. This opposing force is called the resistance of the material. It is measured in ohms. In some electric circuits resistance is deliberately introduced in the form of a resistor. Resistors are of the following types: wire wound resistors, carbon resistors and metal film resistors.
WIRE WOUND RESISTORS:
Wire wound resistors are made from a long wire (usually nickel-chromium) wound on a ceramic core. The longer the wire, the higher the resistance. So, depending on the value of resistor required in a circuit, the wire is cut and wound on a ceramic core. Such resistors are available in power ratings of 2 watts to several hundred watts and resistance values from 1 ohm to 100k ohms. Thus wire wound resistors are used for high currents.
CARBON RESISTORS:
Carbon resistors are divided into two types. Carbon composition resistors are made by mixing carbon grains with a binding material (glue) and moulding them in the form of rods. Wire leads are inserted at the two ends, after which an insulating material seals the resistor. These resistors are available in power ratings of 1/10, 1/8, 1/4, 1/2, 1 and 2 watts and values from 1 ohm to 20 ohms. Carbon film resistors are made by depositing a carbon film on a ceramic rod. They are cheaper than carbon composition resistors.
METAL FILM RESISTORS:
They are also called thin film resistors. They are made of a thin metal coating deposited on a cylindrical insulating support. Such resistors are free of the inductance effect that is common in wire wound resistors at high frequency.
VARIABLE RESISTORS:
A potentiometer is a resistor whose value can be set depending on the requirement. Potentiometers are widely used in electronic systems; examples are the volume, brightness and contrast controls of radio or TV sets.
Resistor architecture
COLOUR CODE:
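The colour-code chart itself did not survive the conversion to text, but the decoding rule for the common 4-band code can be sketched in C: the first two bands give significant digits and the third is a power-of-ten multiplier, with band values running 0-9 for black, brown, red, orange, yellow, green, blue, violet, grey and white.

```c
#include <assert.h>

/* Band values for the standard resistor colour code, 0 through 9. */
enum colour { BLACK, BROWN, RED, ORANGE, YELLOW,
              GREEN, BLUE, VIOLET, GREY, WHITE };

/* Returns the resistance in ohms for the three value bands:
   two significant digits followed by a power-of-ten multiplier. */
long resistor_value(enum colour b1, enum colour b2, enum colour mult)
{
    long v = b1 * 10 + b2;
    for (int i = 0; i < (int)mult; i++)
        v *= 10;                 /* apply the multiplier band */
    return v;
}
```

For example, brown-black-red decodes to 10 x 100 = 1000 ohms (1 k); a fourth band, not modelled here, gives the tolerance.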
  • 36. CAPACITORS: A capacitor can store charge, and its capacity to store charge is called capacitance. A capacitor consists of two conducting plates separated by an insulating material (known as the dielectric). The two plates are joined to two leads. The dielectric could be air, mica, paper, ceramic, polyester, polystyrene, etc. The types of capacitors are: ELECTROLYTIC CAPACITOR: Electrolytic capacitors have an electrolyte as the dielectric. When such a capacitor is charged, chemical changes take place in the electrolyte; if one plate is charged positively, the same plate must always be charged positively thereafter. We call such capacitors polarized. The leads of an electrolytic capacitor are normally marked positive or negative on the can. Non-electrolytic capacitors have a dielectric material such as paper, mica, or ceramic. MICA CAPACITOR: This is a sandwich of several thin metal plates separated by thin sheets of mica. Alternate plates are connected together and leads are attached for outside connections. The whole assembly is encased in a plastic capsule. Such capacitors have small capacitance values (50 to 500 pF) and high working voltages (5000 V and above). CERAMIC CAPACITOR: Such capacitors have a disc-shaped or hollow tubular dielectric made of a ceramic material such as titanium oxide or barium titanate. A thin coating of silver compound is deposited on both sides of the dielectric disc, and these coatings act as the capacitor plates. These capacitors have very low leakage current, and their breakdown voltage is very high.
  • 37. DC Motor A DC motor is an electromechanical device that converts electrical energy into mechanical energy that can be used to do many kinds of useful work. It can produce mechanical movements such as moving the tray of a CD/DVD drive in and out. DC motors come in various ratings, such as 6 V and 12 V. A DC motor has two wires or pins of opposite polarity; when it is connected to a power supply, the shaft rotates. You can reverse the direction of rotation by reversing the polarity of the input. DC MOTOR Motor Driver IC L293D This chip is designed to control two DC motors. There are two INPUT and two OUTPUT pins for each motor. The connections are as follows.
  • 38. Motor Controller Using L293D The behaviour of the motor for the various input conditions is as follows:

A    | B    | Behaviour
Low  | Low  | Stop
Low  | High | Clockwise
High | Low  | Anticlockwise
High | High | Stop

VOLTAGE REGULATOR A power supply which provides a constant output voltage irrespective of variations in the input voltage or load current is called a regulated power supply; the circuit that achieves this is called a voltage regulator. The function of a voltage regulator is to provide a stable DC voltage for powering other electronic circuits. A voltage regulator should be capable of providing substantial output current. In short, a voltage regulator is an electrical regulator designed to automatically maintain a constant voltage level.
  • 39. Features: • Output current up to 1 A. • Output voltages of 5, 6, 8, 9, 10, 12, 15, 18, and 24 V. • Thermal overload protection. • Short circuit protection. • Output transistor safe operating area protection. In the three-terminal voltage regulator category we have positive voltage regulators and negative voltage regulators. The 78XX series is a series of positive voltage regulators and the 79XX series is a series of negative voltage regulators. 78XX Series IC: The 78XX series are three-terminal positive voltage regulators. Here XX indicates the output voltage. In our project the output voltage is a regulated 5 volts. A 5 V voltage regulator (7805) is used to ensure that no more than 5 V is delivered to the Digilab board regardless of the voltage present at the J12 connector (provided that voltage is less than 12 VDC). The regulator holds its output at 5 VDC regardless of the input voltage; the excess voltage is dropped across the regulator and dissipated as heat through its body. If a DC supply of greater than 12 V is used, excessive heat will be generated, and the board may be damaged. If a DC supply of less than 5 V is used, insufficient voltage will be present at the regulator's output.
  • 41. Schematic of voltage regulator If a power supply provides a voltage higher than 7 or 8 volts, the regulator must dissipate significant heat. The "fin" on the regulator body (the side that protrudes upward beyond the main body of the part) helps to dissipate excess heat more efficiently. If the board requires higher currents (due to the use of peripheral devices or larger breadboard circuits), then the regulator may need to dissipate more heat. In this case, the regulator can be secured to the circuit board by fastening it with a screw and nut (see below). By securing the regulator tightly to the circuit board, excess heat can be passed to the board and then radiated away. CRYSTAL OSCILLATOR A crystal oscillator is an electronic circuit that uses the mechanical resonance of a vibrating crystal of piezoelectric material to create an electrical signal with a very precise frequency. This frequency is commonly used to keep track of time (as in quartz wristwatches), to provide a stable clock signal for digital integrated circuits, and to stabilize frequencies for radio transmitters/receivers. A crystal is a solid in which the constituent atoms, molecules, or ions are packed in a regularly ordered, repeating pattern extending in all three spatial dimensions. Almost any object made of an elastic material could be used like a crystal, with appropriate transducers, since all objects have natural resonant frequencies of vibration. The resonant frequency depends on size, shape, elasticity, and the speed of sound in the material. When a crystal of quartz is properly cut and mounted, it can be made to distort in an electric field by applying a voltage to an electrode near or on the crystal. This property is known as piezoelectricity. When the field is removed, the quartz will generate an electric field as it returns to its previous shape, and this can generate a voltage. The result is that a quartz crystal behaves like a circuit composed of an inductor, capacitor and resistor, with a precise resonant frequency.
APPLICATIONS
  • 42. Used in consumer devices such as wristwatches, clocks, radios, computers, and cellphones. Quartz crystals are also found inside test and measurement equipment, such as counters, signal generators, and oscilloscopes. Symbol of crystal oscillator LCD Display An LCD (Liquid Crystal Display) is a low cost display. It is easy to interface with a microcontroller because of its embedded controller. This controller is standard across many displays. 16 x 2 LCD Display 8-BIT MICROCONTROLLER AT89S52 FEATURES • 8K Bytes of In-System Programmable (ISP) Flash Memory • 4.0 V to 5.5 V Operating Range • Fully Static Operation: 0 Hz to 33 MHz • Three-level Program Memory Lock
  • 43. • 256 x 8-bit Internal RAM • 32 Programmable I/O Lines • Three 16-bit Timer/Counters • Eight Interrupt Sources • Full Duplex UART Serial Channel • Low-power Idle and Power-down Modes • Interrupt Recovery from Power-down Mode • Watchdog Timer • Dual Data Pointer • Power-off Flag • Fast Programming Time • Flexible ISP Programming (Byte and Page Mode) DESCRIPTION The AT89S52 is a low-power, high-performance CMOS 8-bit microcontroller with 8K bytes of in-system programmable Flash memory. The device is manufactured using Atmel's high-density nonvolatile memory technology and is compatible with the industry-standard 80C51 instruction set and pinout. The on-chip Flash allows the program memory to be reprogrammed in-system or by a conventional nonvolatile memory programmer. By combining a versatile 8-bit CPU with in-system programmable Flash on a monolithic chip, the Atmel AT89S52 is a powerful microcontroller which provides a highly flexible and cost-effective solution to many embedded control applications. The AT89S52 provides the following standard features: 8K bytes of Flash, 256 bytes of RAM, 32 I/O lines, a watchdog timer, two data pointers, three 16-bit timer/counters, a six-vector two-level interrupt architecture, a full duplex serial port, an on-chip oscillator, and clock circuitry.
  • 44. In addition, the AT89S52 is designed with static logic for operation down to zero frequency and supports two software-selectable power saving modes. PIN CONFIGURATION 5.1 Pin configuration of the 89S52 microcontroller BLOCK DIAGRAM
  • 45. Block diagram of the 89S52 microcontroller APPENDIX A Instruction Set • ACALL - Absolute Call • ADD, ADDC - Add Accumulator (With Carry) • AJMP - Absolute Jump
  • 46. • ANL - Bitwise AND • CJNE - Compare and Jump if Not Equal • CLR - Clear Register • CPL - Complement Register • DA - Decimal Adjust • DEC - Decrement Register • DIV - Divide Accumulator by B • DJNZ - Decrement Register and Jump if Not Zero • INC - Increment Register • JB - Jump if Bit Set • JBC - Jump if Bit Set and Clear Bit • JC - Jump if Carry Set • JMP - Jump to Address • JNB - Jump if Bit Not Set • JNC - Jump if Carry Not Set • JNZ - Jump if Accumulator Not Zero • JZ - Jump if Accumulator Zero • LCALL - Long Call • LJMP - Long Jump • MOV - Move Memory • MOVC - Move Code Memory • MOVX - Move Extended Memory • MUL - Multiply Accumulator by B • NOP - No Operation • ORL - Bitwise OR • POP - Pop Value From Stack • PUSH - Push Value Onto Stack • RET - Return From Subroutine • RETI - Return From Interrupt • RL - Rotate Accumulator Left • RLC - Rotate Accumulator Left Through Carry • RR - Rotate Accumulator Right • RRC - Rotate Accumulator Right Through Carry • SETB - Set Bit • SJMP - Short Jump • SUBB - Subtract From Accumulator With Borrow • SWAP - Swap Accumulator Nibbles • XCH - Exchange Bytes • XCHD - Exchange Digits • XRL - Bitwise Exclusive OR • Undefined - Undefined Instruction
  • 47. An "addressing mode" refers to how you are addressing a given memory location. In summary, the addressing modes are as follows, with an example of each: Immediate Addressing MOV A,#20h Direct Addressing MOV A,30h Indirect Addressing MOV A,@R0 External Direct MOVX A,@DPTR Code Indirect MOVC A,@A+DPTR APPENDIX B SOFTWARE PROGRAMMING OF THE MICROCONTROLLER #include<reg51.h> #define lcd P0 sbit rs=P1^4; sbit rw=P1^5; sbit en=P1^6; sbit ex0=P3^2; sbit sen=P3^4; sbit bs1_a=P2^0; sbit bs1_b=P2^1; sbit bs2_a=P2^2; sbit bs2_b=P2^3; sbit gt1_a=P2^4; sbit gt1_b=P2^5; sbit gt2_a=P2^6; sbit gt2_b=P2^7; void lcd_cmd(char); void lcd_data(char); void lcd_init(); void delay(unsigned int); void lcd_msg(char*);
  • 48. void forward(char); void stop(); void gate_open(); void gate_close(); void gate_stop(); void laxmi_ngr(char); void geeta_colny(char); void krishna_ngr(char); void noida15(char); void greater_noida(); void cab_terminate(); void ext0_isr() interrupt 0 { IE=0x00; gate_open(); delay(50); gate_stop(); delay(100); gate_close(); delay(50); gate_stop(); IE=0x81; } void main() { lcd=0x00; rs=0; rw=0; en=0; ex0=1; IE=0x81; while(1) {
  • 50. } void lcd_init() { lcd_cmd(0x01); delay(5); lcd_cmd(0x06); delay(5); lcd_cmd(0x0e); delay(5); lcd_cmd(0x38); delay(5); lcd_cmd(0x80); delay(5); } void lcd_msg(char *m) { while(*m!='\0') { lcd_data(*m); delay(10); m++; } } void delay(unsigned int x) { char ch; unsigned int y,z; ch=P2; for(y=0;y<=x;y++) for(z=0;z<1000;z++) { if(sen==1) P2=0x00; else
  • 51. P2=ch; } } void forward(char a) { if(a==0) { bs1_a=1; bs1_b=0; bs2_a=1; bs2_b=0; } else { bs1_a=0; bs1_b=1; bs2_a=0; bs2_b=1; } } void stop() { bs1_a=0; bs1_b=0; bs2_a=0; bs2_b=0; } void gate_open() { gt1_a=0; gt1_b=1; gt2_a=0; gt2_b=1;
  • 52. } void gate_close() { gt1_a=1; gt1_b=0; gt2_a=1; gt2_b=0; } void gate_stop() { gt1_a=0; gt1_b=0; gt2_a=0; gt2_b=0; } void laxmi_ngr(char a) { lcd_cmd(0x01); delay(5); lcd_msg("LAXMI NAGAR"); gate_open(); delay(50); gate_stop(); delay(100); gate_close(); delay(50); gate_stop(); delay(10); forward(a); delay(200); stop(); delay(200); }
  • 53. void geeta_colny(char a) { lcd_cmd(0x01); delay(5); lcd_msg("GEETA COLONY"); gate_open(); delay(50); gate_stop(); delay(100); gate_close(); delay(50); gate_stop(); delay(10); forward(a); delay(200); stop(); delay(200); } void krishna_ngr(char a) { lcd_cmd(0x01); delay(5); lcd_msg("KRISHNA NAGAR"); gate_open(); delay(50); gate_stop(); delay(100); gate_close(); delay(50); gate_stop(); delay(10); forward(a); delay(200); stop(); delay(200); }
  • 54. void noida15(char a) { lcd_cmd(0x01); delay(5); lcd_msg("NOIDA SEC. 15"); gate_open(); delay(50); gate_stop(); delay(100); gate_close(); delay(50); gate_stop(); delay(10); forward(a); delay(200); stop(); delay(200); } void greater_noida() { lcd_cmd(0x01); delay(5); lcd_msg("GREATER NOIDA"); } void cab_terminate() { lcd_init(); delay(5); lcd_msg("CAB TERMINATE"); lcd_cmd(0xc0); delay(5); lcd_msg("HERE"); delay(100); gate_open(); delay(50);
  • 55. gate_stop(); delay(100); gate_close(); delay(50); gate_stop(); delay(10); } REFERENCES • C Programming by Yashavant Kanetkar • The 8051 Microcontroller and Embedded Systems by Muhammad Ali Mazidi, Janice Gillispie Mazidi, and Rolin D. McKinlay