Distributed Systems Course
Teaching material based on Distributed Systems: Concepts and Design, Edition 3, Addison-Wesley 2001.
Copyright © George Coulouris, Jean Dollimore, Tim Kindberg 2001
email: authors@cdk2.net
This material is made available for private study and for direct use by individual teachers. It may not be included in any product or employed in any service without the written permission of the authors.
Chapter 6: Operating System Support
6.1 Introduction
6.2 The operating system layer
6.4 Processes and threads
6.5 Communication and invocation
6.6 Operating system architecture
Learning objectives 
 Know what a modern operating system does to support 
distributed applications and middleware 
– Definition of network OS 
– Definition of distributed OS 
 Understand the relevant abstractions and techniques, 
focussing on: 
– processes, threads, ports and support for invocation mechanisms. 
 Understand the options for operating system architecture 
– monolithic and micro-kernels 
 If time: 
– Lightweight RPC 
System layers

Figure 6.1: on each node (Node 1 running OS1, Node 2 running OS2), applications and services run over middleware, which runs over the OS (kernel, libraries & servers); the OS provides processes, threads, communication, ... over the computer & network hardware.

Figure 2.1: Software and hardware service layers in distributed systems – applications and services; middleware; operating system (middleware and OS together form the platform); computer and network hardware.
Middleware and the Operating System 
 Middleware implements abstractions that support network-wide 
programming. Examples: 
 RPC and RMI (Sun RPC, Corba, Java RMI) 
 event distribution and filtering (Corba Event Notification, Elvin) 
 resource discovery for mobile and ubiquitous computing 
 support for multimedia streaming 
 Traditional OSs (e.g. early Unix, Windows 3.0)
– simplify, protect and optimize the use of local resources
 Network OSs (e.g. Mach, modern UNIX, Windows NT)
– do the same, but also support a wide range of communication standards and
enable remote processes to access (some) local resources (e.g. files).
What is a distributed OS? 
• Presents users (and applications) with an integrated computing 
platform that hides the individual computers. 
• Has control over all of the nodes (computers) in the network and 
allocates their resources to tasks without user involvement. 
• In a distributed OS, users don't know (or care) where their programs are running.
• Examples: 
• Cluster computer systems 
• V system, Sprite, Globe OS 
• WebOS (?)
The support required by middleware and distributed applications 
OS manages the basic resources of computer systems: 
– processing, memory, persistent storage and communication. 
It is the task of an operating system to: 
– raise the programming interface for these resources to a more useful 
level: 
 By providing abstractions of the basic resources such as: 
processes, unlimited virtual memory, files, communication channels 
 Protection of the resources used by applications 
 Concurrent processing to enable applications to complete their work with 
minimum interference from other applications 
– provide the resources needed for (distributed) services and applications 
to complete their task: 
 Communication - network access provided 
 Processing - processors scheduled at the relevant computers 
Core OS functionality

Figure 6.2: the core OS components – process manager, thread manager, memory manager, communication manager and supervisor.
Process address space

Figure 6.3: a process address space – the text region at address 0, the heap above it growing upwards, the stack at the top (address 2^N) growing downwards, and auxiliary regions in between.
 Regions can be shared 
– kernel code 
– libraries 
– shared data & communication 
– copy-on-write 
 Files can be mapped 
– Mach, some versions of UNIX 
 UNIX fork() is expensive 
– must copy process's address space
Copy-on-write – a convenient optimization

Figure 6.4: region RB in process B's address space is copied from region RA in process A's. a) Before a write, A's and B's page tables map both regions to the same shared frames; b) after a write, the kernel copies the affected page so each process has its own frame.
Threads concept and implementation

A process contains one or more thread activations, each with its own activation stack (parameters, local variables). All threads in the process share the heap (dynamic storage, objects, global variables), the 'text' (program code) and system-provided resources (sockets, windows, open files).
Client and server with threads

Figure 6.5: in the client, Thread 1 generates results while Thread 2 makes requests to the server. In the server, an input-output thread performs receipt & queuing of incoming requests, which N threads then serve – the 'worker pool' architecture. See Figure 4.6 for an example of this architecture programmed in Java.
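The worker-pool structure can be sketched with the standard library's blocking queue standing in for the request queue. This is an illustrative sketch, not Figure 4.6's code; the class, thread names and String request type are invented for the example.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Worker-pool sketch: a receipt-and-queuing thread puts requests on a queue;
// N worker threads loop, taking and serving them.
public class WorkerPool {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(16);
        int nThreads = 3;
        for (int i = 0; i < nThreads; i++) {           // the pool of N workers
            new Thread(() -> {
                try {
                    while (true) {
                        String request = queue.take(); // blocks until a request arrives
                        System.out.println(Thread.currentThread().getName()
                                + " handles " + request);
                    }
                } catch (InterruptedException e) { }   // pool shut down
            }, "worker-" + i).start();
        }
        // The input-output thread would call put() once per received request:
        queue.put("request-1");
        queue.put("request-2");
        Thread.sleep(100);                             // let workers drain the queue
        System.exit(0);
    }
}
```

The queue decouples receipt rate from service rate, which is the point of the architecture: bursts are absorbed by the queue rather than by creating unbounded threads.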
Alternative server threading architectures

Figure 6.6: in each case a server process holds the remote objects and performs I/O.
a. Thread-per-request (workers)
b. Thread-per-connection (per-connection threads)
c. Thread-per-object (per-object threads)

– Implemented by the server-side ORB in CORBA
(a) would be useful for a UDP-based service, e.g. NTP
(b) is the most commonly used – matches the TCP connection model
(c) is used where the service is encapsulated as an object. E.g. there could be multiple shared whiteboards with one thread each. Each object has only one thread, avoiding the need for thread synchronization within objects.
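Architecture (b), thread-per-connection, is easy to see in code: the accept loop spawns one thread per TCP connection, which then serves every request on that connection. A minimal echo-server sketch (the port and the echo behaviour are invented for the example):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class PerConnectionServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(8080)) {
            while (true) {
                Socket conn = listener.accept();         // one thread per connection
                new Thread(() -> handle(conn)).start();
            }
        }
    }

    // Serves all requests arriving on one connection, then exits.
    static void handle(Socket conn) {
        try (Socket c = conn;
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(c.getInputStream()));
             PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null)
                out.println("echo: " + line);            // one reply per request
        } catch (IOException e) {
            // connection dropped; this thread simply terminates
        }
    }
}
```

Because the thread lives as long as the connection, per-request dispatch cost is low, but a slow client ties up a whole thread – one reason (a) or a worker pool may be preferred under heavy load.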
Java thread constructor and management methods

Figure 6.8: Methods of objects that inherit from class Thread
• Thread(ThreadGroup group, Runnable target, String name) 
• Creates a new thread in the SUSPENDED state, which will belong to group and be 
identified as name; the thread will execute the run() method of target. 
• setPriority(int newPriority), getPriority() 
• Set and return the thread’s priority. 
• run() 
• A thread executes the run() method of its target object, if it has one, and otherwise its 
own run() method (Thread implements Runnable). 
• start() 
• Change the state of the thread from SUSPENDED to RUNNABLE. 
• sleep(int millisecs) 
• Cause the thread to enter the SUSPENDED state for the specified time. 
• yield() 
• Enter the READY state and invoke the scheduler. 
• destroy() 
• Destroy the thread.
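A minimal sketch of the constructor and lifecycle methods above; the task body and thread name are invented for the example (join() is covered on the next slide but is used here to wait for termination).

```java
public class StartJoinDemo {
    public static void main(String[] args) throws InterruptedException {
        // Constructor form from Figure 6.8: Thread(group, target, name).
        Runnable target = () ->
                System.out.println(Thread.currentThread().getName() + " running");
        Thread worker = new Thread(null, target, "worker-1");

        worker.setPriority(Thread.NORM_PRIORITY); // setPriority()/getPriority()
        worker.start();                           // SUSPENDED -> RUNNABLE; runs target.run()
        worker.join();                            // wait until the thread terminates
        System.out.println("terminated: " + !worker.isAlive());
    }
}
```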
Java thread synchronization calls

Figure 6.9
See Figure 12.17 for an example of the use of object.wait() and object.notifyAll() in a transaction implementation with locks.
• thread.join(int millisecs)
• Blocks the calling thread for up to the specified time until thread has terminated.
• thread.interrupt()
• Interrupts thread: causes it to return from a blocking method call such as sleep().
• object.wait(long millisecs, int nanosecs)
• Blocks the calling thread until a call made to notify() or notifyAll() on object wakes the thread, or the thread is interrupted, or the specified time has elapsed.
• object.notify(), object.notifyAll()
• Wakes, respectively, one or all of any threads that have called wait() on object.

object.wait() and object.notify() are very similar to the semaphore operations. E.g. a worker thread in Figure 6.5 would use queue.wait() to wait for incoming requests.
synchronized methods (and code blocks) implement the monitor abstraction. The operations within a synchronized method are performed atomically with respect to other synchronized methods of the same object. synchronized should be used for any methods that update the state of an object in a threaded environment.
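The monitor pattern described above can be sketched as a request queue of the kind a Figure 6.5 worker thread would block on. The class and method names are invented for illustration; the wait()-in-a-loop and notifyAll() usage is the standard monitor idiom.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Monitor-style request queue: synchronized methods give mutual exclusion,
// wait()/notifyAll() give the condition synchronization.
class RequestQueue {
    private final Queue<String> requests = new ArrayDeque<>();

    // Called by the input-output thread when a request arrives.
    public synchronized void addRequest(String r) {
        requests.add(r);
        notifyAll();                 // wake any worker blocked in takeRequest()
    }

    // Called by worker threads; blocks while the queue is empty.
    public synchronized String takeRequest() throws InterruptedException {
        while (requests.isEmpty())   // re-check after every wake-up
            wait();                  // releases the monitor lock while blocked
        return requests.remove();
    }
}

public class RequestQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        RequestQueue queue = new RequestQueue();
        Thread worker = new Thread(() -> {
            try {
                System.out.println("handled: " + queue.takeRequest());
            } catch (InterruptedException e) { }
        });
        worker.start();
        queue.addRequest("request-1"); // wakes the blocked worker
        worker.join();
    }
}
```

Note the `while` (not `if`) around wait(): a woken thread must re-test the condition, since another worker may have taken the request first.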
Threads implementation
Threads can be implemented:
– in the OS kernel (Win NT, Solaris, Mach)
– at user level (e.g. by a thread library: C threads, pthreads), or in the language (Ada, Java).
User-level implementation:
+ lightweight – no system calls
+ modifiable scheduler
+ low cost enables more threads to be employed
- not pre-emptive
- cannot exploit multiple processors
- a page fault blocks all threads
– Java can be implemented either way
– hybrid approaches can gain some advantages of both
 user-level hints to kernel scheduler
 hierarchic threads (Solaris)
 event-based (SPIN, FastThreads)
Support for communication and invocation
 The performance of RPC and RMI mechanisms is critical for effective distributed systems.
– Typical times for a 'null procedure call':
– Local procedure call: < 1 microsecond
– Remote procedure call: ~ 10 milliseconds – 10,000 times slower!
– 'network time' (involving about 100 bytes transferred at 100 megabits/sec.) accounts for only .01 millisecond; the remaining delays must be in OS and middleware – latency, not communication time.
 Factors affecting RPC/RMI performance
– marshalling/unmarshalling + operation despatch at the server
– data copying: application -> kernel space -> communication buffers
– thread scheduling and context switching: including kernel entry
– protocol processing: for each protocol layer
– network access delays: connection setup, network latency
Implementation of invocation mechanisms
 Most invocation middleware (Corba, Java RMI, HTTP) is implemented over TCP
– For universal availability, unlimited message size and reliable transfer; see section 4.4 for further discussion of the reasons.
– Sun RPC (used in NFS) is implemented over both UDP and TCP and generally works faster over UDP
 Research-based systems have implemented much more efficient invocation protocols, e.g.:
– Firefly RPC (see www.cdk3.net/oss)
– Amoeba's doOperation, getRequest, sendReply primitives (www.cdk3.net/oss)
– LRPC [Bershad et al. 1990], described on pp. 237–9.
 Concurrent and asynchronous invocations
– middleware or application doesn't block waiting for reply to each invocation
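Concurrent invocation without blocking on each reply can be sketched with executor futures. The invoke() body here merely simulates a slow remote call; a real stub would perform the RPC.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsyncInvoke {
    // Stand-in for a slow remote invocation (illustrative, not a real stub).
    static String invoke(String request) throws InterruptedException {
        Thread.sleep(50);                 // pretend network + server time
        return "reply-to-" + request;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<String>> replies = new ArrayList<>();
        for (String r : List.of("a", "b", "c", "d"))
            replies.add(pool.submit(() -> invoke(r))); // all four calls in flight
        for (Future<String> f : replies)               // collect replies later
            System.out.println(f.get());
        pool.shutdown();
    }
}
```

The four simulated invocations overlap, so total latency approaches that of one call rather than the sum of four – the payoff of not blocking per invocation.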
Invocations between address spaces

Figure 6.11:
(a) System call – a thread crosses the protection domain boundary between user and kernel; control transfer via trap instruction.
(b) RPC/RMI within one computer – Thread 1 in User 1's address space invokes Thread 2 in User 2's, via the kernel; control transfer via privileged instructions.
(c) RPC/RMI between computers – Thread 1 (User 1, Kernel 1) invokes Thread 2 (User 2, Kernel 2) across the network.
Bershad's LRPC
 Uses shared memory for interprocess communication
– while maintaining protection of the two processes
– arguments copied only once (versus four times for conventional RPC)
 Client threads can execute server code
– via protected entry points only (uses capabilities)
 Up to 3 x faster for local invocations
A lightweight remote procedure call

Figure 6.13: 1. the client's user stub copies the arguments onto a shared argument stack (the A stack); 2. it traps to the kernel; 3. the kernel makes an upcall into the server stub; 4. the server executes the procedure and copies the results onto the A stack; 5. return to the client via a trap to the kernel.
Monolithic kernel and microkernel

Figure 6.15: a monolithic kernel executes the servers (S1, S2, S3, S4, ...) inside the kernel; a microkernel runs them outside the kernel as separate user-level processes.

Figure 6.16: The microkernel supports middleware via subsystems – language support subsystems, an OS emulation subsystem and others run above the microkernel, which runs on the hardware; middleware runs above the subsystems.
Advantages and disadvantages of microkernel
+ flexibility and extensibility
• services can be added, modified and debugged
• small kernel -> fewer bugs
• protection of services and resources is still maintained
- service invocation expensive
• unless LRPC is used
• extra system calls by services for access to protected resources
Relevant topics not covered
 Protection of OS resources (Section 6.3)
– Control of access to distributed resources is covered in Chapter 7 - Security
 Mach OS case study (Chapter 18)
– Halfway to a distributed OS
– The communication model is of particular interest. It includes:
 ports that can migrate between computers and processes
 support for RPC
 typed messages
 access control to ports
 open implementation of network communication (drivers outside the kernel)
 Thread programming (Section 6.4, and many other sources)
– e.g. Doug Lea's book Concurrent Programming in Java (Addison Wesley 1996)
Summary 
 The OS provides local support for the implementation of 
distributed applications and middleware: 
– Manages and protects system resources (memory, processing, communication) 
– Provides relevant local abstractions: 
 files, processes 
 threads, communication ports 
 Middleware provides general-purpose distributed abstractions 
– RPC, DSM, event notification, streaming 
 Invocation performance is important 
– it can be optimized, E.g. Firefly RPC, LRPC 
 Microkernel architecture for flexibility 
– The KISS principle ('Keep it simple – stupid!') 
 has resulted in migration of many functions out of the OS 

Mais conteúdo relacionado

Mais procurados

distributed shared memory
 distributed shared memory distributed shared memory
distributed shared memory
Ashish Kumar
 
Distributed Systems
Distributed SystemsDistributed Systems
Distributed Systems
Rupsee
 
resource management
  resource management  resource management
resource management
Ashish Kumar
 
Chapter 14 replication
Chapter 14 replicationChapter 14 replication
Chapter 14 replication
AbDul ThaYyal
 
Chapter 4 a interprocess communication
Chapter 4 a interprocess communicationChapter 4 a interprocess communication
Chapter 4 a interprocess communication
AbDul ThaYyal
 

Mais procurados (20)

distributed shared memory
 distributed shared memory distributed shared memory
distributed shared memory
 
Distributed Systems
Distributed SystemsDistributed Systems
Distributed Systems
 
Naming in Distributed System
Naming in Distributed SystemNaming in Distributed System
Naming in Distributed System
 
Physical and Logical Clocks
Physical and Logical ClocksPhysical and Logical Clocks
Physical and Logical Clocks
 
Unit 3 cs6601 Distributed Systems
Unit 3 cs6601 Distributed SystemsUnit 3 cs6601 Distributed Systems
Unit 3 cs6601 Distributed Systems
 
Distributed Systems Naming
Distributed Systems NamingDistributed Systems Naming
Distributed Systems Naming
 
Distributed Operating System
Distributed Operating SystemDistributed Operating System
Distributed Operating System
 
Distributed file system
Distributed file systemDistributed file system
Distributed file system
 
Clock synchronization in distributed system
Clock synchronization in distributed systemClock synchronization in distributed system
Clock synchronization in distributed system
 
Naming in Distributed Systems
Naming in Distributed SystemsNaming in Distributed Systems
Naming in Distributed Systems
 
Coda file system
Coda file systemCoda file system
Coda file system
 
File models and file accessing models
File models and file accessing modelsFile models and file accessing models
File models and file accessing models
 
resource management
  resource management  resource management
resource management
 
Chapter 14 replication
Chapter 14 replicationChapter 14 replication
Chapter 14 replication
 
Chapter 4 a interprocess communication
Chapter 4 a interprocess communicationChapter 4 a interprocess communication
Chapter 4 a interprocess communication
 
6.distributed shared memory
6.distributed shared memory6.distributed shared memory
6.distributed shared memory
 
2. Distributed Systems Hardware & Software concepts
2. Distributed Systems Hardware & Software concepts2. Distributed Systems Hardware & Software concepts
2. Distributed Systems Hardware & Software concepts
 
Process Management-Process Migration
Process Management-Process MigrationProcess Management-Process Migration
Process Management-Process Migration
 
Quality of Service
Quality of ServiceQuality of Service
Quality of Service
 
Message and Stream Oriented Communication
Message and Stream Oriented CommunicationMessage and Stream Oriented Communication
Message and Stream Oriented Communication
 

Destaque

Unit 1 architecture of distributed systems
Unit 1 architecture of distributed systemsUnit 1 architecture of distributed systems
Unit 1 architecture of distributed systems
karan2190
 
Chapter 1 characterisation of distributed systems
Chapter 1 characterisation of distributed systemsChapter 1 characterisation of distributed systems
Chapter 1 characterisation of distributed systems
AbDul ThaYyal
 
Report-An Expert System for Car Failure Diagnosis-Report
Report-An Expert System for Car Failure Diagnosis-ReportReport-An Expert System for Car Failure Diagnosis-Report
Report-An Expert System for Car Failure Diagnosis-Report
Viralkumar Jayswal
 
4.file service architecture (1)
4.file service architecture (1)4.file service architecture (1)
4.file service architecture (1)
AbDul ThaYyal
 
Chapter 2 system models
Chapter 2 system modelsChapter 2 system models
Chapter 2 system models
AbDul ThaYyal
 
Stij5014 distributed systems
Stij5014 distributed systemsStij5014 distributed systems
Stij5014 distributed systems
lara_ays
 

Destaque (20)

Chapter 5
Chapter 5Chapter 5
Chapter 5
 
Distributed system notes unit I
Distributed system notes unit IDistributed system notes unit I
Distributed system notes unit I
 
Unit 1 architecture of distributed systems
Unit 1 architecture of distributed systemsUnit 1 architecture of distributed systems
Unit 1 architecture of distributed systems
 
Java RMI Detailed Tutorial
Java RMI Detailed TutorialJava RMI Detailed Tutorial
Java RMI Detailed Tutorial
 
Chapter 11b
Chapter 11bChapter 11b
Chapter 11b
 
Remote Method Invocation
Remote Method InvocationRemote Method Invocation
Remote Method Invocation
 
Chapter 1 characterisation of distributed systems
Chapter 1 characterisation of distributed systemsChapter 1 characterisation of distributed systems
Chapter 1 characterisation of distributed systems
 
Report-An Expert System for Car Failure Diagnosis-Report
Report-An Expert System for Car Failure Diagnosis-ReportReport-An Expert System for Car Failure Diagnosis-Report
Report-An Expert System for Car Failure Diagnosis-Report
 
Chapter 1 slides
Chapter 1 slidesChapter 1 slides
Chapter 1 slides
 
4.file service architecture (1)
4.file service architecture (1)4.file service architecture (1)
4.file service architecture (1)
 
Chapter 9 names
Chapter 9 namesChapter 9 names
Chapter 9 names
 
Chapter 3 slides (Distributed Systems)
Chapter 3 slides (Distributed Systems)Chapter 3 slides (Distributed Systems)
Chapter 3 slides (Distributed Systems)
 
2. microkernel new
2. microkernel new2. microkernel new
2. microkernel new
 
Chapter 17 corba
Chapter 17 corbaChapter 17 corba
Chapter 17 corba
 
Chapter 1 slides
Chapter 1 slidesChapter 1 slides
Chapter 1 slides
 
3. challenges
3. challenges3. challenges
3. challenges
 
08 Operating System Support
08  Operating  System  Support08  Operating  System  Support
08 Operating System Support
 
Chapter 2 system models
Chapter 2 system modelsChapter 2 system models
Chapter 2 system models
 
Remote Method Invocation (Java RMI)
Remote Method Invocation (Java RMI)Remote Method Invocation (Java RMI)
Remote Method Invocation (Java RMI)
 
Stij5014 distributed systems
Stij5014 distributed systemsStij5014 distributed systems
Stij5014 distributed systems
 

Semelhante a Chapter 6 os

Processes and Threads in Windows Vista
Processes and Threads in Windows VistaProcesses and Threads in Windows Vista
Processes and Threads in Windows Vista
Trinh Phuc Tho
 
Operating System 4 1193308760782240 2
Operating System 4 1193308760782240 2Operating System 4 1193308760782240 2
Operating System 4 1193308760782240 2
mona_hakmy
 
Operating System 4
Operating System 4Operating System 4
Operating System 4
tech2click
 
OS Module-2.pptx
OS Module-2.pptxOS Module-2.pptx
OS Module-2.pptx
bleh23
 
WEEK07operatingsystemdepartmentofsoftwareengineering.pptx
WEEK07operatingsystemdepartmentofsoftwareengineering.pptxWEEK07operatingsystemdepartmentofsoftwareengineering.pptx
WEEK07operatingsystemdepartmentofsoftwareengineering.pptx
babayaga920391
 
Introduction to NetBSD kernel
Introduction to NetBSD kernelIntroduction to NetBSD kernel
Introduction to NetBSD kernel
Mahendra M
 
Introto netthreads-090906214344-phpapp01
Introto netthreads-090906214344-phpapp01Introto netthreads-090906214344-phpapp01
Introto netthreads-090906214344-phpapp01
Aravindharamanan S
 

Semelhante a Chapter 6 os (20)

Os
OsOs
Os
 
Processes and Threads in Windows Vista
Processes and Threads in Windows VistaProcesses and Threads in Windows Vista
Processes and Threads in Windows Vista
 
Ch04 threads
Ch04 threadsCh04 threads
Ch04 threads
 
Operating System 4 1193308760782240 2
Operating System 4 1193308760782240 2Operating System 4 1193308760782240 2
Operating System 4 1193308760782240 2
 
Operating System 4
Operating System 4Operating System 4
Operating System 4
 
Threading.pptx
Threading.pptxThreading.pptx
Threading.pptx
 
OS Module-2.pptx
OS Module-2.pptxOS Module-2.pptx
OS Module-2.pptx
 
Distributed computing
Distributed computingDistributed computing
Distributed computing
 
CH04.pdf
CH04.pdfCH04.pdf
CH04.pdf
 
Chorus - Distributed Operating System [ case study ]
Chorus - Distributed Operating System [ case study ]Chorus - Distributed Operating System [ case study ]
Chorus - Distributed Operating System [ case study ]
 
UNIT II DIS.pptx
UNIT II DIS.pptxUNIT II DIS.pptx
UNIT II DIS.pptx
 
WEEK07operatingsystemdepartmentofsoftwareengineering.pptx
WEEK07operatingsystemdepartmentofsoftwareengineering.pptxWEEK07operatingsystemdepartmentofsoftwareengineering.pptx
WEEK07operatingsystemdepartmentofsoftwareengineering.pptx
 
4 threads
4 threads4 threads
4 threads
 
Sucet os module_2_notes
Sucet os module_2_notesSucet os module_2_notes
Sucet os module_2_notes
 
Introduction to NetBSD kernel
Introduction to NetBSD kernelIntroduction to NetBSD kernel
Introduction to NetBSD kernel
 
4.Process.ppt
4.Process.ppt4.Process.ppt
4.Process.ppt
 
Topic 4- processes.pptx
Topic 4- processes.pptxTopic 4- processes.pptx
Topic 4- processes.pptx
 
Unit 2(oss) (1)
Unit 2(oss) (1)Unit 2(oss) (1)
Unit 2(oss) (1)
 
Amoeba
AmoebaAmoeba
Amoeba
 
Introto netthreads-090906214344-phpapp01
Introto netthreads-090906214344-phpapp01Introto netthreads-090906214344-phpapp01
Introto netthreads-090906214344-phpapp01
 

Mais de AbDul ThaYyal

Chapter 15 distributed mm systems
Chapter 15 distributed mm systemsChapter 15 distributed mm systems
Chapter 15 distributed mm systems
AbDul ThaYyal
 
Chapter 12 transactions and concurrency control
Chapter 12 transactions and concurrency controlChapter 12 transactions and concurrency control
Chapter 12 transactions and concurrency control
AbDul ThaYyal
 
Chapter 11d coordination agreement
Chapter 11d coordination agreementChapter 11d coordination agreement
Chapter 11d coordination agreement
AbDul ThaYyal
 
Chapter 11c coordination agreement
Chapter 11c coordination agreementChapter 11c coordination agreement
Chapter 11c coordination agreement
AbDul ThaYyal
 
Chapter 8 distributed file systems
Chapter 8 distributed file systemsChapter 8 distributed file systems
Chapter 8 distributed file systems
AbDul ThaYyal
 
Chapter 3 networking and internetworking
Chapter 3 networking and internetworkingChapter 3 networking and internetworking
Chapter 3 networking and internetworking
AbDul ThaYyal
 
4. concurrency control
4. concurrency control4. concurrency control
4. concurrency control
AbDul ThaYyal
 
3. distributed file system requirements
3. distributed file system requirements3. distributed file system requirements
3. distributed file system requirements
AbDul ThaYyal
 

Mais de AbDul ThaYyal (13)

Chapter 15 distributed mm systems
Chapter 15 distributed mm systemsChapter 15 distributed mm systems
Chapter 15 distributed mm systems
 
Chapter 13
Chapter 13Chapter 13
Chapter 13
 
Chapter 12 transactions and concurrency control
Chapter 12 transactions and concurrency controlChapter 12 transactions and concurrency control
Chapter 12 transactions and concurrency control
 
Chapter 11d coordination agreement
Chapter 11d coordination agreementChapter 11d coordination agreement
Chapter 11d coordination agreement
 
Chapter 11c coordination agreement
Chapter 11c coordination agreementChapter 11c coordination agreement
Chapter 11c coordination agreement
 
Chapter 11
Chapter 11Chapter 11
Chapter 11
 
Chapter 10
Chapter 10Chapter 10
Chapter 10
 
Chapter 8 distributed file systems
Chapter 8 distributed file systemsChapter 8 distributed file systems
Chapter 8 distributed file systems
 
Chapter 7 security
Chapter 7 securityChapter 7 security
Chapter 7 security
 
Chapter 3 networking and internetworking
Chapter 3 networking and internetworkingChapter 3 networking and internetworking
Chapter 3 networking and internetworking
 
4. system models
4. system models4. system models
4. system models
 
4. concurrency control
4. concurrency control4. concurrency control
4. concurrency control
 
3. distributed file system requirements
3. distributed file system requirements3. distributed file system requirements
3. distributed file system requirements
 

Chapter 6 os

  • 1. Teaching material based on Distributed Systems: Concepts and Design, Edition 3, Addison-Wesley 2001. Distributed Systems Course Copyright © George Coulouris, Jean Dollimore, Tim Kindberg 2001 email: authors@cdk2.net This material is made available for private study and for direct use by individual teachers. It may not be included in any product or employed in any service without the written permission of the authors. Viewing: These slides must be viewed in slide show mode. Operating System Support Chapter 6: 6.1 Introduction 6.2 The operating system layer 6.4 Processes and threads 6.5 Communication and invocation 6.6 operating system architecture
  • 2. Learning objectives  Know what a modern operating system does to support distributed applications and middleware – Definition of network OS – Definition of distributed OS  Understand the relevant abstractions and techniques, focussing on: – processes, threads, ports and support for invocation mechanisms.  Understand the options for operating system architecture – monolithic and micro-kernels 2  If time: – Lightweight RPC *
  • 3. System layers Processes, threads, communication, ... 3 Applications, services OS1 Computer & Platform Middleware OS: kernel, libraries & servers network hardware OS2 Processes, threads, communication, ... Middleware Computer & network hardware Node 1 Node 2 Figure 6.1 Figure 2.1 Software and hardware service layers in distributed systems Applications, services Computer and network hardware Platform Operating system *
  • 4. Middleware and the Operating System  Middleware implements abstractions that support network-wide programming. Examples:  RPC and RMI (Sun RPC, Corba, Java RMI)  event distribution and filtering (Corba Event Notification, Elvin)  resource discovery for mobile and ubiquitous computing  support for multimedia streaming  Traditional OS's (e.g. early Unix, Windows 3.0) – simplify, protect and optimize the use of local resources  Network OS's (e.g. Mach, modern UNIX, Windows NT) – do the same but they also support a wide range of communication standards and enable remote processes to access (some) local resources (e.g. files). 4 What is a distributed OS? • Presents users (and applications) with an integrated computing platform that hides the individual computers. • Has control over all of the nodes (computers) in the network and allocates their resources to tasks without user involvement. • In a distributed OS, the user doesn't know (or care) where his programs are running. • Examples: • Cluster computer systems • V system, Sprite, Globe OS • WebOS (?) *
  • 5. The support required by middleware and distributed applications OS manages the basic resources of computer systems: – processing, memory, persistent storage and communication. It is the task of an operating system to: – raise the programming interface for these resources to a more useful level:  By providing abstractions of the basic resources such as: processes, unlimited virtual memory, files, communication channels  Protection of the resources used by applications  Concurrent processing to enable applications to complete their work with minimum interference from other applications – provide the resources needed for (distributed) services and applications to complete their task:  Communication - network access provided  Processing - processors scheduled at the relevant computers 5 *
  • 6. Core OS functionality 6 Process manager Communication manager Thread manager Memory manager Supervisor Figure 6.2 *
  • 7. Process address space 7 Auxiliary regions Stack Heap Text N 2 0 * Figure 6.3  Regions can be shared – kernel code – libraries – shared data & communication – copy-on-write  Files can be mapped – Mach, some versions of UNIX  UNIX fork() is expensive – must copy process's address space
  • 8. Copy-on-write – a convenient optimization Process A’s address space Process B’s address space 8 RB copied from RA RA RB Shared frame Kernel a) Before write b) After write A's page table B's page table * Figure 6.4
  • 9. 9 Process Thread activations Activation stacks (parameters, local variables) Threads concept and implementation Heap (dynamic storage, 'text' (program code) objects, global variables) system-provided resources (sockets, windows, open files) *
  • 10. Client and server with threads 10 Figure 6.5 Client Thread 2 makes Thread 1 requests to server generates results N threads Server Input-output Receipt & queuing Requests * The 'worker pool' architecture See Figure 4.6 for an example of this architecture programmed in Java.
  • 11. Alternative server threading architectures workers per-connection threads per-object threads a. Thread-per-request b. Thread-per-connection c. Thread-per-object 11 Figure 6.6 * remote I/O objects server process remote objects server process I/O remote objects server process – Implemented by the server-side ORB in CORBA (a) would be useful for UDP-based service, e.g. NTP (b) is the most commonly used - matches the TCP connection model (c) is used where the service is encapsulated as an object. E.g. could have multiple shared whiteboards with one thread each. Each object has only one thread, avoiding the need for thread synchronization within objects.
  • 12. Java thread constructor and management methods Figure 6.8 13 * Methods of objects that inherit from class Thread • Thread(ThreadGroup group, Runnable target, String name) • Creates a new thread in the SUSPENDED state, which will belong to group and be identified as name; the thread will execute the run() method of target. • setPriority(int newPriority), getPriority() • Set and return the thread’s priority. • run() • A thread executes the run() method of its target object, if it has one, and otherwise its own run() method (Thread implements Runnable). • start() • Change the state of the thread from SUSPENDED to RUNNABLE. • sleep(int millisecs) • Cause the thread to enter the SUSPENDED state for the specified time. • yield() • Enter the READY state and invoke the scheduler. • destroy() • Destroy the thread.
  • 13. Java thread synchronization calls 14 Figure 6.9 * See Figure 12.17 for an example of the use of object.wait() and object.notifyall() in a transaction implementation with locks. • thread.join(int millisecs) Input-output Receipt & queuing • Blocks the calling thread for up to the specified time until thread has terminated. • thread.interrupt() • Interrupts thread: causes it to return from a blocking method call such as sleep(). • object.wait(long millisecs, int nanosecs) • Blocks the calling thread until a call made to notify() or notifyAll() on object wakes Requests the thread, or the thread is interrupted, or the specified time has elapsed. • object.notify(), object.notifyAll() N threads • Wakes, respectively, one or all of any threads that have called wait() on object. Server object.wait() and object.notify() are very similar to the semaphore operations. E.g. a worker thread in Figure 6.5 would use queue.wait() to wait for incoming requests. synchronized methods (and code blocks) implement the monitor abstraction. The operations within a synchronized method are performed atomically with respect to other synchronized methods of the same object. synchronized should be used for any methods that update the state of an object in a threaded environment.
  • 14. Threads implementation
Threads can be implemented:
– in the OS kernel (Win NT, Solaris, Mach), or
– at user level (e.g. by a thread library: C threads, pthreads), or in the language (Ada, Java).
User-level implementations are:
+ lightweight – no system calls needed
+ modifiable – the scheduler can be tailored to the application
+ low cost – enables more threads to be employed
but:
– not pre-emptive
– cannot exploit multiple processors
– a page fault blocks all threads in the process
(Kernel implementations avoid these last three problems, at the cost of a system call per thread operation.)
– Java can be implemented either way
– hybrid approaches can gain some advantages of both:
 user-level hints to the kernel scheduler
 hierarchic threads (Solaris)
 event-based (SPIN, FastThreads)
  • 15. Support for communication and invocation
The performance of RPC and RMI mechanisms is critical for effective distributed systems.
– Typical times for a 'null procedure call':
Local procedure call: < 1 microsecond
Remote procedure call: ~ 10 milliseconds – 10,000 times slower!
– 'Network time' (involving about 100 bytes transferred at 100 megabits/sec.) accounts for only 0.01 milliseconds; the remaining delay must be in the OS and middleware – latency, not communication time.
Factors affecting RPC/RMI performance:
– marshalling/unmarshalling + operation despatch at the server
– data copying: application -> kernel space -> communication buffers
– thread scheduling and context switching, including kernel entry
– protocol processing, for each protocol layer
– network access delays: connection setup, network latency
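The local end of that comparison can be measured directly by timing a null procedure call in a loop. This is only a rough sketch – the actual figure depends on the JIT and hardware – but it makes the sub-microsecond local cost concrete against the millisecond-scale remote cost.

```java
// Sketch: timing a local 'null procedure call'.
public class NullCallTiming {
    private static int counter;

    private static void nullCall() {
        counter++;  // trivial body so the call is not optimized away entirely
    }

    public static void main(String[] args) {
        int iterations = 1_000_000;
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            nullCall();
        }
        long elapsed = System.nanoTime() - start;
        // A remote null call adds marshalling, copying, scheduling and
        // protocol-processing costs on top of this local figure.
        System.out.println("mean ns per local null call: " + elapsed / iterations);
    }
}
```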
  • 16. Implementation of invocation mechanisms
Most invocation middleware (Corba, Java RMI, HTTP) is implemented over TCP:
– for universal availability, unlimited message size and reliable transfer; see Section 4.4 for further discussion of the reasons.
– Sun RPC (used in NFS) is implemented over both UDP and TCP, and generally works faster over UDP.
Research-based systems have implemented much more efficient invocation protocols, e.g.:
– Firefly RPC (see www.cdk3.net/oss)
– Amoeba's doOperation, getRequest, sendReply primitives (www.cdk3.net/oss)
– LRPC [Bershad et al. 1990], described on pp. 237–9.
Concurrent and asynchronous invocations:
– the middleware or application doesn't block waiting for the reply to each invocation.
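Concurrent and asynchronous invocation can be sketched with a thread pool: the caller submits several calls and collects the replies later instead of blocking on each one in turn. Here remoteCall is a hypothetical stand-in for a real RPC/RMI stub, and the names are illustrative.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: concurrent invocations via a pool, so the caller does not
// block waiting for the reply to each invocation.
public class AsyncInvoke {
    // Hypothetical stand-in for an RPC/RMI stub's remote call.
    static String remoteCall(String arg) {
        return "reply:" + arg;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<String> f1 = pool.submit(() -> remoteCall("a"));
        Future<String> f2 = pool.submit(() -> remoteCall("b"));
        // The caller is free to do other work here, then collect replies.
        System.out.println(f1.get());
        System.out.println(f2.get());
        pool.shutdown();
    }
}
```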
  • 17. Invocations between address spaces (Figure 6.11)
(a) System call: control transfer via a trap instruction; the thread crosses the user/kernel protection domain boundary.
(b) RPC/RMI within one computer: control transfer via privileged instructions between Thread 1 (User 1) and Thread 2 (User 2), mediated by the kernel.
(c) RPC/RMI between computers: Thread 1 (User 1, Kernel 1) communicates with Thread 2 (User 2, Kernel 2) across the network.
  • 18. Bershad's LRPC
Uses shared memory for interprocess communication:
– while maintaining protection of the two processes
– arguments are copied only once (versus four times for conventional RPC)
Client threads can execute server code:
– via protected entry points only (uses capabilities)
Up to 3 x faster for local invocations.
  • 19. A lightweight remote procedure call (Figure 6.13)
The client and server user stubs share an argument stack (A), and the kernel mediates the call:
1. Copy args (onto the shared A stack)
2. Trap to kernel
3. Upcall (into the server stub)
4. Execute procedure and copy results
5. Return (trap)
  • 20. Monolithic kernel and microkernel (Figure 6.15)
A monolithic kernel includes the system services (S1, S2, S3, S4, ...) within the kernel itself; with a microkernel, the services run as separate processes outside the minimal kernel.
Figure 6.16: the microkernel supports middleware via subsystems – language support subsystems, an OS emulation subsystem, etc., layered between the microkernel (and hardware) and the middleware.
  • 21. Advantages and disadvantages of microkernel
+ flexibility and extensibility
– services can be added, modified and debugged
– a small kernel means fewer bugs
– protection of services and resources is still maintained
- service invocation is expensive
– unless LRPC is used
– services make extra system calls for access to protected resources
  • 22. Relevant topics not covered
Protection of OS resources (Section 6.3)
– control of access to distributed resources is covered in Chapter 7 – Security
Mach OS case study (Chapter 18)
– halfway to a distributed OS
– the communication model is of particular interest; it includes:
 ports that can migrate between computers and processes
 support for RPC
 typed messages
 access control to ports
 open implementation of network communication (drivers outside the kernel)
Thread programming (Section 6.4, and many other sources)
– e.g. Doug Lea's book Concurrent Programming in Java (Addison-Wesley 1996)
  • 23. Summary
The OS provides local support for the implementation of distributed applications and middleware:
– manages and protects system resources (memory, processing, communication)
– provides relevant local abstractions:
 files, processes
 threads, communication ports
Middleware provides general-purpose distributed abstractions:
– RPC, DSM, event notification, streaming
Invocation performance is important:
– it can be optimized, e.g. Firefly RPC, LRPC
Microkernel architecture for flexibility:
– the KISS principle ('Keep it simple – stupid!') has resulted in migration of many functions out of the OS

Editor's Notes

  1. George took 2.5 hours to present this material. Protection not covered. More advanced communication abstractions (e.g. Mach ports) not covered Not all of architecture is covered.
  2. Popup: System layers figure from Chapter 2
  3. Question: what represents the activation of each thread? Answer: See Figure 6.7 for full details, but essentially: the current instruction pointer and other registers and a pointer to its activation stack. Box on p. 215: An analogy for threads and processes The following memorable, if slightly unsavoury, way to think of the concepts of threads and execution environments was seen on the comp.os.mach USENET group and is due to Chris Lloyd. An execution environment consists of a stoppered jar and the air and food within it. Initially, there is one fly – a thread – in the jar. This fly can produce other flies and kill them, as can its progeny. Any fly can consume any resource (air or food) in the jar. Flies can be programmed to queue up in an orderly manner to consume resources. If they lack this discipline, they might bump into one another within the jar – that is, collide and produce unpredictable results when attempting to consume the same resources in an unconstrained manner. Flies can communicate with (send messages to) flies in other jars, but none may escape from the jar, and no fly from outside may enter it. In this view, a standard UNIX process is a single jar with a single sterile fly within it.
  4. Describes a typical architecture for the use of threads – the 'worker pool' architecture. The server spawns a thread to handle each incoming request. Concurrency of threads enables several disk requests to be performed concurrently (on different disks, or on a single disk with a disk driver that re-orders the requests for optimum head movement – the elevator algorithm). The client uses threads to enable processing to proceed while waiting for the return of remote invocations.
  6. Animation: a red box outlines each method in turn