THREADS

-In computer science, a thread of execution results from a fork of a computer program into two or more concurrently running tasks. The implementation of threads and processes differs from one operating system to another, but in most cases a thread is contained inside a process. Multiple threads can exist within the same process and share resources such as memory, while different processes do not share these resources.
On a single processor, multithreading generally occurs by time-division multiplexing (as in multitasking): the processor switches between different threads. This context switching generally happens frequently enough that the user perceives the threads or tasks as running at the same time. On a multiprocessor or multi-core system, the threads or tasks will generally run at the same time, with each processor or core running a particular thread or task. Support for threads in programming languages varies: a number of languages simply do not support having more than one execution context inside the same program executing at the same time. Examples of such languages include Python and OCaml, whose runtimes limit parallelism with a central lock, called the "Global Interpreter Lock" in Python and the "master lock" in OCaml. Other languages may be limited because they use user threads, which are not visible to the kernel and thus cannot be scheduled to run concurrently. Kernel threads, on the other hand, are visible to the kernel and can run concurrently.
Many modern operating systems directly support both time-sliced and multiprocessor threading with a process scheduler. The kernel of an operating system allows programmers to manipulate threads via the system call interface. Threads created this way are known as kernel threads; a lightweight process (LWP) is a specific type of kernel thread that shares the same state and information.
Programs can also have user-space threads when threading with timers, signals, or other methods to interrupt their own execution, performing a sort of ad-hoc time-slicing.
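As a minimal sketch of threads sharing the memory of one process, the example below uses Python's threading module (whose threads are kernel threads, though CPU work is serialized by the Global Interpreter Lock mentioned above). All four threads update the same counter, so a lock is needed to keep the updates from interleaving:

```python
import threading

counter = 0                      # shared by every thread in this process
lock = threading.Lock()          # protects the shared counter

def worker(n):
    global counter
    for _ in range(n):
        with lock:               # without the lock, updates could interleave
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                     # wait for all threads to finish

print(counter)                   # 4 threads x 1000 increments -> 4000
```

Separate processes would each get their own copy of `counter`; only threads within one process see the same variable.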


-Single-threaded process

-Multi-threaded process


-Benefits of Multi-threaded Programming


-Responsiveness

-Resource Sharing

-Economy

-Utilization of MP Architectures


  • User Thread

-Thread management done by user-level threads library


(e.g.):


1. POSIX Pthreads


2. Mach C-threads


3. Solaris threads



  • Kernel Thread

- Supported by Kernel


(e.g.):


1. Windows 95/98/NT/2000


2. Solaris


3. Tru64 UNIX


4. BeOS


5. Linux



  • Thread Library

-Provides the programmer with an API for creating and managing threads


  • Multithreading models

1)Many-to-One Model


-Many user-level threads mapped to a single kernel thread.


-Used on systems that do not support kernel threads.



2)One-to-One Model


-Each user-level thread maps to a kernel thread.


(e.g.):


-Windows 95/98/NT/2000


-OS/2


3)Many-to-Many Model

-Allows many user-level threads to be mapped to many kernel threads.

-Allows the operating system to create a sufficient number of kernel threads.

-Solaris 2

-Windows NT/2000 with the ThreadFiber package.

Producer-Consumer Example

• One process generates data – the producer

• The other process uses it – the consumer

• If directly connected – time coordination is needed



How would they coordinate the timing?

BUFFERING

-The mechanism that buffers messages (a.k.a. queue)

may have the following properties






  • zero capacity - queue has length 0, no messages can be outstanding on the link, sender blocks for message exchange.



  • bounded capacity - queue has length N, at most N messages can be in the queue at any point in time; the sender blocks if the queue is full, otherwise it may continue to execute.



  • unbounded capacity - queue has infinite length, the sender never blocks
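The bounded-capacity case can be sketched with Python's queue.Queue, whose maxsize plays the role of N: a producer thread blocks on put() while the queue already holds N messages, and resumes as the consumer drains it.

```python
import queue
import threading

buf = queue.Queue(maxsize=3)        # bounded capacity: at most N=3 messages

def producer():
    for i in range(10):
        buf.put(i)                  # blocks while the queue holds 3 items

consumed = []

def consumer():
    for _ in range(10):
        consumed.append(buf.get())  # blocks while the queue is empty

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start()
c.start()
p.join()
c.join()

print(consumed)                     # messages arrive in FIFO order
```

A Queue with maxsize=0 behaves like the unbounded case (the sender never blocks); zero capacity corresponds to a rendezvous, which this class does not model directly.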

INTERPROCESS COMMUNICATION

-Inter-process communication (IPC) is a set of techniques for the exchange of data among multiple threads in one or more processes. Processes may be running on one or more computers connected by a network. IPC techniques are divided into methods for message passing, synchronization, shared memory, and remote procedure calls (RPC). The method of IPC used may vary based on the bandwidth and latency of communication between the threads, and the type of data being communicated.


There are several reasons for providing an environment that allows process cooperation:
-Information sharing


-Computation speedup


-Modularity


- Convenience






1)Direct Communication


  • sender and receiver refer to each other explicitly, as seen before

  • properties of communication link

-link is associated with exactly two processes


-exactly one link for every pair of processes



  • communication is symmetric (above) or asymmetric




2)Indirect Communication


-Messages are sent to and received from mailboxes (ports)

3)Synchronization - message passing may be blocking or non-blocking (synchronous or asynchronous)




  • blocking send - sender blocked until message is received by receiver (or by mailbox)


  • non-blocking send - sending process resumes operation right after sending.


  • blocking receive - receiver blocks until a message is available.

  • non-blocking receive - receiver retrieves a valid message or returns an error code.
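The receive side of this distinction can be sketched with Python's queue.Queue: get() is a blocking receive that waits until a message is available, while get_nowait() is a non-blocking receive that either returns a valid message or raises an error immediately.

```python
import queue

mailbox = queue.Queue()

# Non-blocking receive on an empty mailbox: an error instead of waiting.
try:
    mailbox.get_nowait()
    outcome = "message"
except queue.Empty:
    outcome = "error"
print(outcome)           # "error": no message was outstanding

mailbox.put("hello")     # non-blocking send (unbounded queue never blocks)
msg = mailbox.get()      # blocking receive: a message is already available
print(msg)
```

The same pattern applies between processes with multiprocessing.Queue, which exposes the same blocking/non-blocking interface.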

5)INTERPROCESS COMMUNICATION


  • For Communication and Synchronization

-Shared memory


-OS provide IPC



  • Message system

-No need for shared variables


-Two operations:


1. send(message) - message size fixed or variable


2. receive(message)



  • if P and Q wish to communicate, they need to:

-establish a communication link between them


-exchange messages via send/receive



  • Implementation of communication link

-physical (e.g., shared memory, hardware bus)


-logical (e.g., logical properties)
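One concrete physical link is a POSIX pipe. The sketch below (POSIX-only, since it relies on os.fork) establishes the link with os.pipe and then exchanges one message between a parent and a child process via send/receive, here os.write and os.read:

```python
import os

r, w = os.pipe()              # kernel object implementing the link

pid = os.fork()
if pid == 0:                  # child: the sender
    os.close(r)
    os.write(w, b"ping")      # send(message)
    os.close(w)
    os._exit(0)
else:                         # parent: the receiver
    os.close(w)
    msg = os.read(r, 4)       # receive(message) - blocks until data arrives
    os.close(r)
    os.waitpid(pid, 0)        # reap the child
    print(msg.decode())
```

The pipe is unidirectional; two pipes (or a socket pair) would be needed for symmetric two-way communication.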

4)COOPERATING PROCESSES


  • Advantages of process cooperation

-Information sharing

-Computation speedup

-Modularity

-Convenience


  • Independent processes cannot affect/be affected by the execution of another process; cooperating ones can.

  • Issue

-Communication


-avoid processes getting into each other's ways


-Ensure proper sequencing when there are dependencies



  • Common paradigm: producer-consumer

-unbounded buffer - no practical limit on the buffer size


-bounded buffer - assumes a fixed buffer size

3)OPERATIONS ON PROCESSES

a)Process Creation


-Parent process creates children processes, which, in turn, create other processes, forming a tree of processes.




  • Resource sharing


  1. Parent and children share all resources.


  2. Children share subset of parent’s resources.


  3. Parent and child share no resources.




  • Execution


  1. Parent and children execute concurrently.


  2. Parent waits until children terminate.




  • Address space


  1. Child duplicate of parent.


  2. Child has a program loaded into it.



  • UNIX examples


  1. fork system call creates a new process


  2. exec system call used after a fork to replace the process’ memory space with a new program.
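The two calls above fit together as in the POSIX-only sketch below (Python's os.fork, os.execvp, and os.waitpid wrap the underlying system calls). The program the child execs is an arbitrary stand-in that simply exits with status 7, so the parent can observe the child's termination status via wait:

```python
import os
import sys

pid = os.fork()                        # duplicate the calling process
if pid == 0:
    # Child: replace this process image with a new program.
    os.execvp(sys.executable,
              [sys.executable, "-c", "raise SystemExit(7)"])
    os._exit(1)                        # only reached if exec fails
else:
    _, status = os.waitpid(pid, 0)     # parent waits for the child
    code = os.WEXITSTATUS(status)      # decode the child's exit status
    print(code)
```

Because exec replaces the child's address space, no line after a successful execvp ever runs in the child.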


b)Process Termination





  • Process executes last statement and asks the operating system to delete it (exit).




  1. Output data from child to parent (via wait).


  2. Process resources are deallocated by operating system.




  • Parent may terminate execution of children processes (abort).


  1. Child has exceeded allocated resources.


  2. Task assigned to child is no longer required.


  3. Parent is exiting.



  1. Operating system does not allow child to continue if its parent terminates.


  2. Cascading termination.

2)PROCESS SCHEDULING

a)Scheduling Queues

-Scheduling is a key concept in computer multitasking and multiprocessing operating system design, and in real-time operating system design. In modern operating systems, there are typically many more processes running than there are CPUs available to run them. Scheduling refers to the way processes are assigned to run on the available CPUs. This assignment is carried out by software known as a scheduler.




  • Job queue – set of all processes in the system.


  • Ready queue – set of all processes residing in main memory, ready and waiting to execute.


  • Device queues – set of processes waiting for an I/O device.


  • Process migration between the various queues.

b)Schedulers




  • Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue.


  • Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates CPU.






  1. Short-term scheduler is invoked very frequently (milliseconds) ⇒ must be fast.


  2. Long-term scheduler is invoked very infrequently (seconds, minutes) ⇒ may be slow.


  3. The long-term scheduler controls the degree of multiprogramming.


  4. Processes can be described as either:


-I/O-bound process – spends more time doing I/O than computations; many short CPU bursts.


-CPU-bound process – spends more time doing computations; few very long CPU bursts.






c)Context Switch





  • When CPU switches to another process, the system must save the state of the old process and load the saved state for the new process.


  • Context-switch time is overhead; the system does no useful work while switching.


  • Time dependent on hardware support.

2nd Trinal

1.The Concept of Process




  • An operating system executes a variety of programs:


  1. Batch system – jobs

  2. Time-shared systems – user programs or tasks.


  • Textbook uses the terms job and process almost interchangeably.


  • Process – a program in execution; process execution must progress in sequential fashion.


  • A process includes:



  1. program counter - is a processor register that indicates where the computer is in its instruction sequence. Depending on the details of the particular computer, the program counter holds either the address of the instruction being executed, or the address of the next instruction to be executed.

  2. stack -abstract data type and data structure based on the principle of Last In First Out (LIFO)

  3. data section - contains the global variables of the program
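The LIFO discipline of the stack can be sketched with a Python list, where append pushes and pop removes the most recently pushed item, just as a process's runtime stack pushes and pops activation records:

```python
stack = []            # empty stack

stack.append("a")     # push
stack.append("b")
stack.append("c")

print(stack.pop())    # "c" - the last item pushed comes off first
print(stack.pop())    # "b"
```

After the two pops only "a", the first item pushed, remains on the stack.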


a)Process State- As a process executes, it changes state





  • new: The process is being created.


  • running: Instructions are being executed.


  • waiting: The process is waiting for some event to occur.


  • ready: The process is waiting to be assigned to a processor.


  • terminated: The process has finished execution


b)Process Control Block


- Information associated with each process.





  • Process state - The status of a process as running, ready, blocked, etc.


  • Program counter - indicates the address of the next instruction to be executed for this process.


  • CPU registers -In computer architecture, a processor register is a small amount of storage available on the CPU whose contents can be accessed more quickly than storage available elsewhere


  • CPU scheduling information


  • Memory-management information -a multiprogramming operating system kernel must be responsible for managing all system memory which is currently in use by programs. This ensures that a program does not interfere with memory already used by another program. Since programs time share, each program must have independent access to memory.


  • Accounting information - includes amount of CPU and real time used, time limits, account numbers, and process numbers.



  • I/O status information


c)Threads


  • A thread is a single sequential flow of control within a process; see the THREADS section above for user vs. kernel threads and the multithreading models.

OPERATING SYSTEM SERVICES



  • Program execution – system capability to load a program into memory and to run it.


  • I/O operations – since user programs cannot execute I/O operations directly, the operating system must provide some means to perform I/O.


  • File-system manipulation – program capability to read, write, create, and delete files.


  • Communications – exchange of information between processes executing either on the same computer or on different systems tied together by a network. Implemented via shared memory or message passing.


  • Error detection – ensure correct computing by detecting errors in the CPU and memory hardware, in I/O devices, or in user programs.

CHAPTER 3: OPERATING-SYSTEM STRUCTURES

SYSTEM COMPONENTS






  • Operating Systems Process Management
    -A process is a program in execution. A process needs certain resources, including CPU time, memory, files, and I/O devices, to accomplish its task.
    -The operating system is responsible for the following activities in connection with process management.
    1.Process creation and deletion.
    2.Process suspension and resumption.
    3.Provision of mechanisms for:
    -process synchronization
    -process communication


  • Main Memory Management-
    -Memory is a large array of words or bytes, each with its own address. It is a repository of quickly accessible data shared by the CPU and I/O devices.
    -Main memory is a volatile storage device. It loses its contents in the case of system failure.
    -The operating system is responsible for the following activities in connections with memory management:
    1.Keep track of which parts of memory are currently being used and by whom.
    2.Decide which processes to load when memory space becomes available.
    3.Allocate and deallocate memory space as needed.


  • File Management-
    -A file is a collection of related information defined by its creator. Commonly, files represent programs (both source and object forms) and data.
    -The operating system is responsible for the following activities in connections with file management:
    1.File creation and deletion.
    2.Directory creation and deletion.
    3.Support of primitives for manipulating files and directories.
    4.Mapping files onto secondary storage.
    5.File backup on stable (nonvolatile) storage media.


  • I/O System Management-
    -The I/O system consists of:
    a)A buffer-caching system .
    b)A general device-driver interface .
    c)Drivers for specific hardware devices .


  • Secondary Storage Management
    -Since main memory (primary storage) is volatile and too small to accommodate all data and programs permanently, the computer system must provide secondary storage to back up main memory.
    -Most modern computer systems use disks as the principal on-line storage medium, for both programs and data.
    -The operating system is responsible for the following activities in connection with disk management:
    1.Free space management
    2.Storage allocation
    3.Disk scheduling




  • Protection System
    -Protection refers to a mechanism for controlling access by programs, processes, or users to both system and user resources.
    -The protection mechanism must:
    1.distinguish between authorized and unauthorized usage.
    2.specify the controls to be imposed.
    3.provide a means of enforcement.


  • Command-Interpreter System
    -Many commands are given to the operating system by control statements which deal with:
    a)process creation and management
    b)I/O handling
    c)secondary-storage management
    d)main-memory management
    e)file-system access
    f)protection
    g)networking