DEADLOCK DETECTION


  • The system may enter a deadlock state

  • The system needs:

-an algorithm that periodically determines whether a deadlock has occurred in the system


-a procedure to recover from a deadlock



  • Two algorithms

1) one instance of each resource type


2) multiple instances of each resource type
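For the single-instance case, detecting a deadlock reduces to finding a cycle in the wait-for graph. A minimal sketch in Python (the graph representation and process names are illustrative, not from the source):

```python
# Deadlock detection with one instance per resource type: a deadlock
# exists iff the wait-for graph (edge P -> Q means "P waits for a
# resource held by Q") contains a cycle. Detected here with DFS.
def has_deadlock(wait_for):
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / done
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:            # back edge: cycle
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# P1 waits for P2 and P2 waits for P1: deadlock
print(has_deadlock({"P1": ["P2"], "P2": ["P1"]}))   # True
print(has_deadlock({"P1": ["P2"], "P2": []}))       # False
```

Running this detector periodically is exactly the kind of algorithm the first bullet calls for; the multiple-instance case requires the more elaborate matrix-based detection algorithm instead.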

DEADLOCK RECOVERY


•Abort all deadlocked processes.
•Abort one process at a time until the deadlock cycle is eliminated.
•In which order should we choose to abort?

–Priority of the process.
–How long the process has computed, and how much longer it needs to complete.
–Resources the process has used.
–Resources the process needs to complete.
–How many processes will need to be terminated.
–Is the process interactive or batch?


Selecting a victim – minimize cost
Rollback – return to some safe state, restart the process from that state
–Requires the system to keep more information about the state of all running processes
Starvation – the same process may always be picked as victim; include the number of rollbacks in the cost factor
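The cost factors listed above can be folded into a single score that victim selection minimizes; a hypothetical sketch (the weights and record fields are illustrative, not from the source) that also counts past rollbacks to avoid starvation:

```python
# Victim selection: abort the deadlocked process with the lowest cost.
# Weights and fields are made up for illustration; real systems tune them.
def victim_cost(p):
    return (3 * p["priority"]           # high priority -> costly to kill
            + 2 * p["cpu_time_used"]    # completed work would be lost
            + p["resources_held"]
            + 4 * p["rollbacks"])       # raise cost of repeat victims

def select_victim(deadlocked):
    return min(deadlocked, key=victim_cost)

procs = [
    {"name": "P1", "priority": 5, "cpu_time_used": 10,
     "resources_held": 2, "rollbacks": 0},
    {"name": "P2", "priority": 1, "cpu_time_used": 1,
     "resources_held": 1, "rollbacks": 0},
]
print(select_victim(procs)["name"])   # P2 (cheapest to abort)
```

The rollback term grows each time the same process is aborted, so it eventually stops being the cheapest choice, which is the starvation fix the notes describe.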

DEADLOCK PREVENTION


-Ensure that at least one of the necessary conditions cannot hold
-Prevent deadlocks by constraining how requests for resources can be made


•Mutual Exclusion –
not required for sharable resources; must hold for non-sharable resources
•Hold and Wait – must guarantee that whenever a process requests a resource, it does not hold any other resources.
–Method 1: require each process to request and be allocated all its resources before it begins execution

–Method 2: allow a process to request resources only when the process has none
–Example:
copy data from tape drive to disk file, sort disk file, print

–Disadvantage

•Low resource utilization

•Starvation possible



•No Preemption –
if process A holding resources requests another resource that cannot be immediately allocated to it
–Method 1: All resources currently being held by A are preempted


Preempted resources are added to A’s waiting resource list


A will be restarted only when it can regain its old resources, as well as the new ones that it is requesting.
–Method 2:
Check the requested resources for the following conditions:


•If it is allocated to a process waiting for additional resources, preempt it from the waiting process and allocate it to A
•If it is held by a process that is not waiting, A must wait
–A’s resources may be preempted, but only if another process requests them



•Circular Wait –
impose a total ordering of all resource types
–Example: F(tape drive) = 1, F(disk drive) = 5, F(printer) = 12
F is defined according to the normal order of resource usage
–Method 1: require that each process requests resources in an increasing order of enumeration
•OK if tape drive → disk drive → printer
•Not OK if disk drive → tape drive → printer
–Method 2: whenever a process requests an instance of Rj, it must first have released any resources Ri such that F(Ri) ≥ F(Rj)
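Method 1 is how lock ordering is applied in practice: no matter which order a process names the resources, they are always acquired in increasing F order, so no cycle can form. A sketch using the example's F values (the helper functions are illustrative):

```python
import threading

# Circular-wait prevention by total ordering: resources are always
# acquired in increasing F order, using the example's enumeration.
F = {"tape drive": 1, "disk drive": 5, "printer": 12}
locks = {name: threading.Lock() for name in F}

def acquire_in_order(*names):
    """Acquire the named resources in increasing F order, regardless of
    the order the caller listed them, then return the acquisition order."""
    ordered = sorted(names, key=lambda n: F[n])
    for n in ordered:
        locks[n].acquire()
    return ordered

def release(names):
    for n in reversed(names):
        locks[n].release()

# Caller asks in a "bad" order; acquisition still follows F.
held = acquire_in_order("printer", "tape drive", "disk drive")
print(held)   # ['tape drive', 'disk drive', 'printer']
release(held)
```

Since every process acquires along the same total order, two processes can never each hold a resource the other is waiting for.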


METHODS for HANDLING DEADLOCKS


•Use a protocol to prevent or avoid deadlocks, ensuring that the system will never enter a deadlock state
–Deadlock prevention
–Deadlock avoidance

•Allow the system to enter a deadlock state, detect it and then recover

•Ignore the problem and pretend that deadlocks never occur in the system
–Used by most operating systems, including UNIX
–The undetected deadlock will result in the deterioration of the system performance. Eventually, the system will stop functioning and will need to be restarted manually

DEADLOCK CHARACTERIZATION

•Mutual exclusion: At least one resource must be held in a non-sharable mode

•Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by others


•No preemption: a resource can be released only voluntarily by the process holding it, after it has completed its task

•Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.

Multiprocessor Scheduling

  • CPU scheduling more complex when multiple CPUs are available

  • Homogeneous processors within a multiprocessor

  • Load sharing

  • Asymmetric multiprocessing – only one processor accesses the system data structures, alleviating the need for data sharing

Real-Time Scheduling

-Real-Time Scheduling (1) (2)

Real-time scheduling is becoming an increasingly important discipline, in which the scheduler is the most important component of a real-time system. In a real-time system, time plays an essential role. Real-time computing may be defined as that type of computing in which the correctness of the system depends not only on the logical result of the computation, but also on the time at which the results are produced. For example, the computer in a compact disc player gets the bits as they come off the drive and must convert them into music within a very tight interval. If the calculation takes too long, the music will sound peculiar.


Real-time tasks can be classified into two categories:

-hard real-time task: the task must meet its deadline; otherwise it will cause undesirable damage or a fatal error to the system.
-soft real-time task: there is an associated deadline that is desirable but not mandatory.

Another characteristic of real-time tasks is whether they are periodic or aperiodic. A periodic task is one that occurs at regular intervals, while an aperiodic task occurs unpredictably. If there are m periodic events and event i occurs with period Pi and requires Ci seconds of CPU time to handle each event, then the load can only be handled if

Σ (i = 1 to m) Ci / Pi ≤ 1        (1)

A real-time system that meets this criterion is said to be schedulable.
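Criterion (1) simply says the total CPU utilization of all periodic tasks must not exceed 1; a sketch of the test with made-up task parameters:

```python
# Schedulability test from criterion (1): m periodic events, event i
# with period P_i needing C_i seconds of CPU per occurrence, is
# handleable only if the total utilization sum(C_i / P_i) <= 1.
def schedulable(tasks):
    """tasks: iterable of (C, P) pairs in the same time unit."""
    return sum(c / p for c, p in tasks) <= 1.0

# Utilization 50/100 + 50/200 = 0.75 -> schedulable
print(schedulable([(50, 100), (50, 200)]))    # True
# Utilization 50/100 + 160/200 = 1.3 -> not schedulable
print(schedulable([(50, 100), (160, 200)]))   # False
```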

Real-time scheduling algorithm: (2)

In a real-time system, the various scheduling approaches depend on

1) whether the system performs schedulability analysis
2) if it does, whether it is done statically or dynamically
3) whether the result of the analysis itself produces a schedule or plan according to which tasks are dispatched at run time


Based on these considerations, the following classes of algorithms can be identified:
  • Static table-driven scheduling – This is applicable to tasks that are periodic. Input to the analysis consists of the periodic arrival time, execution time, periodic ending deadline, and relative priority of each task. The scheduler attempts to develop a schedule that enables it to meet the requirements of all periodic tasks. This is a predictable approach but an inflexible one, because any change to any task's requirements requires that the schedule be redone. One of the typical scheduling algorithms in this category is earliest-deadline-first.
  • Static priority-driven preemptive scheduling – This makes use of the priority-driven preemptive scheduling mechanism common to most non-real-time multiprogramming systems. In a real-time system, priority assignment is related to the time constraints associated with each task. One example of this approach is the rate monotonic algorithm, which assigns static priorities to tasks based on the length of their periods.
  • Dynamic planning-based scheduling – With this approach, after a task arrives, but before its execution begins, an attempt is made to create a schedule that contains the previously scheduled tasks as well as the new arrival. If the new arrival can be scheduled in such a way that its deadlines are satisfied and no currently scheduled task misses a deadline, then the schedule is revised to accommodate the new task.
  • Dynamic best-effort scheduling – This is the approach used by many commercially available real-time systems. When a task arrives, the system assigns a priority based on the characteristics of the task. Some form of deadline scheduling, such as earliest-deadline scheduling, is typically used. Usually the tasks are aperiodic, so no static scheduling analysis is possible. With this type of scheduling, until a deadline arrives or until the task completes, we do not know whether a timing constraint will be met; this is the major disadvantage of this form of scheduling. The advantage is that it is easy to implement.
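The rate monotonic assignment mentioned above is simple to sketch: shorter period (higher rate) means higher static priority. Task names and periods below are illustrative:

```python
# Rate monotonic priority assignment: static priorities, with the
# shortest-period (highest-rate) task getting the highest priority.
def rate_monotonic_priorities(tasks):
    """tasks: {name: period}; returns names from highest to lowest priority."""
    return sorted(tasks, key=lambda name: tasks[name])

# Illustrative periodic task set (periods in ms)
tasks = {"audio": 20, "video": 33, "logging": 1000}
print(rate_monotonic_priorities(tasks))   # ['audio', 'video', 'logging']
```

Because the priorities depend only on the (fixed) periods, they are computed once offline, which is what makes this a *static* priority-driven approach.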

Thread Scheduling

-Across platforms, thread scheduling tends to be based on at least the following criteria:

  • a priority, or in fact usually multiple "priority" settings;
  • a quantum, or number of allocated timeslices of CPU, which essentially determines the amount of CPU time a thread is allotted before it is forced to yield the CPU to another thread of the same or lower priority (the system keeps track of the remaining quantum at any given time, plus the default quantum, which could depend on thread type and/or system configuration);
  • a state, notably "runnable" vs "waiting";
  • metrics about the behaviour of threads, such as recent CPU usage, the time since the thread last ran (i.e. had a share of CPU), or the fact that it has just received an event it was waiting for.

Most systems use what we might dub priority-based round-robin scheduling to some extent. The general principles are:

  • a thread of higher priority (which is a function of base and local priorities) will preempt a thread of lower priority;
  • otherwise, threads of equal priority will essentially take turns at getting an allocated slice or quantum of CPU;
  • there are a few extra "tweaks" to make things work.

States

Depending on the system, there are various states that a thread can be in. Probably the two most interesting are:

  • runnable, which essentially means "ready to consume CPU"; being runnable is generally the minimum requirement for a thread to actually be scheduled onto a CPU;
  • waiting, meaning that the thread currently cannot continue as it is waiting for a resource such as a lock or I/O, for memory to be paged in, for a signal from another thread, or simply for a period of time to elapse (sleep).
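The priority-based round-robin pick described above can be sketched as one ready queue per priority level; the class, level count, and thread names below are illustrative, not any particular OS's API:

```python
from collections import deque

# Priority-based round-robin: pick from the highest non-empty priority
# level; runnable threads at the same level take FIFO turns.
class Scheduler:
    def __init__(self, levels):
        # One FIFO ready queue per priority; higher number = higher priority.
        self.queues = {lvl: deque() for lvl in range(levels)}

    def make_runnable(self, thread, priority):
        self.queues[priority].append(thread)   # waiting threads are never queued

    def pick_next(self):
        for lvl in sorted(self.queues, reverse=True):
            if self.queues[lvl]:
                return self.queues[lvl].popleft()
        return None   # nothing runnable

s = Scheduler(levels=3)
s.make_runnable("low", 0)
s.make_runnable("high-a", 2)
s.make_runnable("high-b", 2)
print(s.pick_next())   # high-a (higher priority goes first)
print(s.pick_next())   # high-b (equal priority: round-robin turn)
print(s.pick_next())   # low
```

A preempted-but-unfinished thread would be re-queued at its level, giving the "take turns" behaviour; the quantum bookkeeping and priority tweaks are omitted from this sketch.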


SUBSTANTIAL INFORMATION ABOUT THREADS (at least three OSes)

Microsoft Windows

A thread is the entity within a process that can be scheduled for execution. All threads of a process share its virtual address space and system resources. Each process is started with a single thread, but can create additional threads from any of its threads.

For more information, see the following topics:

* Creating Threads
* Thread Stack Size
* Thread Handles and Identifiers
* Suspending Thread Execution
* Synchronizing Execution of Multiple Threads
* Multiple Threads and GDI Objects
* Thread Local Storage
* Creating Windows in Threads
* Terminating a Thread
* Thread Security and Access Rights







CPU SCHEDULING ALGORITHMS

SCHEDULING ALGORITHMS
1.First-come, first-served (FCFS) scheduling
2.Shortest-job first (SJF) scheduling
3.Priority scheduling
4.Round-robin scheduling
5.Multilevel queue scheduling
6.Multilevel feedback queue scheduling


  • First-come, First-served (FCFS) scheduling
-is the simplest scheduling algorithm, but it can cause short processes to wait for very long processes.


  • Shortest-job-first (SJF) scheduling
-is provably optimal, providing the shortest average waiting time. Implementing SJF scheduling is difficult because predicting the length of the next CPU burst is difficult. The SJF algorithm is a special case of the general

  • Priority-scheduling algorithm,
- which simply allocates the CPU to the highest-priority process. Both priority and SJF scheduling may suffer from starvation. Aging is a technique to prevent starvation.
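For a batch of jobs that all arrive at time 0, SJF's optimality is easy to see by computing average waiting times directly; the burst times below are made up:

```python
# For jobs all available at time 0, each job waits for the total burst
# time of everything scheduled before it; running shortest bursts first
# minimizes the average of those waits (SJF's optimality).
def average_wait(bursts):
    elapsed, total_wait = 0, 0
    for b in bursts:
        total_wait += elapsed   # this job waited for all earlier jobs
        elapsed += b
    return total_wait / len(bursts)

bursts = [6, 8, 7, 3]
print(average_wait(sorted(bursts)))   # SJF order 3,6,7,8  -> 7.0
print(average_wait(bursts))           # FCFS order 6,8,7,3 -> 10.25
```

Moving a short burst ahead of a long one always reduces the sum of waits, which is the intuition behind the optimality claim; the hard part in practice remains predicting the next burst length.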


  • Round-robin (RR) scheduling
- is more appropriate for a time-shared (interactive) system. RR scheduling allocates the CPU to the first process in the ready queue for q time units, where q is the time quantum. After q time units, if the process has not relinquished the CPU, it is preempted and the process is put at the tail of the ready queue. The major problem is the selection of the time quantum. If the quantum is too large, RR scheduling degenerates to FCFS scheduling; if the quantum is too small, scheduling overhead in the form of context-switch time becomes excessive.
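The preempt-and-requeue behaviour just described can be simulated in a few lines; the process names, burst times, and quantum below are illustrative:

```python
from collections import deque

# Round-robin: each process runs for at most q time units; if it has not
# finished, it is preempted and put at the tail of the ready queue.
def round_robin(bursts, q):
    """bursts: list of (name, burst_time); returns completion times."""
    queue = deque(bursts)
    t, finish = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(q, remaining)
        t += run
        if remaining > run:
            queue.append((name, remaining - run))   # preempted, requeued
        else:
            finish[name] = t                        # relinquished the CPU
    return finish

print(round_robin([("P1", 5), ("P2", 3)], q=2))   # {'P2': 7, 'P1': 8}
```

With q larger than every burst (say q=10 here), each process runs to completion in arrival order and the schedule degenerates to FCFS, matching the quantum-selection trade-off above.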

The FCFS algorithm is nonpreemptive; the RR algorithm is preemptive. The SJF and priority algorithms may be either preemptive or nonpreemptive.

  • Multilevel queue algorithms
-allow different algorithms to be used for various classes of processes. The most common division is a foreground (interactive) queue, which uses RR scheduling, and a background (batch) queue, which uses FCFS scheduling.
  • Multilevel feedback queues
-allow processes to move from one queue to another.

Because such a wide variety of scheduling algorithms are available, we need methods to select among them. Analytic methods use mathematical analysis to determine the performance of an algorithm. Simulation methods determine performance by imitating the scheduling algorithm on a “representative” sample of processes, and computing the resulting performance.


Operating systems supporting threads at the kernel level must schedule threads - not processes - for execution. This is the case with Solaris 2 and Windows 2000, where both systems schedule threads using a preemptive priority-based scheduling algorithm, including support for real-time threads. The Linux process scheduler also uses a priority-based algorithm with real-time support as well. The scheduling algorithms for these three operating systems typically favor interactive over batch and CPU-bound processes.