Thread Scheduling

Across platforms, thread scheduling tends to be based on at least the following criteria:
- a priority, or in fact usually multiple "priority" settings that we'll discuss below;
- a quantum, or number of allocated timeslices of CPU, which essentially determines the amount of CPU time a thread is allotted before it is forced to yield the CPU to another thread of the same or lower priority (the system keeps track of the remaining quantum at any given time, plus the thread's default quantum, which could depend on thread type and/or system configuration; see the sketch after this list);
- a state, notably "runnable" vs "waiting";
- metrics about the behaviour of threads, such as recent CPU usage or the time since a thread last ran (i.e. had a share of CPU), or the fact that it has "just received an event it was waiting for".
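As a concrete illustration, on Linux with POSIX threads the current policy, priority and round-robin quantum can be queried roughly as in the following minimal sketch (available policies and reported values vary by platform, kernel and configuration):

    /* sketch: inspect this thread's scheduling policy, priority and RR quantum (Linux) */
    #include <stdio.h>
    #include <pthread.h>
    #include <sched.h>
    #include <time.h>

    int main(void)
    {
        int policy;
        struct sched_param param;
        struct timespec quantum;

        /* current scheduling policy (e.g. SCHED_OTHER, SCHED_RR) and priority of this thread */
        if (pthread_getschedparam(pthread_self(), &policy, &param) == 0)
            printf("policy=%d priority=%d\n", policy, param.sched_priority);

        /* time slice the kernel would allocate to a SCHED_RR thread of this process */
        if (sched_rr_get_interval(0, &quantum) == 0)
            printf("round-robin quantum: %ld.%09ld s\n",
                   (long) quantum.tv_sec, quantum.tv_nsec);

        return 0;
    }

Compiled with something like "gcc sched_info.c -pthread"; the exact numbers printed depend on the kernel and on the thread's current policy.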
Most systems use what we might dub priority-based round-robin scheduling to some extent. The general principles are:

- a thread of higher priority (which is a function of base and local priorities) will preempt a thread of lower priority;
- otherwise, threads of equal priority will essentially take turns at getting an allocated slice or quantum of CPU;
- there are a few extra "tweaks" to make things work.
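To make those principles concrete, here is a toy selection function (purely illustrative; the names and fields are invented and this is nothing like a real kernel's scheduler) that always prefers the highest-priority runnable thread and rotates among threads of equal priority:

    /* toy sketch of priority-based round-robin selection; all names are invented */
    enum toy_state { RUNNABLE, WAITING };

    struct toy_thread {
        int            priority;      /* higher value = higher priority (assumed convention) */
        enum toy_state state;
        int            quantum_left;  /* timeslices left before a forced yield */
    };

    /* Pick the highest-priority runnable thread.  By scanning from the slot
     * after the previously chosen one, threads of equal priority take turns. */
    static int pick_next(const struct toy_thread t[], int n, int prev)
    {
        int best = -1;
        for (int i = 1; i <= n; i++) {
            int idx = (prev + i) % n;
            if (t[idx].state != RUNNABLE)
                continue;
            if (best < 0 || t[idx].priority > t[best].priority)
                best = idx;
        }
        return best;  /* -1: nothing runnable, the CPU would idle */
    }

A real scheduler additionally recomputes priorities from the metrics listed earlier (recent CPU usage, time since last run, events received) and preempts the running thread as soon as a higher-priority thread becomes runnable or its quantum expires.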
States

Depending on the system, there are various states that a thread can be in. Probably the two most interesting are:

- runnable, which essentially means "ready to consume CPU"; being runnable is generally the minimum requirement for a thread to actually be scheduled onto a CPU;
- waiting, meaning that the thread currently cannot continue because it is waiting for a resource such as a lock or I/O, for memory to be paged in, for a signal from another thread, or simply for a period of time to elapse (sleep).
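The sketch below (again using POSIX threads; the variable and function names are arbitrary) shows a thread passing through the waiting state: the worker blocks on a condition variable, consuming no CPU, and becomes runnable again once the main thread signals the event it was waiting for:

    /* sketch: a worker thread waits (blocked, off-CPU) until an event arrives */
    #include <stdio.h>
    #include <pthread.h>
    #include <unistd.h>

    static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond  = PTHREAD_COND_INITIALIZER;
    static int             ready = 0;

    static void *worker(void *arg)
    {
        (void) arg;
        pthread_mutex_lock(&lock);
        while (!ready)
            pthread_cond_wait(&cond, &lock);   /* waiting: descheduled until signalled */
        pthread_mutex_unlock(&lock);
        printf("worker: runnable again, doing work\n");
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);

        sleep(1);                              /* main itself "waits" for a period of time */

        pthread_mutex_lock(&lock);
        ready = 1;
        pthread_cond_signal(&cond);            /* the event the worker was waiting for */
        pthread_mutex_unlock(&lock);

        pthread_join(t, NULL);
        return 0;
    }

While blocked in pthread_cond_wait() or sleep(), a thread is not runnable and so never competes for the CPU; the wake-up makes it runnable again, at which point the scheduler applies the priority rules described above.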

