Operating Systems: A Brief Overview
When does a process switch occur?
- Interrupts — When hardware servicing is needed
- Trap — When OS support is required
- Blocking System Calls — When the process makes a request for OS support
When a context switch (process switch) occurs, the context of the process (e.g. state, registers, memory-management information) is saved into its PCB. The PCB is then moved to the appropriate queue (ready or blocked). Meanwhile, another process is selected by the short-term scheduling algorithm and dispatched to the CPU, and the memory-management structures are updated accordingly. When the original process is later rescheduled, its saved context is restored; the process is left unaware of the switch and carries on from where it left off.
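The save-and-restore cycle above can be sketched in a few lines of Python. This is an illustrative model only: the `PCB` class, the FIFO ready queue and the register dictionary are stand-ins for what a real kernel does in privileged code.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    state: str = "ready"                           # ready / running / blocked
    registers: dict = field(default_factory=dict)  # saved CPU context

ready_queue = deque([PCB(1), PCB(2)])

def context_switch(running: PCB, cpu_registers: dict) -> PCB:
    # 1. Save the running process's context into its PCB and re-queue it.
    running.registers = dict(cpu_registers)
    running.state = "ready"
    ready_queue.append(running)
    # 2. The short-term scheduler picks the next process (plain FIFO here).
    nxt = ready_queue.popleft()
    nxt.state = "running"
    # 3. The chosen process's saved context is restored onto the CPU.
    cpu_registers.clear()
    cpu_registers.update(nxt.registers)
    return nxt
```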
On a multi-processor computer, the problems related to process switching become more complex, and switches happen more often.
But first let’s understand threads better:
Threads are the smallest unit of execution. They share the resources of their process, yet run independently and asynchronously.
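A minimal sketch of this sharing using Python's threading module: every thread in the process sees the same shared list, while each runs independently.

```python
import threading

shared = []  # a resource owned by the process, visible to every thread

def worker(name: str):
    shared.append(name)  # all threads read and write the same list

threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```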
Why have multiple threads?
- Ease of communication
- Effective solution to prevent blocking while reading data
- Easy to create (only a TCB needs to be allocated)
Disadvantages of having Multiple Threads
Risk of asynchronicity: problems can occur when two threads run at the same time.
- Eg 1: A running thread may be interrupted for hardware reasons and a different thread chosen to run.
- Eg 2: Two threads running on two different CPUs that are not properly synchronized can cause a lot of trouble.
Three important things to note are:
- A process concerns itself with resource ownership
- Threads are the running (executable) parts of a process
- No process can exist without a thread.
Process : Process Control Blocks
The PCB keeps track of resource ownership: memory allocation, linkage and open files.
Threads: Thread Control Block
The TCB keeps the execution context: the processor registers, the stack that stores local variables, and access to the resources of the owning process.
Reasons for Multi-threading
- Foreground / background applications
- Asynchronous processing
- Infrequent tasks
- Speed of execution (e.g. reading one batch of data while processing another)
- Synchronous processing
Downsides of Threads
- Overusing threads makes a program hard to reason about
Implementation of Threads
- Kernel-Level Threads: managed directly by the kernel (not to be confused with the kernel's own internal threads).
- User-Level Threads: managed entirely in user space; their main downside is that one blocking system call blocks every thread in the process.
- Hybrid Approach: lightweight processes that map user-level threads onto kernel-level threads.
- Thread Scheduling details.
Critical code is the part of the program that must run through to completion without being interrupted. A good program keeps the critical code as small as possible, so the CPU gets in and out of the critical section as quickly as possible. An important rule of critical code is that it should always run atomically.
Mutual exclusion explicitly states that no two threads can be in the critical section at the same time.
Fundamentals of Mutual Exclusion: Using the System Bus
Arbitration on the system bus is one of the most fundamental mechanisms behind mutual exclusion: while one thread is using the bus to access memory, any other thread's access has to wait.
Software Solution to Mutual Exclusion
- Peterson’s Algorithm
The main purpose of Peterson's algorithm is to protect two threads from accessing the same resource at the same time. Each thread is assigned a boolean flag variable, and a shared turn variable decides who yields. When a thread wants to enter the critical section, it raises its flag and offers the turn to the other thread; it may proceed once the other thread's flag is down or the turn comes back to it.
flag = [False, False]   # flag[i] is True when thread i wants to enter
turn = 0                # which thread should yield

def P0():
    global turn
    flag[0] = True
    turn = 1
    while flag[1] and turn == 1:
        pass            # busy-wait: P1 wants in and it is P1's turn
    # ... critical section ...
    flag[0] = False

def P1():
    global turn
    flag[1] = True
    turn = 0
    while flag[0] and turn == 0:
        pass            # busy-wait: P0 wants in and it is P0's turn
    # ... critical section ...
    flag[1] = False
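To see the algorithm in action, the same logic can be wrapped in a small class and exercised by two real threads. This is a sketch: in CPython the GIL already serializes bytecode, so this mostly demonstrates the protocol; on real hardware you would additionally need memory barriers.

```python
import threading

class Peterson:
    """Peterson's mutual exclusion for exactly two threads (ids 0 and 1)."""
    def __init__(self):
        self.flag = [False, False]   # flag[i]: thread i wants to enter
        self.turn = 0                # whose turn it is to yield

    def lock(self, i: int):
        other = 1 - i
        self.flag[i] = True          # raise my flag
        self.turn = other            # offer the turn to the other thread
        while self.flag[other] and self.turn == other:
            pass                     # busy-wait until it is safe to enter

    def unlock(self, i: int):
        self.flag[i] = False         # lower my flag on exit

counter = 0

def worker(mutex: Peterson, tid: int, n: int):
    global counter
    for _ in range(n):
        mutex.lock(tid)
        counter += 1                 # the critical section
        mutex.unlock(tid)

mutex = Peterson()
t0 = threading.Thread(target=worker, args=(mutex, 0, 10000))
t1 = threading.Thread(target=worker, args=(mutex, 1, 10000))
t0.start()
t1.start()
t0.join()
t1.join()
```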
Problem with Peterson's algorithm: it limits the number of threads that can access the critical section to two.
Hardware Solution to Mutual Exclusion
- Disable Interrupts
Disabling interrupts prevents the CPU from being interrupted and therefore prevents asynchronicity. In practice, however, it is a very bad idea: the OS loses control of the machine while interrupts are off, and it does not help at all on a multiprocessor.
- Test and Set
A single machine-language instruction checks a boolean flag at a memory location and sets it if it is not already set; the old value is returned to indicate success or failure. Because the check and the set happen within one atomic instruction (typically by swapping the memory location with a register), no other thread or process can slip in between.
Semaphores provide mutual exclusion. A semaphore is very similar to a boolean flag backed by a counter. The operating system provides semaphore services, much of which can be done in user space. The primary operations on a semaphore are:
- wait(): causes the calling thread to block if no signal is available.
- signal(): is queued if no thread is waiting; if a thread is waiting, it causes that thread to be released.
When no signal is available, the consuming thread is queued to await the next one.
Mutual exclusion with a semaphore relies on one signal being queued initially. Before entering the critical section a thread calls wait(), and while exiting it calls signal(). No thread being in the critical section means there is a signal() in the queue.
Internally, a semaphore keeps a counter of the number of signal()s sent; wait() decrements the counter. If the counter drops below 0, the calling thread (and every subsequent waiter) is blocked and queued by the OS.
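The counter-plus-queue behaviour described above can be sketched as follows. This is an illustrative model, not how any particular OS implements it; `threading.Event` stands in for the OS blocking a thread.

```python
import threading
from collections import deque

class CountingSemaphore:
    """wait() decrements the counter; signal() increments it and wakes a waiter."""
    def __init__(self, count: int = 0):
        self.count = count
        self._mutex = threading.Lock()  # guards the counter: the check-and-modify
                                        # is itself a critical section
        self._queue = deque()           # events of blocked threads, FIFO

    def wait(self):
        with self._mutex:
            self.count -= 1
            if self.count >= 0:
                return                  # a signal was available; proceed at once
            ev = threading.Event()
            self._queue.append(ev)
        ev.wait()                       # block until released by signal()

    def signal(self):
        with self._mutex:
            self.count += 1
            if self._queue:
                self._queue.popleft().set()  # wake the longest-waiting thread
```

Starting the counter at 1 gives mutual exclusion: wait() before the critical section, signal() after.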
Problem with implementing semaphores: the act of checking and modifying a semaphore is itself a critical section, so semaphores have to fall back on a hardware solution (such as test-and-set) to prevent asynchronicity.
Threads that are all waiting for each other to release resources are said to be deadlocked, also known as permanently blocked. There is no efficient general solution to date for handling deadlocks.
Deadlock Resource Types
- Reusable — memory, device, data structure and semaphore.
- Consumable — interrupts, signal(), message, data in I/O buffer.
Requirement for Deadlock
- Mutual Exclusion — occurs when resources are not shareable.
- Hold and Wait — holding a resource and waiting for new resources.
- No Preemption — resources cannot be forcefully taken away; only the holding process can release them.
- Circular Wait — a set of processes waits for each other in a circular chain.
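A quick illustration of how imposing a fixed global lock order defeats circular wait. The lock names and the task helper are made up for the example.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

results = []

def task(n: int):
    # Every thread acquires lock_a before lock_b: with a fixed global
    # ordering, no cycle of waiting threads can ever form.
    with lock_a:
        with lock_b:
            results.append(n)  # work needing both resources

t1 = threading.Thread(target=task, args=(1,))
t2 = threading.Thread(target=task, args=(2,))
t1.start()
t2.start()
t1.join()
t2.join()
```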
Handling Deadlock
- Deadlock Prevention — eliminate any one of the four requirements above.
- Deadlock Avoidance — Banker’s Algorithm: grant a resource request only if the resulting state is safe, i.e. every thread can still eventually obtain the resources it needs.
- Deadlock Detection and Recovery — a detection algorithm runs periodically; when a deadlock is found, the system recovers by preempting resources or aborting processes.
- Ignore the problem — When deadlock is rare, it’s ignored altogether in the system.
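The safety check at the heart of the Banker's Algorithm can be sketched as follows; the matrices follow the usual textbook formulation (available resources, current allocation, and each process's remaining need).

```python
def is_safe(available, allocation, need):
    """Banker's safety check: True if some order exists in which every
    process can acquire its remaining need and run to completion."""
    work = list(available)              # resources currently free
    finish = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finish[i] and all(n <= w for n, w in zip(nd, work)):
                # Process i can finish; it then releases everything it holds.
                work = [w + a for w, a in zip(work, alloc)]
                finish[i] = True
                progress = True
    return all(finish)
```

A request is granted only if the state after the tentative allocation still passes this check.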
This was part 2 of my 3-part series on operating systems. Part 1 dealt with understanding computers, defining and understanding the OS, kernels, multiprogramming, and processes and their states. Part 3 will dive deep into memory management and talk about partitioning strategies, the buddy system, paging, virtual memory and replacement strategies.
Other topics that you could look into: the lookup problem and replacement policies; get more resources here.