Semaphore in Operating System

Updated on Oct 13, 2023 13:28 IST

Semaphores are one of the most important tools an operating system has. Why are they needed, and what are their different types? Let's try to find answers to these questions.


A semaphore is a synchronization primitive used in operating systems. It provides blocking and unblocking facilities for coordinating processes and threads: a process that cannot proceed is queued on the semaphore until another process releases it. In essence, a semaphore is a shared counter with atomic operations that allows multiple processes to use a resource without conflict. This article will study Semaphore in operating system, their need, and their types, with examples.

Must Check: History of Operating Systems

Explore Interprocess Communication in Operating System


What is Semaphore?

Semaphore ensures process synchronisation.

Before understanding semaphores, we need to understand two important terms. Concurrent processes run simultaneously or in parallel and may not depend on one another. Process synchronization is the coordination between processes that access common material such as shared code, resources, and data. Achieving process synchronization is not easy when concurrent processes all share code; this shared code is called the critical section. Here comes the role of the semaphore: it prevents all the processes from accessing the critical section simultaneously. This is called mutual exclusion.

A semaphore is an operating-system-level synchronization mechanism used to manage the allocation and use of shared resources. In a nutshell, it allows multiple threads or processes to share access to a limited number of resources, such as buffers, files, or locks.

When a thread tries to access a shared resource, it first checks the semaphore to see if the resource is available. If it is, the thread is granted access and can proceed. If not, the thread waits until the resource becomes available; this wait is known as a "semaphore wait".

So, a semaphore provides a way for multiple processes to access shared resources without interfering with each other. It can be thought of as a lock that prevents two processes from using the same resource simultaneously.
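The article names no particular language, so here is a minimal illustrative sketch in Python using the standard `threading.Semaphore` (the `worker` function and `shared` list are invented for this example); `acquire()` plays the role of wait and `release()` the role of signal.

```python
import threading

# A semaphore initialized to 1 guards a single shared resource.
sem = threading.Semaphore(1)
shared = []  # the shared resource (a plain list, for illustration)

def worker(name):
    sem.acquire()        # wait: block until the count is positive, then decrement
    shared.append(name)  # critical section: one thread at a time mutates the list
    sem.release()        # signal: increment the count, waking one waiting thread

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(shared))  # [0, 1, 2]
```

All three workers complete, but the semaphore guarantees their appends never interleave inside the critical section.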


Also explore: What is Operating Systems (OS) – Types, Functions, and Examples

Also explore: Operating system interview questions

Why is Semaphore Needed?

Semaphores are most commonly needed when multiple threads try to access a limited resource. This might be a file being read or written, or a database being queried.

A semaphore helps ensure that these resources are accessed in a controlled and orderly way, preventing data corruption or inconsistency. It can also protect against race conditions, which occur when multiple threads access and modify a resource simultaneously and the result depends on the timing of their interleaving.
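To make the race-condition point concrete, here is a small illustrative sketch (Python chosen for brevity; the `increment` function is invented for this example): four threads increment a shared counter, and the semaphore makes each read-modify-write step atomic with respect to the others.

```python
import threading

counter = 0
sem = threading.Semaphore(1)

def increment(times):
    global counter
    for _ in range(times):
        sem.acquire()   # enter the critical section
        counter += 1    # read-modify-write, now safe from interleaving
        sem.release()   # leave the critical section

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 on every run; without the semaphore, updates could be lost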

Advantages of Semaphore

  • Efficient allocation: Semaphores allow system resources to be allocated efficiently, so a limited pool of resources can be shared effectively.
  • Control over multiple processes: Semaphores give you control over multiple processes, letting you grant resource access to specific tasks as needed.
  • Increased performance: Semaphore-based synchronization blocks waiting processes instead of letting them spin, which can improve performance and system responsiveness.

You May Like – Difference Between Paging And Segmentation

Disadvantages of Semaphore

  • Semaphores are prone to programming errors, such as a missed signal or a wait performed in the wrong order.
  • Due to the complexity of semaphore programming, mutual exclusion may not be achieved.
  • One of the most significant limitations of semaphores is priority inversion: a low-priority process holds the critical section while high-priority processes keep waiting.
  • Semaphores can be expensive to implement in terms of memory and CPU usage.


Wait and Signal Operations on Semaphores

A semaphore supports two operations. Wait is executed when a process enters the critical section, and signal is executed when it exits. These are Dijkstra's classic "P" and "V" functions.

1. Wait operation

This operation, also known as the "P", sleep, decrement, or down operation, controls the entry of a process into the critical section. If the semaphore value is positive, the process may enter the critical section, and on entry the semaphore value is decremented by 1. If the value is 0, the process blocks until another process performs a signal.

2. Signal Operation

The "V", wakeup, increment, or up operation is the signal function. When a process leaves the critical section, the semaphore value is updated to allow new processes to enter. For example, if the semaphore starts at 1 (S = 1), the wait operation decrements it to 0 when a process enters the critical section. When that process finishes its work in the critical section and leaves, the signal operation increments the semaphore value back to 1. Note that signal is executed only after the process has finished the critical section.
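The wait/signal semantics described above can be sketched as a tiny class. This is an illustrative implementation built on a condition variable (the class name `SimpleSemaphore` is invented here); a real OS semaphore implements the same logic atomically in the kernel.

```python
import threading

class SimpleSemaphore:
    """Minimal counting semaphore: wait is P/down, signal is V/up."""
    def __init__(self, value=1):
        self._value = value
        self._cond = threading.Condition()

    def wait(self):
        # P / down: block until the value is positive, then decrement it.
        with self._cond:
            while self._value == 0:
                self._cond.wait()
            self._value -= 1

    def signal(self):
        # V / up: increment the value and wake one waiting thread.
        with self._cond:
            self._value += 1
            self._cond.notify()

s = SimpleSemaphore(1)
s.wait()    # S: 1 -> 0; a second wait() here would block
s.signal()  # S: 0 -> 1
```

The `while` loop (rather than an `if`) re-checks the value after each wakeup, which guards against spurious wakeups of the condition variable.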

 

Types of Semaphores

Binary Semaphore

A binary semaphore is a flag that can take only the values 0 (locked) and 1 (available). When a binary semaphore is used as the mutual exclusion mechanism, only the resource it guards is subject to the mutual exclusion.

Implementation of Binary Semaphore

As the name suggests, binary means two, so this semaphore has two values: 0 and 1. Initially, the semaphore value is 1. When process P1 enters the critical section, the wait operation drops the value to 0. If P2 now tries to enter the critical section, it cannot, because the semaphore value is 0; P2 must wait until the value is greater than 0 again. That happens only when P1 leaves the critical section and performs the signal operation, which increments the semaphore. Used this way, a binary semaphore is also known as a mutex lock. This is how the two processes are prevented from accessing the critical section simultaneously, ensuring mutual exclusion.
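The P1/P2 scenario can be demonstrated with threads standing in for processes (an illustrative sketch; the counters `in_critical` and `max_in_critical` are invented for measurement): the binary semaphore admits at most one "process" into the critical section at a time.

```python
import threading
import time

mutex = threading.Semaphore(1)  # binary semaphore: value is only ever 0 or 1
in_critical = 0                 # how many threads are inside right now
max_in_critical = 0             # the most that were ever inside at once

def process(_):
    global in_critical, max_in_critical
    mutex.acquire()             # S: 1 -> 0; later arrivals block here
    in_critical += 1
    max_in_critical = max(max_in_critical, in_critical)
    time.sleep(0.01)            # simulate work in the critical section
    in_critical -= 1
    mutex.release()             # S: 0 -> 1; one blocked thread may proceed

threads = [threading.Thread(target=process, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(max_in_critical)  # 1: mutual exclusion held
```

Because the counters are read and written only while the mutex is held, `max_in_critical` can never exceed 1.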

Counting Semaphore

Conceptually, a counting semaphore is a non-negative integer. Counting semaphores are typically used to coordinate access to a pool of resources, and the semaphore is initialized to the number of free resources. The value is then incremented atomically when a resource is released and decremented atomically when a resource is acquired.

Implementation of Counting Semaphore

If a resource has three instances, the initial value of the semaphore is 3 (i.e., S = 3). Whenever a process needs the resource, it calls the wait function, which decrements the semaphore value by one as long as the value is greater than 0. So up to three processes can access the resource at once. When a fourth process calls wait, it is blocked and placed in a waiting queue; it wakes up only when one of the executing processes performs the signal operation, incrementing the semaphore by 1.
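The three-instance scenario above can be sketched as follows (illustrative Python; the names `pool`, `active`, and `peak` are invented here): six threads compete for a semaphore initialized to 3, so at most three hold a resource instance at once and the rest wait.

```python
import threading
import time

pool = threading.Semaphore(3)  # three resource instances (S = 3)
lock = threading.Lock()        # protects only the bookkeeping counters below
active = 0                     # threads currently holding a resource instance
peak = 0                       # highest number of simultaneous holders seen

def use_resource(_):
    global active, peak
    pool.acquire()             # S decremented; a 4th concurrent caller blocks
    with lock:
        active += 1
        peak = max(peak, active)
    time.sleep(0.02)           # hold the resource instance for a moment
    with lock:
        active -= 1
    pool.release()             # S incremented; one blocked caller wakes up

threads = [threading.Thread(target=use_resource, args=(i,)) for i in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds 3
```

The semaphore caps concurrency at 3 regardless of how the scheduler interleaves the six threads.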


Conclusion

Semaphores are used in operating systems to manage access to shared system resources. In semaphore-based synchronization, resources are guarded by counters that control how many processes may use a resource at once.

When a process wants to access a resource, it checks the semaphore to see whether the resource is available. If it is not, the process waits until it becomes available. Once the process has acquired the semaphore, it can access the resource; when it is finished, it releases the semaphore so other processes can proceed. So this article covered Semaphore in operating system in detail. If you liked this article, please share it, and stay tuned for more operating system blogs!

