Session 10 (Concurrency)

In programming, we regularly run into situations like these:

  • When the kernel assigns two processes to different cores of a machine, and both cores execute their instructions at the same time.
  • When new connections arrive before earlier ones have finished, and must be handled immediately.

More generally, it’s when we need to handle multiple tasks at about the same time.

That’s it. That’s all concurrency is. Parallel execution is when two tasks actually run at the same instant, making it a special case of concurrent execution.

Why Does The Ability to Handle Concurrency Matter?

Because concurrency leads to several issues:

  • How do you store shared information, and keep it consistent, when independent tasks all need to read and write it?
  • How do you allocate a finite pool of resources to a number of jobs that may only ever grow?
  • How do you distribute new information across services that are already busy handling work?

A kernel is a microcosm of all these conundrums:

  • It must prevent memory corruption by dynamically assigning and reassigning virtual memory addresses to programs as they need them, while maintaining a mapping from virtual to physical memory.
  • It must share a finite number of cores among many programs, often using scheduling algorithms that let the same core work on several programs within the same unit of time to maximise utilisation.
  • It must be able to communicate with programs when something unexpected happens, e.g. a hardware interrupt that requires a program to quit immediately.

But these issues are not limited to the kernel:

  • A distributed database may handle several hundred thousand reads and writes at the same time, and each read should reflect the state of the database after the writes received alongside it have been processed.
  • A data ingestion pipeline may require several processes to communicate with each other, often to synchronise their executions when they run on machines whose clocks tick at different rates.
  • A content delivery network must be able to handle the unexpected loss of a server, or the addition of brand new ones, without compromising the integrity of the information, since new servers may be launched at any moment to handle increased load (such as during a DDoS attack).

Approaches to Handling Concurrency

From a very high-level perspective, there are really only a few tactics for safely dealing with tasks that must be run concurrently:

  • Launch many workers, and have them read and write information in one shared place, for all to see.

    This is like hiring workers and having them all come up to a bulletin board to learn of new changes and update it with discoveries.

    This approach often runs into race conditions: what happens if one worker tries to read the bulletin board at the same time someone else is writing on it? Should they wait for the writer to finish, or get back to work right away? What if two workers write over each other at the same time, and the information that gets overwritten was the more important of the two?

    Computers deal with these situations by either

    • using thread-safe structures: effectively, all workers must stand in a queue if they want to read or write. This is one of the oldest solutions to this problem, and it features heavily in Python, C++, and Java (a minimal sketch follows this list).
    • requiring that a shared resource never be written to after it is created. This is called guaranteeing immutability, and it is fashionable in modern server-side JavaScript development as well as being a design feature of the programming language Rust.
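
    As a concrete illustration of the queueing approach, here is a minimal Python sketch (the four workers and the shared counter are just assumptions for the example): every worker must acquire a lock before touching the shared value, so reads and writes can never interleave.

      import threading

      counter = 0                  # the shared "bulletin board"
      lock = threading.Lock()      # the queue workers must stand in

      def work():
          global counter
          for _ in range(100_000):
              with lock:           # one worker at a time may read and write
                  counter += 1

      threads = [threading.Thread(target=work) for _ in range(4)]
      for t in threads:
          t.start()
      for t in threads:
          t.join()

      print(counter)               # always 400000 with the lock in place

    Without the lock, counter += 1 is a read followed by a write, and two workers can read the same value before either writes it back, silently losing an update.
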
  • Launch many workers, and have them update any other worker that requires information.

    This is like hiring workers and having them go up to other workers and tell them that they’re done. For programs that require a lot of cooperation, this means a lot of messages passed. For programs with many workers at many stages, it is ideal for letting later stages know that an earlier stage has finished.

    This is also the concurrency model adopted by languages such as Erlang and Scala, and it is referred to as the actor model or message-passing concurrency; a small sketch follows.
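
    The same idea can be roughed out in Python (queue.Queue here is only a stand-in for an Erlang-style mailbox, not a full actor system): the two workers never share state, they only pass messages.

      import threading
      import queue

      def producer(outbox):
          for i in range(3):
              outbox.put(f"result {i}")   # send a message instead of sharing state
          outbox.put(None)                # a sentinel meaning "I'm done"

      def consumer(inbox):
          while True:
              msg = inbox.get()           # block until a message arrives
              if msg is None:
                  break
              print("received:", msg)

      mailbox = queue.Queue()             # the consumer reads only its own mailbox
      threading.Thread(target=producer, args=(mailbox,)).start()
      consumer(mailbox)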

  • Launch one very, very, very good worker and have them split their time effectively on tasks.

    This is a design staple of web servers like Nginx and Node.js, or in-memory stores like Redis, where they handle multiple incoming tasks with exactly one worker.

    This worker divides its time between making progress on existing tasks and picking up new ones, so that it “seems” to be doing several things at once but really isn’t (see the sketch below).

    It immediately sidesteps all the problems that come with multiple workers, but, well, even one worker can only do so much.
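
    Python’s asyncio gives a small taste of this model (the task ids and delays below are invented for illustration): a single worker interleaves several tasks, switching to another whenever the current one is waiting.

      import asyncio

      async def handle(task_id, delay):
          await asyncio.sleep(delay)      # while this task waits, the lone worker switches away
          print(f"task {task_id} done")

      async def main():
          # three tasks, one worker: they finish in order of their delays,
          # not in the order they were started
          await asyncio.gather(handle(1, 0.3), handle(2, 0.1), handle(3, 0.2))

      asyncio.run(main())

    Everything runs on a single thread, so no locks are needed, but one slow computation would stall every other task.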
