Description: Questions and answers to ask in an interview on distributed systems
categories : interviewing;
tags : distributed-systems;
Asynchronous execution is non-blocking: the program continues with other tasks while a slow operation (typically I/O) completes, instead of sitting idle waiting for it. Parallel programming, by contrast, runs multiple pieces of work at the same time, which works best when the work can be broken into independent units; if the threads are heavily interdependent, they spend their time waiting on each other instead of making progress. Prefer async/callback-style code over blocking code for I/O-bound work such as event handling, since the waiting happens outside your application anyway.
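The non-blocking behaviour described above can be sketched with Python's `asyncio` (the function names `fetch` and `main`, and the 0.1-second delays, are illustrative assumptions, with `asyncio.sleep` standing in for real I/O):

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # Simulates a non-blocking I/O call (e.g. a network request).
    await asyncio.sleep(delay)
    return f"{name} done"

async def main() -> list:
    # gather() runs both coroutines concurrently: while one is awaiting,
    # the event loop runs the other, so total time is about max(delay),
    # not the sum of the delays.
    return await asyncio.gather(fetch("a", 0.1), fetch("b", 0.1))

results = asyncio.run(main())
print(results)
```

Replacing `asyncio.sleep` with a blocking call like `time.sleep` would serialize the two tasks, which is exactly the blocking behaviour async code avoids.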
A distributed system manages resources spread across networked computers, while distributed computing is concerned with writing software applications that run in such a distributed environment. The two terms overlap rather than being cleanly separate: distributed computing produces the services that run on distributed systems.
Following are some common “message delivery approaches” used in distributed systems:
At-Most-Once: With at-most-once delivery, the sender sends each message once and never retries, so there is no guarantee the receiver gets it. Some messages will simply be lost; if that is unacceptable, you need at-least-once delivery or another mechanism layered on top.
At-Least-Once: With at-least-once delivery, either the sender or the receiver must actively ensure that every message is eventually delivered. Either the sender detects the failure (for example, a missing acknowledgement) and resends the message, or the receiver continuously requests messages it has not yet received. In other words, the sender pushes messages until it gets a response, or the receiver keeps pulling until it has them. The price is that some messages may be delivered more than once.
Exactly-Once: With the at-least-once approach, some messages may be delivered more than once. Ideally we want exactly-once delivery, but it is notoriously hard to guarantee in a distributed system; in practice it is usually approximated by combining at-least-once delivery with deduplication or idempotent processing on the receiver's side.
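One common way to get effectively-once processing on top of at-least-once delivery is to deduplicate by message id on the receiver. A minimal sketch (the class name `IdempotentReceiver` and its methods are hypothetical, not from any particular messaging library):

```python
import uuid

class IdempotentReceiver:
    """Drops redelivered duplicates by message id, turning at-least-once
    delivery into effectively-once processing."""

    def __init__(self):
        self.seen = set()       # ids of messages already processed
        self.processed = []     # payloads actually acted upon

    def handle(self, msg_id: str, payload: str) -> bool:
        if msg_id in self.seen:
            return False        # duplicate redelivery: ignore it
        self.seen.add(msg_id)
        self.processed.append(payload)
        return True             # acknowledged: sender can stop retrying

receiver = IdempotentReceiver()
msg_id = str(uuid.uuid4())
# The sender retries until acknowledged; the duplicate is dropped.
receiver.handle(msg_id, "charge $10")
receiver.handle(msg_id, "charge $10")
print(receiver.processed)
```

Note that the `seen` set must survive receiver restarts (e.g. be persisted) for this to hold across failures, which is where the real engineering effort lies.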
Sharding is the process of splitting a large logical dataset into multiple databases. It is a form of horizontal partitioning, since the data ends up stored across multiple machines. By doing so, a sharded database becomes capable of handling more requests than a single large machine. Consider an example: if a database holds about 1 TB of data, sharding might divide it into four partitions (shards) of roughly 256 GB each, stored on separate machines.
Database Sharding - Sharding is a technique for dividing a single dataset among many databases, allowing it to be stored across multiple machines. Larger datasets can be divided into smaller parts and stored in numerous data nodes, boosting the system’s total storage capacity. Likewise, by dividing the data over numerous machines, a sharded database can accommodate more requests than a single system. Sharding is a form of horizontal scaling (scale-out), in which more nodes are added to distribute the load; horizontal scaling provides near-limitless scalability for handling large amounts of data and high-volume workloads.
Database Partitioning - Partitioning is the process of separating stored database objects (tables, indexes, and views) into distinct portions. Large database objects are partitioned to improve manageability, performance, and availability. Partitioning can enhance performance when accessing partitioned tables in certain cases: the partitioning key can act as a leading column in indexes, reducing index size and increasing the likelihood that the most frequently used indexes fit in memory. When a query’s result set draws mostly from a single partition, scanning that partition is much faster than using an index to reach rows scattered throughout the entire table. Adding and dropping partitions allows for large-scale data loading and deletion, which improves performance, and partitions holding rarely used data can be moved to more affordable storage devices.
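The routing side of sharding can be sketched with simple hash-based shard selection, matching the 1 TB / four-shard example above (the function name `shard_for` and the key format `user:<id>` are illustrative assumptions; real systems often use consistent hashing instead so that adding shards moves fewer keys):

```python
import hashlib

NUM_SHARDS = 4  # e.g. 1 TB split into four ~256 GB shards

def shard_for(key: str) -> int:
    # A stable hash (not Python's randomized hash()) maps each key to the
    # same shard on every machine and across restarts.
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# Every request for the same key is routed to the same shard.
print(shard_for("user:42"), shard_for("user:1337"))
```

The main design trade-off is that plain modulo hashing reshuffles almost every key when `NUM_SHARDS` changes, which is why production systems typically layer consistent hashing or a lookup directory on top.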
The CAP theorem states that a distributed data store can provide at most two of the following three guarantees:
- Consistency - Every read receives the most recent write or an error.
- Availability - Every request receives a (non-error) response, without the guarantee that it contains the most recent write.
- Partition tolerance - The system continues to operate despite an arbitrary number of messages being dropped (or delayed) by the network between nodes.
The PACELC theorem extends CAP: in the case of a network partition (P) in a distributed computer system, one has to choose between availability (A) and consistency (C), as per the CAP theorem; but else (E), even when the system is running normally in the absence of partitions, one has to choose between latency (L) and consistency (C).