Two Generals' Problem

[Figure: Positions of the armies. Armies A1 and A2 cannot see one another directly, so must communicate by messengers, but their messengers may be captured by army B.]

In computing, the Two Generals' Problem is a thought experiment meant to illustrate the pitfalls and design challenges of attempting to coordinate an action by communicating over an unreliable link. In the experiment, two generals are only able to communicate with one another by sending a messenger through enemy territory. The experiment asks how they might reach an agreement on the time to launch an attack, while knowing that any messenger they send could be captured.

The Two Generals' Problem often appears as an introduction to the more general Byzantine Generals problem in computer networking courses (particularly with regard to the Transmission Control Protocol, where it shows why TCP cannot guarantee state consistency between its endpoints), though it applies to any two-party communication in which communication failures are possible. A key concept in epistemic logic, the problem highlights the importance of common knowledge. Some authors also refer to it as the Two Generals' Paradox, the Two Armies Problem, or the Coordinated Attack Problem.[1][2] The Two Generals' Problem was the first computer communication problem proven to be unsolvable.[3] An important consequence of this proof is that generalizations such as the Byzantine Generals problem are also unsolvable in the face of arbitrary communication failures, which sets realistic expectations for any distributed consistency protocol.

Definition

Two armies, each led by a different general, are preparing to attack a fortified city. The armies are encamped near the city, each on its own hill. A valley separates the two hills, and the only way for the two generals to communicate is by sending messengers through the valley. Unfortunately, the valley is occupied by the city's defenders, and there is a chance that any given messenger sent through it will be captured.[4]

While the two generals have agreed that they will attack, they have not agreed upon a time. The attack can succeed only if both armies strike the city simultaneously; an army attacking alone would be destroyed. The generals must therefore communicate to decide on a time and to agree to attack at that time, and each general must know that the other knows they have agreed to the attack plan. Because an acknowledgement of receipt can be lost as easily as the original message, a potentially infinite series of messages is required to reach consensus.[5]

The thought experiment asks how the generals might go about reaching such a consensus. In its simplest form, one general is designated the leader; the leader decides on the time of the attack and must communicate this time to the other general. The problem is to devise algorithms for the generals to follow, covering both the sending and the processing of messages, that allow them to correctly conclude:

Yes, we will both attack at the agreed-upon time.

Although it is quite simple for the generals to come to an agreement on a time to attack (for example, one successful message followed by one successful acknowledgement), the subtlety of the Two Generals' Problem lies in the impossibility of designing algorithms that allow the generals to safely agree to the above statement.[citation needed]
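
The setting can be made concrete with a short simulation. The following sketch (in Python; the names LossyChannel, capture_probability, and one_message_protocol, and the assumption that the leader commits to attacking once it has sent the order, are illustrative choices rather than part of the problem statement) models the messenger route as a channel that loses each message independently with some probability and plays out the simplest protocol, in which the leader sends the attack time once. Runs in which the messenger is captured leave the first general attacking alone, which is precisely the coordination failure the problem formalizes.

    import random

    class LossyChannel:
        """Messenger route through the valley: each message is captured
        (lost) independently with probability capture_probability."""

        def __init__(self, capture_probability, seed=None):
            self.capture_probability = capture_probability
            self.rng = random.Random(seed)

        def send(self, message):
            """Return the message if the messenger gets through, else None."""
            if self.rng.random() < self.capture_probability:
                return None        # messenger captured in the valley
            return message

    def one_message_protocol(channel):
        """Leader (general 1) picks a time, sends it once, and (by assumption
        here) commits to attacking; general 2 attacks only if the order arrives.
        Returns (general_1_attacks, general_2_attacks)."""
        delivered = channel.send("Attack at 0900 on August 4")
        return True, delivered is not None

    outcomes = [one_message_protocol(LossyChannel(0.3)) for _ in range(10_000)]
    uncoordinated = sum(a != b for a, b in outcomes)
    print(f"uncoordinated attacks in {uncoordinated} of {len(outcomes)} runs")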

Illustrating the problem

The first general may start by sending a message: "Attack at 0900 on August 4." However, once dispatched, the first general has no idea whether or not the messenger got through. This uncertainty may lead the first general to hesitate to attack due to the risk of being the sole attacker.

To gain certainty, the second general may send a confirmation back to the first: "I received your message and will attack at 0900 on August 4." However, the messenger carrying the confirmation may also be captured, and the second general may hesitate, knowing that the first might hold back without the confirmation.

Further confirmations may seem like a solution: let the first general send a second confirmation, "I received your confirmation of the planned attack at 0900 on August 4." However, this new messenger is just as liable to be captured. It quickly becomes evident that no matter how many rounds of confirmation are exchanged, there is no way to satisfy the requirement that each general be certain the other has agreed to the attack plan. Both generals will always be left wondering whether their last messenger got through.[6]
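
The regression can also be simulated. In the sketch below (Python; the independent capture probability and the helper confirmation_chain are assumptions made purely for illustration), the generals alternate confirmations until a messenger is captured; whoever sent the last delivered message never sees a reply to it, and so is exactly the general left wondering whether it got through.

    import random

    def confirmation_chain(capture_probability, max_messages, rng):
        """Generals alternate messages: order, confirmation, confirmation of
        the confirmation, and so on. Returns how many messages were delivered
        before the first messenger was captured."""
        for delivered in range(max_messages):
            if rng.random() < capture_probability:
                return delivered       # messenger number delivered + 1 was captured
        return max_messages

    rng = random.Random(7)
    delivered = confirmation_chain(capture_probability=0.2, max_messages=1_000, rng=rng)
    if delivered == 0:
        print("the original order never arrived; the second general knows nothing")
    else:
        # Odd-numbered messages are sent by the first general, even-numbered
        # by the second; whoever sent the last delivered message saw no reply.
        last_sender = 1 if delivered % 2 == 1 else 2
        print(f"{delivered} messages delivered; general {last_sender} sent the "
              f"last one and cannot tell whether it arrived")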

Proof

Suppose a deterministic protocol using a fixed number of messages solves the problem, and consider an execution in which one or more messages are successfully delivered and one or more are not. By assumption, the protocol gives both generals the shared certainty needed to attack. Consider the last message that was successfully delivered. If that message had not been delivered, then at least one general (presumably the receiver) would decide not to attack. From the viewpoint of that message's sender, however, the sequence of messages it has sent and received is exactly the same as it would have been had the message been delivered. Since the protocol is deterministic, the sender will still decide to attack. The protocol thus leads one general to attack and the other not to attack, contradicting the assumption that it solves the problem.
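
The indistinguishability step of this argument can be checked mechanically for a concrete candidate protocol. The sketch below (Python; the two decision rules and the string-labelled histories are hypothetical examples, not part of the proof) encodes each general's deterministic decision as a function of its local history of sent and received messages, and compares two executions that differ only in whether the final acknowledgement was delivered: the acknowledgement's sender has an identical history in both executions, so its decision cannot change, while the other general's decision does.

    # Each general's local history is modelled as the set of events it has seen;
    # a deterministic decision rule maps that history to attack / do not attack.
    def general_1_rule(history):
        return "received ack" in history

    def general_2_rule(history):
        return "received order" in history

    # Execution E1: the order is delivered, then the acknowledgement is delivered.
    g1_e1 = frozenset({"sent order", "received ack"})
    g2_e1 = frozenset({"received order", "sent ack"})

    # Execution E2: identical except that the acknowledgement (the last delivered
    # message of E1) is captured. General 2, who sent it, sees exactly the same
    # history as in E1.
    g1_e2 = frozenset({"sent order"})
    g2_e2 = frozenset({"received order", "sent ack"})

    assert g2_e1 == g2_e2                                  # indistinguishable to general 2
    assert general_2_rule(g2_e1) == general_2_rule(g2_e2)  # so its decision is unchanged

    print("E1:", general_1_rule(g1_e1), general_2_rule(g2_e1))  # True True  (coordinated)
    print("E2:", general_1_rule(g1_e2), general_2_rule(g2_e2))  # False True (uncoordinated)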

A non-deterministic protocol with a potentially variable message count can be compared to an edge-labeled finite tree, where each node represents a possible run of the protocol up to a specified point and the edges from a node to each of its children are labeled with the messages sent to reach that child state. A protocol that terminates before sending any messages is represented by a tree containing only a root node, and leaf nodes represent points at which the protocol terminates. Suppose there exists a non-deterministic protocol P that solves the Two Generals' Problem. Then, by an argument similar to the one used for fixed-length deterministic protocols above, the protocol P', whose tree is obtained from that of P by removing all leaf nodes and the edges leading to them, must also solve the problem. Since the tree of P is finite, repeating this step eventually shows that the protocol that terminates before sending any messages would solve the problem. But clearly it does not; therefore no non-deterministic protocol can solve the problem.
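
The pruning step of this argument can be illustrated with a toy encoding. In the sketch below (Python; the nested-dictionary representation and the message labels are illustrative choices, not a standard encoding), repeatedly deleting every leaf of a finite protocol tree eventually leaves only the root, i.e. the protocol that sends no messages at all, which is the protocol the argument shows would have to solve the problem.

    def prune_leaves(tree):
        """Remove every leaf node (and the edge leading to it) from a protocol
        tree encoded as nested dicts of the form {message_label: subtree}."""
        return {label: prune_leaves(subtree)
                for label, subtree in tree.items()
                if subtree}                 # children that are leaves ({}) are dropped

    # A toy protocol tree: send the order, then either receive an acknowledgement
    # or send a retry and receive an acknowledgement to that.
    protocol = {"attack at 0900": {"ack": {}, "retry": {"ack": {}}}}

    while protocol:                          # a finite tree reaches the bare root
        protocol = prune_leaves(protocol)
        print(protocol)
    # {'attack at 0900': {'retry': {}}}
    # {'attack at 0900': {}}
    # {}  <- the protocol that sends no messages at all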

Engineering approaches

A pragmatic approach to the Two Generals' Problem is to use schemes that accept the uncertainty of the communications channel and, rather than trying to eliminate it, reduce it to an acceptable degree. For example, the first general could send 100 messengers, anticipating that the probability of all of them being captured is low. With this approach, the first general will attack no matter what, and the second general will attack if any message is received. Alternatively, the first general could send a stream of messages and the second general could acknowledge each one, with each general feeling more confident with every message received. As the proof shows, however, neither can be certain that the attack will be coordinated: there is no decision rule (for example, attack if more than four messages are received) that is guaranteed to prevent one general from attacking without the other. The first general can also number each message, marking it as 1, 2, 3, ... of n; this lets the second general gauge how reliable the channel is and send back enough messages to ensure a high probability of at least one being received. If the channel could be made reliable, a single message would suffice and additional messages would not help; the last message is as likely to be lost as the first.
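
The redundant-messenger scheme can be quantified under the additional assumption, not made in the original problem, that each messenger is captured independently with a fixed probability p. The chance that all n messengers are captured is then p^n, which shrinks rapidly with n but never reaches zero; the short calculation below (Python) illustrates this for n = 100.

    # Probability that all n messengers are captured, assuming each is captured
    # independently with probability p (an illustrative assumption; the problem
    # itself makes no statistical claims about the channel).
    def probability_all_captured(p, n):
        return p ** n

    for p in (0.1, 0.5, 0.9):
        print(f"capture probability {p}: "
              f"P(all 100 messengers captured) = {probability_all_captured(p, 100):.3e}")
    # Even at p = 0.9 the chance that every messenger is lost is about 2.7e-05,
    # yet no finite number of messengers ever drives it to exactly zero.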

If the generals must sacrifice lives every time a messenger is sent and intercepted, an algorithm can be designed to minimize the number of messengers required to achieve a given level of confidence that the attack is coordinated. To avoid sacrificing hundreds of lives for very high confidence, the generals could agree to use the absence of messengers as an indication that the general who began the exchange has received at least one confirmation and has committed to the attack. Suppose it takes a messenger one minute to cross the danger zone; then allowing 200 minutes of silence after confirmations have been received yields extremely high confidence without sacrificing further messenger lives. In this scheme, messengers are sent only when a party has not yet received the attack time. At the end of the 200 minutes, each general can reason: "I have not received an additional message for 200 minutes; either 200 messengers failed to cross the danger zone, or the other general has confirmed and committed to the attack and is confident I will attack too."
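
The value of the silence convention can likewise be estimated under the same illustrative independence assumption. If a general who had not received the attack time would keep sending one messenger per minute, then 200 minutes of silence are misleading only if 200 consecutive messengers were all captured, an event of probability p^200; the sketch below (Python) computes this for a few values of p.

    # Probability that 200 minutes of silence are misleading, i.e. that an
    # uninformed general sent one messenger per minute and all 200 were captured.
    # The per-messenger capture probability p and its independence are
    # illustrative assumptions, not part of the original scheme.
    def probability_silence_is_misleading(p, minutes=200):
        return p ** minutes

    for p in (0.5, 0.9, 0.99):
        print(f"capture probability {p}: "
              f"P(silence despite an uncommitted general) = "
              f"{probability_silence_is_misleading(p):.3e}")
    # Even for p = 0.99 the probability is about 0.134, and for p = 0.5 it is
    # about 6.2e-61, so the silence rule buys high (but never total) confidence.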

History

The Two Generals' Problem and its impossibility proof were first published by E. A. Akkoyunlu, K. Ekanadham, and R. V. Huber in 1975 in "Some Constraints and Trade-offs in the Design of Network Communications",[7] where the problem is described starting on page 73 in the context of communication between two groups of gangsters.

This problem was given the name the Two Generals Paradox by Jim Gray[8] in 1978 in "Notes on Data Base Operating Systems"[9] starting on page 465. This reference is widely given as a source for the definition of the problem and the impossibility proof, though both were published previously as mentioned above.

References

  1. ^ Gmytrasiewicz, Piotr J.; Durfee, Edmund H. (1992). "Decision-Theoretic Recursive Modeling and the Coordinated Attack Problem". Artificial Intelligence Planning Systems. San Francisco: Morgan Kaufmann Publishers. pp. 88–95. doi:10.1016/B978-0-08-049944-4.50016-1. ISBN 9780080499444. Retrieved 27 December 2013.
  2. ^ Panconesi, Alessandro. "The coordinated attack and the jealous amazons". Retrieved 2011-05-17.
  3. ^ Leslie Lamport. "Solved Problems, Unsolved Problems and Non-Problems in Concurrency". 1983. p. 8.
  4. ^ Ruby, Matt. "How the Byzantine General's Problem Relates to You in 2024". Swan Bitcoin. Retrieved 2024-02-16.
  5. ^ "The Byzantine Generals Problem (Consensus in the presence of uncertainties)" (PDF). Imperial College London. Retrieved 16 February 2024.
  6. ^ Lamport, Leslie; Shostak, Robert; Pease, Marshall. "The Byzantine Generals Problem" (PDF). SRI International. Retrieved 16 February 2024.
  7. ^ Akkoyunlu, E. A.; Ekanadham, K.; Huber, R. V. (1975). Some constraints and trade-offs in the design of network communications. Portal.acm.org. pp. 67–74. doi:10.1145/800213.806523. S2CID 788091. Retrieved 2010-03-19.
  8. ^ "Jim Gray Summary Home Page". Research.microsoft.com. 2004-05-03. Retrieved 2010-03-19.
  9. ^ Bayer, R.; Graham, R. M.; Seegmüller, G. (1978). Operating Systems. Springer-Verlag. pp. 393–481. ISBN 0-387-09812-7. Online version: Notes on Data Base Operating Systems. Portal.acm.org. January 1978. pp. 393–481. ISBN 978-3-540-08755-7. Retrieved 2010-03-19.
