
The prosecutor’s dilemma May 25, 2015

Posted by Ezra Resnick in Game theory.

“Here’s the deal: We’ve got enough evidence to convict you and your partner of a misdemeanor right now, and you’d each get two years in prison. Now, if both of you come clean, and confess to the felony I know you committed, you’ll each get three years. Why would you do that, you ask? Well, my colleague is in the other room right now making the same offer to your partner. If he signs a confession and you don’t, we let him off with one year and give you four! Of course, if you sign a confession and he doesn’t, the reverse applies. Now, if you think about it rationally, you’ll see that —”

“Yeah, yeah — I’m better off confessing no matter what my partner does, and we both end up doing three years — I’m familiar with the Prisoner’s Dilemma. Do you think I’m stupid? Listen up, cause here’s the deal: I’ll confess, but I only serve one year — no matter what my partner does. If you refuse, then I confess nothing, and I go public with the story of how you tried to use Game Theory to bully me into signing a false confession. Oh, and did I mention that my partner is making the same offer to your colleague in the other room right now? Do you want your colleague to walk away with a felony conviction, while you’re left to defend misconduct charges on your own? Think about it rationally…”
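For concreteness, here is a minimal sketch (hypothetical Python, using the sentences from the prosecutor's offer above) of the payoff matrix, along with a check that confessing really is better for you no matter what your partner does:

```python
# Years in prison as (you, partner), indexed by (your move, partner's move),
# taken from the prosecutor's offer above.
sentences = {
    ("silent",  "silent"):  (2, 2),   # both convicted of the misdemeanor
    ("silent",  "confess"): (4, 1),   # partner confesses, you stay silent
    ("confess", "silent"):  (1, 4),   # you confess, partner stays silent
    ("confess", "confess"): (3, 3),   # both confess to the felony
}

# Whatever the partner does, confessing gets you a shorter sentence:
for partner in ("silent", "confess"):
    assert sentences[("confess", partner)][0] < sentences[("silent", partner)][0]
```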

John Nash 1928-2015



Playing by the rules November 15, 2014

Posted by Ezra Resnick in Game theory, Logic.

“Want to play a game?”

“Sure! But first we need to agree on the rules.”

“Of course. I propose that we take turns proposing rules.”

“Agreed, and since you just proposed the first rule, I guess I get to propose the next one.”

“Wait a minute: I didn’t propose a rule for the game itself — I merely proposed a rule for how we ought to go about proposing the game rules.”

“Apologies; yours was indeed a meta-rule. In that case, let me propose a meta-rule of my own: Any disagreement about a proposed game rule will be decided by flipping a coin.”

“I’m not sure I agree with that.”

“Well, we haven’t yet agreed on a method for resolving disagreements about meta-rules. Do you have a suggestion?”

“How about we take turns: one of us gets to decide the first disagreement, the other decides the next disagreement, and so on.”

“OK, then: following your meta-meta-rule, I now get to decide our meta-rule disagreement about how to resolve disagreements about game rules.”

“Hold on: Who said you get to decide the first meta-rule disagreement?”

“Well, I let you determine the meta-meta-rule on how to decide meta-rule disagreements, so now it’s my turn.”

“Nice try, but we never agreed on how to resolve disagreements about meta-meta-rules. You can’t just make unilateral assumptions.”

“Well, how come you got to propose the first meta-rule to begin with? If you get to propose the first meta-rule then I should get to decide the first meta-rule disagreement.”

“Then I get to propose the first game rule.”

“Agreed.”

“First, I propose the following meta-rule: If the first rule proposal is challenged and loses the coin flip, the challenger must propose the following as his next rule: ‘The winner is whoever proposed playing the game.’”

“I don’t agree to that!”

“Noted, but according to our meta-meta-rule, it’s my turn to decide in case of disagreement on a meta-rule. And now for my first proposed game rule: The winner is whoever proposed playing the game.”

“Even if I disagree I still lose. Nicely played.”

“Thanks! That was fun.”

“Indeed. But maybe we should play a different game next time?”

“Sure! As long as we can agree on the rules…”

A computer scientist plays Twenty Questions June 12, 2012

Posted by Ezra Resnick in Computer science, Game theory.

“Would you like to play Twenty Questions?”

“Sure, how do we play?”

“I think of a famous person — alive or dead, real or fictional — and you have to guess who it is, in no more than twenty yes-or-no questions.”

“Well… I could use binary search to identify a single letter of the alphabet in 4.7 yes-or-no questions (on average), so nineteen questions should allow me to identify four letters. Why don’t you just tell me the first four letters of your person’s last name, and I’ll guess who it is.”

“Actually, I don’t think I want to play any more.”

“That’s OK — the game is flawed, anyway: it assumes there are no more than 1,048,576 famous people to choose from…”
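The arithmetic behind the computer scientist's counter-offer is easy to check; a quick sketch:

```python
import math

bits_per_letter = math.log2(26)   # ~4.70 yes/no questions per letter, on average
print(4 * bits_per_letter)        # ~18.8, so nineteen questions cover four letters
print(26 ** 4, 2 ** 19)           # 456,976 possible four-letter prefixes < 524,288 outcomes
print(2 ** 20)                    # 1,048,576: the most people twenty questions can distinguish
```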

Voting like it’s 1938 November 5, 2011

Posted by Ezra Resnick in Democracy, Game theory, Politics.

It’s Election Day this week in the U.S., and in Cambridge we’ll be electing the City Council (nine members) and the School Committee (six). Cambridge uses a “Proportional Representation” voting method, which allows each voter to rank all of the candidates (instead of just picking a single favorite). The advantage of such a method is that in the event that your top candidate can’t use your vote — either because he got too few votes to be elected or because he has more votes than needed — the rest of your preferences can be taken into account for choosing among the remaining candidates.

For instance, under Cambridge’s system, a City Council candidate needs a tenth of the votes (plus one) to be elected; any votes he receives above that quota are called surplus, and surplus ballots are redistributed among the remaining candidates according to the next preference on each ballot. This is not as straightforward as it sounds, however: we need to decide which of a candidate’s ballots to transfer — and the manner in which we do so can influence the election’s result.

Cambridge uses the “Cincinnati Method” for surplus distribution, wherein the surplus ballots to be transferred from a candidate are drawn at regular intervals from the ordered sequence of all that candidate’s ballots. For example: if some candidate received 500 top-rank votes and only 400 votes are needed for election (i.e. he has a 100-vote surplus), every fifth one of the candidate’s ballots (in the order they were originally counted) — ballots 5, 10, 15, 20, etc. — will be selected for redistribution.
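The selection rule is easy to picture in code. Here is a minimal sketch (hypothetical; not Cambridge's actual tabulation software) of drawing surplus ballots at regular intervals:

```python
def cincinnati_surplus(ballots, quota):
    """Sketch of the Cincinnati Method described above: draw surplus ballots
    at regular intervals from the candidate's pile, in the order the ballots
    were originally counted."""
    surplus = len(ballots) - quota
    if surplus <= 0:
        return []
    interval = len(ballots) // surplus            # e.g. 500 // 100 = 5
    # Ballots 5, 10, 15, ... (counting from 1) are transferred.
    return [b for i, b in enumerate(ballots, start=1) if i % interval == 0][:surplus]
```

Counting the same pile in a different order changes which physical ballots land at positions 5, 10, 15, and so on, which is exactly the order-dependence discussed next.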

The disturbing aspect of the Cincinnati Method is its somewhat random nature, and the fact that its outcome is dependent on the order in which the ballots happen to be counted: if they’re counted in a different order, the election results could be different!

There are alternative methods for transferring surplus ballots. As explained by mathematician Robert Winters, the method that is generally adopted these days is known as “fractional transfer,” wherein all of an elected candidate’s ballots are transferred to the candidates ranked next on those ballots, each carrying only a fraction of a vote. This method has the virtue of being completely independent of the order in which the ballots are counted, and it is actually the default method built into the tabulation software used in Cambridge elections — Cambridge explicitly overrides the default option and instructs the software to use the Cincinnati Method instead!
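Fractional transfer needs no selection at all; a minimal sketch of the idea (again hypothetical, just illustrating the weight each ballot carries forward):

```python
def fractional_transfer_weight(first_choice_votes, quota):
    """Sketch of fractional transfer: every ballot in the elected candidate's
    pile moves on to its next preference, carrying only this fraction of a vote,
    so the outcome cannot depend on the order in which ballots were counted."""
    return (first_choice_votes - quota) / first_choice_votes   # e.g. (500 - 400) / 500 = 0.2
```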

Why does Cambridge insist on using a flawed method instead of switching to a better one? The answer is rather depressing.

Cambridge is […] in the position where we must abide by a 1938 law (Mass. General Laws, Chapter 54A) that restricts our methods for redistributing surplus ballots to systems that were in use somewhere in the United States at that time.

Ah, tradition: things should always be done the way they were done in the past. When will our elected officials get around to repealing this embarrassingly stupid law?

The game of life and death March 19, 2011

Posted by Ezra Resnick in Game theory.

You and your sweetheart have been captured and brought before a semi-barbaric (yet game-theory-savvy) tyrant. He informs you that your fate will be decided by the outcome of a little game he invented, which he calls “The Game of Life and Death.” The tyrant produces two identical gold coins, and hands one to you. One side of the coin is engraved with a tree (i.e., Life), while the other side is engraved with a skull (i.e., Death). All you need to do is choose which side of the coin to play. The tyrant will do likewise, and your choices will be revealed simultaneously. Your fate and your sweetheart’s will then be determined as follows:

                    Tyrant plays Life            Tyrant plays Death
You play Life       Only your sweetheart dies    You both live
You play Death      Only you die                 You both die

You have until the next morning to make your decision. Before you are taken away, though, you see the tyrant whispering something into your sweetheart’s ear. Later, in your cell, you ask her what he said. Your sweetheart looks into your eyes, and tells you that the tyrant promised he would play Death. If the tyrant is to be believed (and he is known for being an honest tyrant), this is good news, because then you can save both yourself and your sweetheart by playing Life. Your sweetheart encourages you to do so.

But suddenly a thought occurs to you: what if the tyrant actually told your sweetheart that he intended to play Life? In that case, she might be lying in order to get you to play Life as well, and so sacrifice her life for yours! If this is the case, you must certainly play Death, so that she will live. On the other hand, if you play Death, and it turns out that both the tyrant and your sweetheart were truthful (meaning that the tyrant plays Death as well), then both of you will be killed — even though you would both have gone free if only you had followed your sweetheart’s advice…

So what would you do? Life or Death?

But does he know that I know that he knows? December 25, 2010

Posted by Ezra Resnick in Game theory, Puzzles.

Five children have been playing together, and three of them have gotten mud on their foreheads. Each child can see mud on others but not on himself. When the children come home, their father says that at least one of them has mud on his forehead; he then asks if anyone can deduce whether there is mud on his own forehead. The children look around, but no one answers. So the father asks again: Does anyone know whether he has mud on his own forehead? Silence. The father then repeats his question a third time, at which point all three dirty children immediately step forward and proclaim that their foreheads must be muddy.

The first (and simplest) puzzle is: How did the kids know? And why did the father have to ask three times?

The solution is inductive: begin by considering what would happen if there were only one muddy child, Alex. He would see that all the others are clean, so when the father states that at least one child is dirty, Alex would immediately know it must be him. Therefore, if there were only one dirty child, he would come forward after the father’s first query. Next, suppose there are two muddy children, Alex and Bob. Alex sees that Bob is dirty, and vice versa. As far as Alex knows, Bob could be the only dirty one, and as far as Bob knows, Alex could be the only dirty one; so neither of them step forward after the father’s first query (each expecting the other to do so). However, when Alex sees that Bob did not immediately come forward, and since we already concluded that if there were only one muddy child he would have identified himself right away, Alex can deduce that his own forehead must be muddy as well. Bob reasons likewise, and so they both step forward when the father asks a second time. Applying the same logic to our original scenario shows that the three dirty children can identify themselves as soon as they see that no one came forward after the second query — so in this case, the third time is the charm.
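The induction can be condensed into a short simulation. Here is a minimal sketch (hypothetical code, encoding only the rule derived above: a child who sees k muddy foreheads can step forward at query k + 1):

```python
def rounds_until_confession(muddy):
    """Simulate the puzzle. `muddy` is a list of booleans (True = muddy forehead);
    at least one child is assumed to be muddy. A muddy child who sees k muddy
    foreheads can deduce his own state only after k rounds of silence, so all
    muddy children step forward together."""
    round_number = 0
    while True:
        round_number += 1
        stepping_forward = [
            i for i, is_muddy in enumerate(muddy)
            if is_muddy and sum(muddy) - 1 == round_number - 1
        ]
        if stepping_forward:
            return round_number, stepping_forward

print(rounds_until_confession([True, True, True, False, False]))  # (3, [0, 1, 2])
```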

A more interesting puzzle, however, is this: Was the father’s opening statement that “at least one child has mud on his forehead” necessary? Reexamining the solution above, we find that the “base case” in which there is only one muddy child doesn’t work without the father’s statement — the dirty child would never come forward, since as far as he knows, no one is muddy. Consequently, the entire chain of inference collapses. So the father’s statement is necessary — but this seems paradoxical, since he merely told the children something they already know! After all, if there are three muddy foreheads, each of the five children knows just by looking at the others that “at least one child has mud on his forehead.” So what information does the father’s statement actually provide?

Think again about the case where only two children have mud on their foreheads, Alex and Bob, and let p represent the statement “at least one child has mud on his forehead.” It is then true that each of the five children knows p even without the father saying so, but the problem is that not everyone knows that everyone else knows p: as far as Alex knows, Bob could be the only muddy one, in which case Bob would not know that “at least one child is muddy” (since Bob would see only clean faces). Likewise, as far as Bob knows, Alex might not know p. This is what prevents them from deducing their own situation. What about our original case? When there are three dirty children, not only does everyone already know p, everyone knows that everyone knows p. However, not everyone knows that everyone knows that everyone knows p! What the father provides, then, is common knowledge: after his statement, they all know that there is at least one muddy child, and they all know that they all know it, and they all know that they all know that they all know it…

A closely related scenario is the coordinated attack problem. Suppose that two generals, each in command of an army division, wish to launch a surprise attack on the enemy. However, the only way for the attack to succeed is if both divisions attack simultaneously; if a single division attacks alone it will be defeated. Unfortunately, the only way for the generals to communicate is by messenger, and the messengers can be lost or delayed along the way. What should the generals do? Suppose that general A sends general B the message: “Attack at dawn on January 1st.” This is certainly not enough to guarantee a coordinated attack, since general A cannot be sure that his message was received. So let’s say that general A waits until he receives a confirmation message back from general B. Can both generals then attack with confidence? No, since general B doesn’t know whether the confirmation he sent was received by general A — and general B knows that general A will not attack without receiving that confirmation. (And general A knows that general B knows this.) So even though both generals know when to attack, and they each know that the other knows, they cannot attack because neither general knows that the other knows that he knows!

This situation is familiar to anyone trying to make an appointment with someone by email. How many rounds of confirmation are necessary to be sure you both know the engagement is on? It’s easy to see that no number of acknowledgments and counter-acknowledgments will allow the parties to achieve absolute certainty. As explained in a paper by Ronald Fagin, Joseph Halpern, Yoram Moses and Moshe Vardi, guaranteeing coordinated attack requires common knowledge, and common knowledge cannot be achieved where communication is not guaranteed. Actually, the situation is worse than that: even in a system where communication is guaranteed, if there is any uncertainty about the delivery time — e.g., even if all messages are guaranteed to arrive in one millisecond or less — common knowledge is impossible to achieve.

Fagin et al. go on to explore various ways out of this apparent stalemate. But if you ever miss a date, you can always claim that you didn’t show up because the other party didn’t acknowledge your acknowledgment of their acknowledgment…

Is predicting human behavior inherently paradoxical? August 7, 2010

Posted by Ezra Resnick in Computer science, Game theory, Puzzles.

Suppose that someone has a method for predicting how people will behave in certain controlled situations. We need not assume perfect clairvoyance; let’s say that past performance has shown the method to be 90% accurate. The predictor invites you into a room where you are presented with two closed boxes, marked A and B. The rules are as follows: you may either take both boxes, or take box B only. Box A definitely contains $1000; the contents of box B, however, depend on the prediction that was made (in advance) by the predictor. If he predicted that you would take both boxes, box B was left empty. If, however, he predicted that you would take only box B, $1 million was placed inside it.

This is called Newcomb’s paradox, created by the theoretical physicist William Newcomb in 1960. Why is it a paradox? At first glance, it seems obvious that you should take both boxes. Regardless of the contents of box B, taking both boxes always gives you an extra $1000! This is the principle of dominance in game theory, and it is difficult to dispute. However, an equally well-founded principle is that of expected utility: when you know the probabilities of the various outcomes, you should attempt to maximize your expected winnings. If you take both boxes, there is a 90% chance you will end up with $1000 (the predictor was right and so box B is empty), and a 10% chance of ending up with $1,001,000 (the predictor was wrong). So the expected utility when taking both boxes is $101,000. On the other hand, if you take box B only, there is a 90% chance of getting $1 million, and a 10% chance of getting nothing, so the expected utility is $900,000! It seems you should take only box B, then. But isn’t it irrational to leave behind the guaranteed $1000 in box A? And round and round we go…
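Here are the two expected-value calculations from the paragraph above, spelled out as a small sketch:

```python
p = 0.9   # the predictor's accuracy

# Take both boxes: 90% he foresaw it (B is empty), 10% he didn't (B holds $1M).
ev_both_boxes = p * 1_000 + (1 - p) * 1_001_000       # 101,000

# Take box B only: 90% he foresaw it (B holds $1M), 10% he didn't (B is empty).
ev_box_b_only = p * 1_000_000 + (1 - p) * 0           # 900,000

print(ev_both_boxes, ev_box_b_only)
```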

As is often the case, the root of the paradox is self-reference. (Is the sentence “this sentence is false” true or false?) In Newcomb’s paradox, the subject knows his actions have been predicted, and this knowledge influences his decision. Therefore, the predictor must take his own prediction into account when making his prediction. In other words, if the predictor wishes to create a computer model which will simulate the subject’s decision-making process, the model must include a model of the computer itself — it must predict its own prediction, implying an infinite regress which cannot be achieved by any finite computer. This inherent limitation is reminiscent of Rice’s theorem in computability theory: it is impossible to write a computer program that can determine whether any given computer program has some nontrivial property (such as the well-known undecidable “halting problem”).

It is tempting to dismiss Newcomb’s paradox as logically impossible, under the assumption that we can never predict with any accuracy the behavior of a person who knows that his actions have been predicted. (Asimov’s Foundation introduces “psychohistory” as a branch of mathematics which can predict the large-scale actions of human conglomerates, but only under the assumption that “the human conglomerate be itself unaware of psychohistorical analysis.”) Nevertheless, William Poundstone (in Labyrinths of Reason) describes a way of conducting Newcomb’s experiment that doesn’t require a computer model to model itself. It does, however, require a matter scanner. We would use the scanner to create an exact copy of the entire setting, atom by atom — two rooms, two sets of boxes, two subjects. (Neither subject knows whether he is the “real” subject or the clone.) We would then use the actual decision made by the first subject as our prediction for the behavior of the second subject.

So what would you choose? Box B, or both boxes?

Is it rational to give in to irrationality? July 24, 2010

Posted by Ezra Resnick in Game theory, Politics, Reason.

Reuben and Shimon are placed into a small room with a suitcase containing $100,000 of cash. The owner of the suitcase offers them the following: “I’ll give you all the money in the suitcase, but only on the condition that you negotiate and reach an amicable agreement on its division. That’s the only way I will give you the money.”

Reuben, who is a rational person, appreciates the golden opportunity presented to him and turns to Shimon with the obvious suggestion: “Come, you take half the amount, I’ll take the other half, and each of us will go away with $50,000.” To his surprise, Shimon, with a serious look on his face and a determined voice says: “Listen, I do not know what your intentions are with the money, but I’m not leaving this room with less than $90,000. Take it or leave it. I’m fully prepared to go home with nothing.”

Reuben cannot believe his ears. What happened to Shimon? he thinks to himself. Why should he get 90%, and I only 10%? He decides to try to talk to Shimon. “Come, be reasonable,” he pleads. “We’re both in this together, and we both want the money. Come, let’s share the amount equally and we’ll both come out ahead.”

But his friend’s reasoned explanation does not seem to register with Shimon. He listens attentively to Reuben’s words, but then declares even more emphatically, “There is nothing to discuss. 90-10 or nothing, that’s my final offer!” Reuben’s face turns red with anger. He wants to smack Shimon across the face, but soon reconsiders. He realizes that Shimon is determined to leave with the majority of the money, and that the only way for him to leave the room with any money is to surrender to Shimon’s blackmail. He straightens his clothes, pulls a wad of bills amounting to $10,000 from the suitcase, shakes hands with Shimon and leaves the room looking forlorn.

Robert Aumann calls this the blackmailer paradox, or the paradox of the extortionist. It is seemingly a paradox because the rational person gets less than the irrational person, suggesting that the most rational thing you can do is be irrational. Aumann takes this as a model for Israeli-Arab negotiations:

Twenty years ago, a brigadier general came here, maybe a major general, who didn’t identify himself. He wanted to talk to me about the negotiations with Syria. He said to me: ‘You know, Prof. Aumann, the Syrians will not give up a single centimeter of land and the reason is that the land is holy for them. Therefore, they will not concede.’ Then I told him about the extortionist. I said to him: ‘The Syrians have succeeded in convincing you because first they convinced themselves.’

What is our problem? That nothing is holy. We don’t manage to convince ourselves that anything is sacred. Not Jerusalem, not the right of return, not even Tel Aviv. We will be prepared to negotiate, and in the end even to give up Tel Aviv. We are rational as can be, and that’s our problem. . . .

There isn’t anything that we can convince the other side is sacred to us, that we’re willing to ‘be killed for it, rather than transgress.’ If there were something like that, then we wouldn’t be in the situation we are in today.

Before considering Aumann’s proposed solution, let us concede that there is indeed a serious problem here which is too often overlooked: the problem of negotiating with someone who is utterly dogmatic and inflexible. If there is truly no evidence or argument that would ever cause our opponent to change his position, then there is no point in talking to him at all, and we will just have to work around him as best we can. Suppose for a moment that the extortionist Shimon is not a human being, but rather a computer, programmed to accept only those divisions where Reuben gets no more than 10% of the money. Should Reuben attempt to argue with the computer screen, trying to convince it to be reasonable? Clearly not. The rational thing to do in such a case is to take the $10,000 and leave, and there is no paradox. The original scenario feels paradoxical because we expect people not to behave like computers, but rather to be reasonable and open to argument. The scary thing, which many well-intentioned people overlook (at their peril), is that there are human beings who have essentially made themselves into robots following a program: people who have surrendered their minds so completely to dogma that they are incapable of entertaining any alternate positions, even for a moment.

What does Aumann suggest we do when faced with such a dogmatic opponent? Apparently, he thinks we should be more dogmatic ourselves! We must distinguish, of course, between genuine dogmatism and “bluffing:” it may be a clever negotiating tactic to initially feign inflexibility so as to improve your end of the bargain, like hagglers in a market. But it seems that Aumann goes further than this, insisting that we must start by convincing ourselves that our goals are “holy” and “sacred.” Some things may indeed be nonnegotiable and worth dying for, but we need to think very carefully about what those things are, and not mimic the dogmatism of our opponents. Aumann’s proposition seems wrong both as a matter of principle and of practice. Take the issue of Jerusalem, for instance. Is maintaining Israeli sovereignty over all of Jerusalem a cause worth dying for? Is it even a just cause? I see no rational basis for such claims. And even if we did convince ourselves that we’re willing to “be killed for it, rather than transgress,” assuming that the other side is just as dogmatic, how exactly is this going to lead to any kind of solution? Even in the face of irrationality, giving up your own rationality is never the answer – even if it somehow allows you to win a battle, you will have lost the war.

This doesn’t mean we should give in to every demand of an extortionist. When placed in the blackmailer paradox against a computer, it makes sense for Reuben to take the $10,000 because that is better for him than getting nothing; however, if the computer demands that Reuben pay an additional $200,000 out of his own pocket, Reuben would have no reason to comply. If giving the extortionist what he wants is worse for us than the alternative, as it often is, then the rational course of action is to turn him down and do the best we can without him. It’s also important to remember that in real life, we usually find ourselves interacting with the same players over and over, so it may be rational for Reuben to leave the first encounter with nothing in order to make Shimon more likely to cooperate in future encounters. This, of course, assumes that Shimon is rational and is capable of changing his original strategy; against absolute dogmatism, unilateral action may be the only option. And in extreme cases, the only alternative to conversation is force. There is no room for negotiation with a suicide bomber.

Dogmatism is a very serious problem, and we must be willing to use force against it when necessary. More importantly, though, we must fight the foundations of dogmatism by promoting rationality, critical thinking, and the free flow of ideas. Not by becoming dogmatists ourselves.

Calling spades spades July 11, 2010

Posted by Ezra Resnick in Game theory, Reason.

I often play online Spades. Spades is a trick-taking card game for four players (like Bridge, Whist, Hearts, etc.), meaning that every round is composed of thirteen tricks in which each player lays one card. The winner of each trick (determined by simple rules) begins the next trick. The first cool thing about Spades is that it’s a partnership game: players seated opposite each other are on the same team. The objective of the game is to be the first team to reach a predetermined score (e.g., 500 points). Points are given at the end of each round based on the number of tricks won by each team relative to their bid – the number of tricks they declared they would take at the start of the round. The scoring rules are what actually determine the character of the game, and the scoring rules for Spades are pretty simple:

  • A team which won at least the number of tricks they bid receives 10 points for each trick they bid, plus 1 point for each additional trick they won (on top of their bid). However, they are also given 1 penalty point (a bag) for each of the additional tricks. For example, if a team bid 3 but won 5 tricks they receive 32 points plus 2 bags. If a team accumulates 10 bags they are penalized 100 points.
  • A team which won fewer tricks than they bid is said to be set, and they lose 10 points for each trick they bid. For example, if a team bid 3 but won 2 tricks (or 1, or none), they lose 30 points. (A short scoring sketch follows this list.)
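Here is a minimal sketch of the round scoring just described (hypothetical code; the 100-point penalty for accumulating 10 bags spans multiple rounds, so it isn't included):

```python
def score_round(bid, tricks_won):
    """Return (points, bags) for one team in one round, per the rules above."""
    if tricks_won >= bid:                      # made the bid
        bags = tricks_won - bid
        return 10 * bid + bags, bags
    return -10 * bid, 0                        # "set": failed to make the bid

print(score_round(3, 5))   # (32, 2): bid 3, won 5
print(score_round(3, 2))   # (-30, 0): bid 3, won only 2
```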

From these rules we can extrapolate some basic strategy. Taking even one trick less than you bid is usually much worse than taking a few extra, so you should only bid an amount you are reasonably sure you can get (when in doubt, underbid). Consequently, the sum of a round’s bids is almost always less than 13 – usually between 10 and 12 (and an occasional sum of 8 or 9 is not unheard of). The asymmetry between overshooting and undershooting also means that it’s often worth taking more tricks than you bid in order to set the other team. This is where things get tricky: the question you are always asking yourself in Spades is, “set or bag?” If you think you have a chance of setting the other team, you may try to take as many tricks as you can, but if you think a set is unlikely then it’s probably better not to take any extra tricks and get your opponents to take all the bags. This razor’s edge is where skill comes into play: you may start a round trying to set your opponents, then switch to bagging them once you realize a set is improbable, or vice versa.

Another thing that makes Spades interesting is the partnership aspect. Your optimal play is obviously dependent on your partner’s cards, and it’s essential that the two of you cooperate on the same plan (e.g., set or bag) – but you don’t know what’s in your partner’s hand (and any discussion of your cards or your preferred strategy is forbidden). Therefore, you must try to infer what your partner has and what strategy he is favoring based on his play, and likewise, you must try to signal your intentions to your partner via your actions within the game. Of course, any such signals can be seen by the opponents as well…

Playing online, you will inevitably encounter some annoying players. Some people can’t resist the urge to berate their partner over an unsuccessful round, even when bad luck is actually to blame. A more interesting example of irrationality, however, is opponents who get upset when they feel you are using a strategy that makes the game “not fun.” David Sirlin, in his “Playing to Win” manifesto, calls such players scrubs. In Spades, scrubs are typically revealed when the sum of bids was low (10 or under), and you have skillfully maneuvered your opponents into taking all the bags. They will then accuse you of being “a bagger” or of playing a “bag game,” i.e., purposely underbidding in order to give the other team more bags. Now, I happen to prefer it when the sum of bids is high, and I usually bid aggressively, but even if I were playing a “bag game” on purpose, this complaint would make no sense. We are all presumably trying to win, so we ought to use whatever strategy works best. If being a “bagger” is a winning strategy (which I don’t think it is), wouldn’t I be a fool not to use it? The thing is, some people only know one strategy, and they get annoyed if their opponent somehow causes it to backfire. But that is the whole point of the game. If the strategy you have chosen is not working well against your opponent’s play, you need to come up with a different strategy. Scrubs will claim that always “playing to win” makes the game less fun, but for me, the fun is in trying to anticipate what the other players will do and figure out how to maximize my chances of success in any given situation. If playing the winning strategy really makes the game no fun for you, find a different game. But don’t be a scrub.