Quickie No. 2

Discussion in 'Blackjack Tournament Strategy' started by Reachy, Dec 6, 2006.

  1. Reachy

    Reachy New Member

    Again, this is more of a reality check than anything else. When you are staring at spreadsheets for any period of time all the numbers start to blend into one and your grasp on reality starts to weaken. And as you all know mine is a rather tenuous grip anyway :D

    Scenario:

    BR2 acting last, final hand, max bet 1000.

    BR1 - BR 2350, bet 655
    BR2 - BR 2000, bet 1000

    BR1 - Hard 12, DD, card face down
    BR2 - Hard 4, ??? (hypothetically assume we can't split)
    Dealer - 2

    Assuming normal tourney rules apply, what would you do if you were BR2? What about if the dealer was showing a 6?

    Cheers

    Reachy
     
  2. KenSmith

    KenSmith Administrator Staff Member

    OK, here goes, assuming dealer H17.

    If surrender is allowed, and BR1 doubled for at least $200, BR2 should surrender for a 60.2% chance of advancing.

    A better play for BR1 in that case is to double for anything between $5 and $195, just to hide the card. The extra chips are meaningless, but the lower amount prevents a BR2 surrender.
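
    To see why $195 is the magic number, the bankroll arithmetic is easy to check (a throwaway sketch, nothing to do with my actual software):

        BR1_BANK, BR1_BET = 2350, 655
        BR2_BANK, BR2_BET = 2000, 1000

        # Surrendering gives up half the bet but locks in the rest.
        br2_surrender = BR2_BANK - BR2_BET // 2       # 1500

        for dd in (195, 200):
            br1_if_loss = BR1_BANK - BR1_BET - dd     # 1500 vs. 1495
            print(dd, br1_if_loss, br1_if_loss >= br2_surrender)

    Doubling for $195 or less leaves BR1 with at least $1,500 even after a loss, matching BR2's locked-in $1,500, so surrender no longer wins outright whenever BR1 loses.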

    So, if BR1 had doubled for $5, now how should BR2 play?
    BR2 should double for enough to take the high back, for a 35.7% chance of advancing.

    The second part of your question has the dealer with a 6 up, and we'll still assume H17.

    BR1's best play is still to double for a small amount to hide the card, and BR2 should double to take back the high. BR2 will advance 44.0%.

    If BR1 makes the worse play as described of doubling for the full amount, BR2 should surrender for 56.5% chance.
     
  3. Reachy

    Reachy New Member

    Thanks Ken

    Now I know my maths is correct, which is great. I was starting to wonder. Did you figure the odds from first principles or do you have an off-the-shelf tourney teaser solver? :D

    Cheers

    Reachy
     
  4. KenSmith

    KenSmith Administrator Staff Member

    What I have is my own software creation, and it has a lot of shortcomings, but for two player games it works pretty well. I'm still working on it.
     
  5. London Colin

    London Colin Top Member

    Software

    Ken,

    Does your creation take a simulation approach or an analytical one? Simulation seems to be the norm, and I've been thinking about whether it would be feasible to put together an analytical version (which Reachy already seems to be doing :) ).

    N.B. Reachy:- You've used the term simulation once or twice, when I'm not sure that's what you really meant.
    AIUI, a simulation involves playing out the same situation and strategy millions of times, with a newly-shuffled deck, and recording the results. The statistics of those results then give a good indication of the probabilities associated with the strategy.

    Analysis, OTOH, would involve combining the probabilities of all the possible outcomes; so that you predict what should happen, rather than record what actually does happen.
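
    To make the distinction concrete, here's a toy version of both approaches applied to the same question - the chance that the dealer busts starting from a 6, with an infinite deck and H17. (A sketch I've knocked together purely for illustration; it's nobody's actual software.)

        import random

        # Infinite deck: ranks 1 (ace) to 9 at 1/13 each, ten-value at 4/13.
        RANKS = list(range(1, 11))
        PROBS = [1 / 13] * 9 + [4 / 13]

        def must_hit(hard, has_ace):
            # H17 rules: hit below 17, and hit soft 17.
            soft = has_ace and hard + 10 <= 21
            total = hard + 10 if soft else hard
            return total < 17 or (total == 17 and soft)

        # Analysis: combine the probabilities of every possible draw.
        def p_bust(hard=6, has_ace=False):
            if hard > 21:
                return 1.0
            if not must_hit(hard, has_ace):
                return 0.0
            return sum(p * p_bust(hard + r, has_ace or r == 1)
                       for r, p in zip(RANKS, PROBS))

        # Simulation: play the hand out many times and count the busts.
        def sim_bust(trials=100_000):
            busts = 0
            for _ in range(trials):
                hard, has_ace = 6, False
                while hard <= 21 and must_hit(hard, has_ace):
                    r = random.choices(RANKS, PROBS)[0]
                    hard, has_ace = hard + r, has_ace or r == 1
                busts += hard > 21
            return busts / trials

        print(p_bust(), sim_bust())

    The simulated figure wanders around the analytic one, and gets closer as the trial count goes up.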
     
  6. KenSmith

    KenSmith Administrator Staff Member

    My software uses combinatorial analysis, not simulation.
     
  7. Reachy

    Reachy New Member

    Terminology

    Thanks Colin, I realise that I mix up the terminology and am trying to put a stop to it ;). You are correct: I am not running a simulation; it is an analysis of all the combinations for 2 players in various commonly occurring scenarios. I am just doing a feasibility study at present, because it would be a big job if I see it through to the end. I want to know whether it's worth the investment of my time.

    If I were to incorporate multiple players, would a simulation become the more manageable option, since each additional player would up the complexity of the problem exponentially? As you know I am a rank amateur at this programming lark and you'd probably wet yourself if you saw the code for my probability calculating program :D

    Cheers

    Reachy
     
  8. London Colin

    London Colin Top Member

    Thanks Ken.
    Possibly. That's one of the issues that I've been pondering.

    I'd say there are two aspects to the increasing complexity. One is the difficulty involved in keeping the logic correct, so that the answers are hopefully valid. The other is the increased processing time that may be required.

    For both simulation and analysis, increasing the number of players complicates the logic if you try to allow for strategy changes in those who act late, having seen the results of the earlier players, and also for what they might do if more than one advances. I think these issues may be marginally less difficult to deal with in the case of a simulation.

    For analysis, I would think there are also additional logical complications that are simply the nature of the beast. By no means insurmountable, but possibly making it difficult to bolt features onto your existing BASIC code.

    The term 'combinatorial analysis' is something I'm familiar with in terms of evaluating a single hand and working out the perfect strategy and its EV, or the EV of following a particular strategy. In that context you are dealing with the probabilities associated with each card that may be drawn by the player and the dealer. My starting point for any software of my own is likely to be the C++ library from Eric Farmer that does this. However, it doesn't use infinite deck calculations but precise ones, which means that each 'what if?' question has to be unwound by returning a card from the hand to the shoe and drawing a different one (and the dealer probabilities must be repeatedly recalculated).
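
    In code, the unwinding I have in mind looks something like this (a hypothetical Python sketch, nothing like Eric Farmer's actual C++ interface) - the same dealer-bust question as in my earlier post, but against an exact single deck rather than the infinite one:

        def must_hit(hard, has_ace):
            # H17 rules, as before: hit below 17, and hit soft 17.
            soft = has_ace and hard + 10 <= 21
            total = hard + 10 if soft else hard
            return total < 17 or (total == 17 and soft)

        def p_bust_exact(shoe, hard=6, has_ace=False):
            # shoe: dict of rank -> number of that rank remaining
            if hard > 21:
                return 1.0
            if not must_hit(hard, has_ace):
                return 0.0
            remaining = sum(shoe.values())
            p = 0.0
            for rank, count in shoe.items():
                if count == 0:
                    continue
                shoe[rank] -= 1              # draw the card ...
                p += (count / remaining) * p_bust_exact(
                    shoe, hard + rank, has_ace or rank == 1)
                shoe[rank] += 1              # ... and 'unwind' it back into the shoe
            return p

        # A single deck with the dealer's 6 already removed:
        shoe = {rank: 4 for rank in range(1, 10)}
        shoe[6] -= 1
        shoe[10] = 16
        print(p_bust_exact(shoe))

    Every 'what if?' at every level of the tree triggers that decrement/recalculate/restore cycle, which is where all the time goes.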

    That must surely be out of the question for multiple players, as it would take forever. So another potential benefit of a simulation might actually be accuracy, since you could simulate a specific number of decks (or composition of remaining cards) if you wished, without a massive increase in the time taken compared to the infinite deck.

    I suppose combinatorial analysis in the context of comparing player outcomes can mean combining the players' and dealer's pre-calculated probabilities of reaching various totals, these being independent for the infinite deck. (Or maybe you could use the probabilities for the specific number of decks, with the initial cards removed, but fixed thereafter.)
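
    Something along these lines, I mean, once everything has been boiled down to final-total distributions (the numbers below are made up purely for illustration, and I've arbitrarily given ties to BR2):

        # Hypothetical pre-calculated final-total distributions (22 = bust).
        # With an infinite deck these are independent, so they just multiply.
        br1 = {18: 0.20, 20: 0.55, 22: 0.25}
        br2 = {17: 0.40, 21: 0.30, 22: 0.30}
        dealer = {17: 0.30, 19: 0.25, 20: 0.20, 22: 0.25}

        def settle(bank, bet, player, dlr):
            if player == 22:               return bank - bet   # player bust loses first
            if dlr == 22 or player > dlr:  return bank + bet
            if player < dlr:               return bank - bet
            return bank                                        # push

        p_adv = 0.0
        for t1, q1 in br1.items():
            for t2, q2 in br2.items():
                for td, qd in dealer.items():
                    # Toy advancement rule: BR2 advances on bankroll ties.
                    if settle(2000, 1000, t2, td) >= settle(2350, 655, t1, td):
                        p_adv += q1 * q2 * qd
        print(p_adv)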

    I think I'm rambling now, so I'll stop ... ;)
     
  9. KenSmith

    KenSmith Administrator Staff Member

    I was considering doing this, and I still may eventually. At this point, I'm doing everything with infinite decks. And, yes, processing time is the killer. When I tried to run a 3 player situation yesterday, it was going to take more than 24 hours to complete. I'm still working on improving performance.
     
  10. Reachy

    Reachy New Member

    What about this!

    Can you not network a group of PCs together, each running a remote "TBJ Simulated Player", to increase speed and power?

    Cheers

    Reachy
     
  11. London Colin

    London Colin Top Member

    BJT@home

    I'm surprised it's as long as that, which is a sure sign that I haven't fully appreciated the complexity of the task. :)

    There's the S word again.;)

    For an actual simulation, a single PC ought to be fast enough. If you did need to speed it up then, since it's all about collecting results, the easiest way to make use of a network of computers would probably be to just have them all run the exact same program and then add up the results. E.g., instead of playing your hand the same way 100 million times on one PC to get a figure for the chances of winning by that strategy, play it one million times on 100 PCs to get the same data.
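
    e.g. something like this sketch, where worker processes stand in for the networked PCs and a simple weighted coin flip stands in for playing out the hand:

        import random
        from multiprocessing import Pool

        def trials(n):
            # Stand-in for 'play the hand the same way n times';
            # each trial here is just a 43%-chance event.
            return sum(random.random() < 0.43 for _ in range(n))

        if __name__ == "__main__":
            with Pool(4) as pool:                    # four 'PCs'
                wins = sum(pool.map(trials, [250_000] * 4))
            print(wins / 1_000_000)                  # the combined estimate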

    For combinatorial analysis you are building a tree of probabilities, so at any branch there is the option to off-load the task of building the sub-tree to another machine and move on to the next possibility. The networked PCs would all be working on just a portion of the overall task, not representing individual players but individual states of the game.

    So, if we were working at the level of individual cards, the initiating PC might start by dealing an ace to player one, and then rather than go on to continue with the scenario of 'player one's first card is an ace', it could ship that whole task to a free PC and move straight to the 'player one's first card is a deuce' scenario.
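
    In sketch form (hypothetical again; solve_given_first_card is a stand-in for analysing an entire sub-tree, and the worker processes play the part of the free PCs):

        from multiprocessing import Pool

        # Infinite-deck chance of each first card (10 = any ten-value).
        CARD_P = {rank: (4 / 13 if rank == 10 else 1 / 13) for rank in range(1, 11)}

        def solve_given_first_card(rank):
            # Placeholder for the whole 'player one's first card is <rank>'
            # analysis; it would return P(advance) under that scenario.
            return 0.5

        if __name__ == "__main__":
            with Pool() as pool:
                results = pool.map(solve_given_first_card, CARD_P)
            # Weight each sub-tree's answer by the chance of its first card.
            print(sum(CARD_P[r] * res for r, res in zip(CARD_P, results)))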

    If we could convince people to stop searching for extra-terrestrials and replace their SETI@home screen savers with TBJ ones then no teaser could survive our scrutiny! :D
     
  12. KenSmith

    KenSmith Administrator Staff Member

    The idea of a computing grid to calculate these scenarios is an interesting one, but it sounds pretty challenging to create.

    To understand the magnitude of the problem, consider the complexity of a three player situation:
    If we assume a $500 bet, and a $5 minimum, there are approximately 120 strategies that can be used for a particular hand. That includes the various hard and soft total targets, and the 100 different double down amounts.

    So, player one examines 120 strategies, each of which leads to a final hand total from stiff to bust (7 values).

    Player two will be confronted with 7 X 101 situations to analyze. (7 hand values for player one X the 101 different possible bet amounts after doubling or not doubling).

    For each of those 707 situations, player two needs to check all 120 or so different strategies to see which one works best.

    Player 3 examines 120 strategies for the 707 X 707 = half million possible situations he'll see. That's 60 million strategies analyzed for player three.

    And, this assumes that the bets are predetermined, and the cards have already been dealt. If I hope to extend this to be able to work on betting decisions, you have to multiply that by every possible starting bet, and every possible two-card hand for each player. I think there are about 35 starting hands, and 100 initial bet sizes in this example, for each player. So, that's 3500 X 3500 X 3500 = 42 billion. And, for each one of those 42 billion starting situations, we'll need to analyze those same 60 million strategies. That ends up being 2520 quadrillion possibilities.
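
    For anyone who wants to check that arithmetic:

        strategies = 20 + 100          # ~20 hit/stand targets + 100 DD amounts ($5-$500)
        hand_values = 7                # stiff, 17, 18, 19, 20, 21, bust
        bets_after_dd = 101            # original bet, or any of 100 doubles

        situations_p2 = hand_values * bets_after_dd       # 707
        situations_p3 = situations_p2 ** 2                # ~half a million
        print(situations_p3 * strategies)                 # ~60 million for player 3

        starts = 35 * 100              # 35 starting hands x 100 initial bets, each player
        print(starts ** 3)             # ~42 billion starting situations
        print(starts ** 3 * situations_p3 * strategies)   # ~2500 quadrillion, give or take rounding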

    I suspect there are some huge shortcuts available, if I can find them.
     
  13. London Colin

    London Colin Top Member

    My Random Thoughts

    You've clearly given this a good deal of thought and of course have infinitely more expertise to bring to bear on the subject than I do, so I feel a little embarrassed to be making suggestions. For what it's worth, though, I'll try and formalise some of the half-formed ideas and assumptions I have been working with ...

    I wasn't anticipating such a complete tree to be formed. In particular -

    For the bet sizes (both DD for less and the initial bet if the procedure is extended) it ought to be possible to identify the important 'milestone' bets at which the player's possible results overlap with those of an opponent, so that you derive a set of candidate bet sizes for which betting more (or less) would make no difference until you reach the next milestone. (This would not be so easy in the case of the initial bet if you are betting before some of your opponents. You have to include some educated guesses as to what they might bet, which of course will be affected by your own bet, leading to a vicious circle!)

    For the other players, I wasn't necessarily thinking in terms of applying the same exhaustive search for the perfect option as I would to the player of 'my' hand, but rather some sort of formulaic strategy that equates to how you would expect most people to play the situation. In fact there would probably be at least two separate strategies defined - 'ploppy' and 'non-ploppy'. :)

    Thinking about this now, I'm not sure how significant the time savings would be from using these sub-optimal strategies, but I think they would be a useful feature in any case. In Wong's book you occasionally see 'If your opponent is a tournament expert do this, else do that' and I would expect to see this kind of divergence reflected in the results that come from employing the different opponent strategies.

    So in your example, I think Player 2 would be examining a maximum of maybe 7 * 14 options, instead of 7 * 101, and probably a lot less. [On the basis that player one, when sizing a DD for less, has two opponents to consider, who might win/lose -2, -1, -1/2, 0, +1, +3/2, or +2 bets, and the purpose of Player 1's bet size must be to stay ahead of one of those milestones in the event that he loses. I've arbitrarily stopped at +/-2, as it's rare to go further.]

    This means that the strategies are reduced from approx. 120 to approx. 35 (assuming 15 possible doubles). And so player 2 will be checking 35 strategies for 98 situations, instead of 120 for 707. And in 'ploppy' mode it might be reduced to just a handful of strategies, perhaps just 'full DD' vs 'stand when BS says so' vs 'don't bust' vs 'hit to same total as Player 1'.
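
    To make the milestone idea concrete, here's a hypothetical sketch using the hand from the start of this thread (BR1 $2,350 betting $655, BR2 $2,000 betting $1,000):

        BR1_BANK, BR1_BET = 2350, 655
        BR2_BANK, BR2_BET = 2000, 1000

        # BR2's possible results, in bets: lose a double, lose, surrender,
        # push, win, blackjack, win a double.
        multiples = [-2, -1, -0.5, 0, 1, 1.5, 2]
        br2_finals = sorted(BR2_BANK + int(m * BR2_BET) for m in multiples)

        # Candidate DD-for-less amounts: the largest double that keeps BR1
        # level with each possible BR2 total even if BR1 loses the double.
        candidates = set()
        for final in br2_finals:
            dd = BR1_BANK - BR1_BET - final
            if 0 < dd <= BR1_BANK - BR1_BET:
                candidates.add(dd)
        print(sorted(candidates))    # [195, 695, 1695] - note Ken's $195 figure

    Any amount between two adjacent candidates achieves the same result against those milestones, so (against them, at least) only the candidates themselves need analysing.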
     
  14. KenSmith

    KenSmith Administrator Staff Member

    All of those are good ideas, and I think applying something like them will be important to my eventual success. Thanks for the input.
     
  15. London Colin

    London Colin Top Member

    You are welcome

    I suspect I learned more from the discussion than you did. :D
     
  16. KenSmith

    KenSmith Administrator Staff Member

    Actually, your last post got me thinking. I believe that I have come up with a good way of determining which strategies can be culled from the list. Implementation will have to wait until my head's a little clearer though. I'm suffering with either a cold or the flu at the moment.
     
  17. London Colin

    London Colin Top Member

    That's good to hear

    (Not the bit about the cold/flu, obviously :) )

    Incidentally, did you start out by considering both the analysis and the simulation approach? If so, what drew you towards the former?

    From my perspective, I like the idea of analysis because the act of trying to design such a system seems like a useful exercise in itself, potentially offering some insights into the decision-making process of human players. A simulation, OTOH, could be more of a 'brute force and ignorance' approach.

    But, in terms of arriving at an answer, what would you say are the practical considerations that favour one over the other?

    (A factor in favour of simulation that has occurred to me is that you don't have to wait until the very end of the computation to get an answer. At any point you can request the current answer, which will become more and more accurate as time goes by and the sample size increases.)
     
  18. KenSmith

    KenSmith Administrator Staff Member

    I like the mathematical purity of an analysis-based solution, but I admit that it would be nice being able to see a simulation slowly converge on better and better answers. I have a fair amount of experience in the combinatorial analysis process, having used that technique for blackjack basic strategy work, and similar work on a few math consulting projects.

    In this case, I'm not sure simulation will be all that much quicker. I think you would still need to go through the various strategies at each player's decision point, and then you would have to simulate enough hands with that decision to make the results statistically significant.
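
    To put rough numbers on 'statistically significant' (just the standard binomial error, nothing specific to my software):

        from math import sqrt

        p, n = 0.60, 1_000_000
        se = sqrt(p * (1 - p) / n)            # standard error of a simulated estimate
        print(f"{p:.2f} +/- {2 * se:.4f}")    # about +/- 0.001 at two standard errors

        # To resolve a 0.1% gap between two strategies you'd want se around
        # 0.00025, i.e. roughly four million hands per strategy, per decision.
        print(p * (1 - p) / 0.00025 ** 2)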
     
