Sunday, July 28, 2013

Decision Modeling and Optimization in Game Design, Part 4: Player-vs-Player Class Balancing

This article is the fourth in a continuing series on the use of decision modeling and optimization techniques for game design.  The full list of articles includes:


The spreadsheet for this article can be downloaded here: link

Spreadsheets vs Simulations


In the previous three articles in this series, we introduced the concept of decision modeling and optimization and the Solver tool available in Excel, and we showed how they can be used to find the optimal tax rates for a city in a 4X strategy game, determine the optimal placement of wormhole teleporters in a space game, and pick the optimal weapon loadout for the SuperTank problem described in Part 1.

A question naturally arises: what about game balancing?  Is it possible to apply these sorts of techniques to the types of difficult balancing problems that you typically find in many different types of games, particularly strategy games, RPGs, and MMORPGs?

The answer is yes, of course, but with a lot of caveats.  Spreadsheets in particular have serious limitations: in most non-trivial cases they can’t accurately represent our game, which makes robust balancing via optimization difficult.  The real balancing problems in the vast majority of games lie well outside the scope of what we can model in a spreadsheet.  A game’s simulation is usually too complex, has too many moving parts, and runs in real time, all of which throws up obstacles to any sort of discrete simulation we might want to do.

So if we wanted to try to use these kinds of techniques to balance classes in an MMORPG like WildStar or a strategy game like Planetary Annihilation, we’d have to integrate them with the game simulation itself in order for it to be in any way accurate or useful.

Also, there are some aspects of balancing that can’t be automated; as we stated in the Disclaimers section of the first article, there’s no way to automatically tune a game’s “feel.”

As a result, the best we can hope to do here is show a simple example that illustrates the overall approach: how you would frame this type of balancing problem and optimize it in Excel.  We’ll show that, at least for a simple combat example, Solver can do a remarkably good job of balancing several RPG classes against each other.  You can then use that structure as a framework for posing the same kind of optimization problem in a broader setting more deeply rooted in a game’s simulation.

Having said that, though, we hope you’ll bear with us through all the caveats and see where a simple example can get us.


“Balancing” is Undefined


There is no single, generally accepted definition for the word “balancing.”  It has many meanings, and the true meaning usually depends on the context of the game in question.  In different settings, “balancing” can refer to setting multiple character classes to be equivalent in capabilities in a role-playing game or team-based action game, balancing some number of opposing forces against each other in a strategy game, and/or adjusting the costs of different units or resources to be equivalent to their utility (among others).

The best definition of “balancing” usually depends on the design goals of the game in question – but since those design goals could be just about anything, there’s no way to determine a priori what balancing really means for games in general outside of a particular game or genre.

Some players tend to assume that balancing in combat means equal damage.  This is particularly the case in massively-multiplayer role-playing games (MMORPGs), where players frequently complain that the damage per second (DPS) of one class is too low or too high relative to another.

Of course, classes can’t be balanced on DPS alone; it’s perfectly valid for one class to have higher DPS than another, and compensate for it with other factors that limit the overall utility of that class, such as lower survivability or lower long-term DPS compared to short-term burst DPS.


The Tiny MMO


Imagine we’re creating a new game, a very simplified massively-multiplayer online role-playing game called “Tiny MMO.”  As part of our initial design, we are trying to balance four classes for player vs player (“PVP”) combat such that despite being different, all four classes are relatively equally capable in combat against all others, and there is no clear “best” or “worst” class to select against the other classes.

Although “Tiny MMO” is a real-time game, each player action lasts exactly 3 seconds, so we can discretize it by representing it as a turn-based game where each “turn” is 3 seconds’ worth of gameplay.

In this game, players will be able to play as any of four different character classes:


  • The Warrior does the most damage
  • The Mage casts spells from a distance and has the greatest range of all four classes
  • The Healer is automatically healed for a certain amount of her total health every turn
  • The Barbarian has the most health of all four of the classes

That’s all we know about these four classes, and we need to set up the initial health (“HP”), damage, healing, and attack range parameters on all four classes.  We need to balance them such that each class is unique and its stats are significantly different from all of the other classes, but each class also ends up as “balanced” as possible against all three of the other classes.

In other words, we are trying to optimize the following table:



For now, we’re using placeholder values, and assuming each class starts with 50 hit points, does 10 damage per turn when attacking, heals 0 points per turn, and has a range of 40 meters.  Each unit moves 5 meters per turn.  Since the design specifies that all four character classes move at the same speed, we’ll treat movement speed as fixed and leave it out of the decision variable table.
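As a rough sketch, the decision variable table can be mirrored in code.  The `ClassStats` structure below is our own illustration, not anything in the spreadsheet; the values are the placeholders from the text.

```python
from dataclasses import dataclass

@dataclass
class ClassStats:
    hp: int       # hit points
    damage: int   # damage per turn when attacking
    healing: int  # self-healing per turn
    range_m: int  # attack range in meters

# The placeholder values from the text: every class starts out identical.
placeholder = {
    name: ClassStats(hp=50, damage=10, healing=0, range_m=40)
    for name in ("Warrior", "Mage", "Healer", "Barbarian")
}
```

These sixteen numbers (four attributes for each of four classes) are exactly the cells the optimizer will be allowed to change.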

Obviously, this is a toy example with a very simplified damage model.  We use a continuous average damage-per-turn value, which ignores the distinction between burst damage and effective long-term damage created by mana and other mechanics that modify a class's attack capabilities.  There is only a single damage type, which is quite unrealistic, since most classes will have several if not dozens of damage types, and we'd need an AI system to decide which attack to use on any given turn.  Damage also has a random element in most games, but we'll ignore that here and assume that the variance of the damage is not large enough to meaningfully alter the outcome of combat between any two classes.

Of course, any balancing we do in Excel isn’t likely to be perfect or match the game’s final balancing; it’s going to have to go through a lot of playtesting.  But if we can take an hour or two to quickly get a good first pass for this simple game in Excel, at the very least it’s likely to get us that much closer to solid settings for the initial balancing pass, which will get us that much closer to where the balancing ultimately needs to be.


The Victory Table


We need to balance the four classes against each other in one-to-one combat.  Since we have only 4 classes (Warrior, Mage, Healer, and Barbarian), there really are only 6 possible 1-to-1 pairings of different classes:

  • Warrior vs Mage
  • Warrior vs Healer
  • Warrior vs Barbarian
  • Mage vs Healer
  • Mage vs Barbarian
  • Healer vs Barbarian
This kind of balancing can be remarkably difficult.  Even with the fairly simple case we have of four classes, it creates six inter-class relationships, like the six lines that can be drawn between four points on a square.
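The pairing count follows directly from the combinatorics: with n classes there are n(n-1)/2 distinct pairings, which a quick sketch confirms.

```python
from itertools import combinations

classes = ["Warrior", "Mage", "Healer", "Barbarian"]
# Every unordered pair of distinct classes -- the six matchups listed above.
pairings = list(combinations(classes, 2))
# 4 classes yield 4 * 3 / 2 = 6 pairings; 5 classes would already yield 10.
```

Each added class makes every existing class's balance a little harder to reason about, which is exactly the interconnectedness problem described next.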



Any time we make even a small change to one parameter of one class, that change affects the PvP balancing between that class and each of the other three.  The number of these inter-class relationships grows quadratically as we add more classes, and decisions based on the PvP balancing of any single pair of classes can be very dangerous when made in a vacuum, without taking the whole of the class balancing situation into account.

Ideally, what we’d like to do is have some sort of victory table like the one below.  If we could model the combat between each of these 6 pairs of units in the spreadsheet, we could then generate some kind of a “score” variable for each of the 6 pairings.  Since higher scores are better, we could then combine all six of those scores to generate our objective function.



Note that in the table above, the cells along the diagonal are all zero because they represent pairings of classes against themselves, which by definition are already balanced.  The cells in the upper-right corner are also zero because they duplicate the pairings in the lower left.

Now let’s set up a model for combat between one class and another.


The “Combat Sim”


We’ll set up each pair of classes against each other at 100 meters.  Each character takes 3 seconds to attack, so we can represent this as a turn-based simulation where each “turn” represents 3 seconds.  Every “turn,” each unit will either attack the other unit if it is in attack range, or keep walking to attempt to close the gap if it isn’t.

The simulation looks like this:



Across the top, it shows the pair of units that are squaring off: in this case, the Mage (class #1) and the Healer (class #2).  Along the left side, it shows the current distance between the two simulated units.

For each unit, the columns are as follows:


  • Max Range: This is the maximum range at which the unit can attack.  It’s drawn directly from the yellow decision variables in our decision variable table.
  • Healing: This is the unit’s rate of healing per turn, drawn directly from the decision variable table.
  • HP: This is the unit’s hit points each turn.  It begins as the corresponding HP value in the decision variable table, but is reduced over time as it’s attacked by the other unit.  It’s also raised every turn by the amount of healing it can apply to itself each turn.
  • Damage: This is the amount of damage the unit applies to the enemy unit per turn when in range.  It falls to 0 when the unit dies.
  • Attacks?: This tests whether the unit is in range.  If so, it will be ‘1,’ indicating that the unit attacks this turn; if not, it will be ‘0,’ indicating that the unit moves closer to attempt to reach the other unit instead.


In this way, both units will start moving toward each other, and then begin attacking until one or both are dead.  Since each unit moves 5 meters every 3 seconds (5 meters per “turn”), the “Range” will be reduced by 10 in each turn that both units are walking toward each other, and by 5 in each turn that only one unit is moving toward the other.  The game itself is designed so that both units can initiate a move at the same time and the moves are then resolved simultaneously, so it’s entirely possible for both units to end up dead simultaneously.
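The turn loop described above can be sketched as a small function.  This is our own approximation of the spreadsheet, not the spreadsheet itself; in particular, capping healed HP at the starting value is an assumption the text doesn't spell out.

```python
def simulate_combat(a, b, start_range=100, max_turns=17):
    """Simultaneous-resolution combat between two stat dicts with keys
    'hp', 'damage', 'healing', and 'range'.  Returns (duration, total_hp):
    the number of turns both units stayed alive, and the combined HP of
    any survivors."""
    hp_a, hp_b = a["hp"], b["hp"]
    dist = start_range
    duration = 0
    for _ in range(max_turns):
        if hp_a <= 0 or hp_b <= 0:
            break
        duration += 1
        a_attacks = dist <= a["range"]
        b_attacks = dist <= b["range"]
        # Attacks resolve simultaneously, so both damage amounts are
        # computed before either is applied.
        dmg_to_b = a["damage"] if a_attacks else 0
        dmg_to_a = b["damage"] if b_attacks else 0
        # Healing applies every turn; capping at starting HP is our assumption.
        hp_a = min(a["hp"], hp_a - dmg_to_a + a["healing"])
        hp_b = min(b["hp"], hp_b - dmg_to_b + b["healing"])
        # A unit that can't attack closes 5 meters instead.
        dist -= 5 * ((not a_attacks) + (not b_attacks))
    return duration, max(hp_a, 0) + max(hp_b, 0)
```

With the placeholder stats (50 HP, 10 damage, range 40, no healing), two identical units spend six turns closing from 100 to 40 meters and then trade blows for five turns, ending with both dead.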

    Next, we need to set up the scoring for this table, and generate a numeric value that indicates how “good” the combat was – in other words, how close it is to reaching our design goals.

    Clearly, we want both units to be dead by the end of combat – or at least, we want both of them to be as close to dead as possible.  If combat is balanced, then both of the competing classes should be able to get one another’s health down as far as possible by the end of their combat.

    However, that, by itself, isn’t going to be enough.  If we only score in that way, the optimizer is just going to ramp up the damage values as high as possible, so that both units kill each other instantly!  (If you’re curious, try modifying the attached spreadsheet so you can see it for yourself).  Clearly, instant death is not what we want: we want both units dead or near-dead by the end of combat, but we also want combat to last a reasonable amount of time.

    In other words, we’re not only trying to ensure that all classes are relatively evenly balanced against each other; we’re also trying to balance them to be interesting, and part of that is ensuring that fights will last the right amount of time.

    In order to generate that score, we set up a few cells just to the right of each table.  Duration indicates how long the combat lasted; it counts the number of rows in the table in which both units were still alive.  Total HP counts the total combined hit points of the two surviving units – ideally, we want this to be 0, meaning that both units are dead by the time combat is over.



Finally, “Score” combines duration and total hit points as ( Duration / ( 1 + Total HP ) ).  Note that we add 1 to the denominator because Total HP can be 0, which would otherwise give us a divide-by-zero error.  In this way, we reward the optimizer both for maximizing combat duration and for minimizing the final combined hit point value.
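That scoring formula is trivial to state in code:

```python
def pairing_score(duration, total_hp):
    # The +1 guards against dividing by zero when both units end at 0 HP.
    return duration / (1 + total_hp)
```

A 17-turn fight ending with both units dead scores 17.0; the same-length fight leaving 16 combined HP on the table scores only 1.0, so the optimizer is pushed toward long fights that still end decisively.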

    (Note that since there are 17 rows in each "simulation" of class vs class combat, this means we are effectively making a design choice here that combat should last roughly 17 rounds.  If we want combat to be longer or shorter, we can change the number of rows, adjust the scoring formulas to match, and re-optimize.)

    Finally, we take these six “Score” values (one for each table) and use them in our “Victory table” above, to show the results of combat between any pair of classes.

We could simply sum up these six scoring values and use that as our final “Score.”  However, if we did that, the Solver would likely not balance the highest and lowest scores of the individual battles, achieving very high scores for some class-vs-class pairings and low scores for others.  That is not what we want; we want all of the scores to be high, across the board.  To fix this, we multiply the sum of the scores by the lowest score in the group (using Excel's MIN() function) to encourage Solver to focus on raising the lowest-valued score.
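As a sketch, the combined objective looks like this; the MIN() multiplier means a uniformly decent set of pairing scores beats a lopsided one.

```python
def objective(scores):
    # Multiplying by the minimum penalizes any single badly-balanced pairing.
    return sum(scores) * min(scores)

# A uniform set of middling scores...
balanced = [3, 3, 3, 3, 3, 3]       # objective = 18 * 3 = 54
# ...beats a set with five great pairings and one terrible one.
lopsided = [10, 10, 10, 10, 10, 1]  # objective = 51 * 1 = 51
```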


    Adding Constraints


    We’re still not done yet.  If we optimize the decision model with the current settings, it will probably not set up our classes correctly – in fact, it will probably just give them all identical HP, Damage, Healing, and Range values in our decision variable table.

And what we want, of course, is for each class to have its own distinct identity.  We want the Warrior to do the most Damage, the Mage to have the highest Range, the Healer the highest Healing, and the Barbarian the highest HP.  We also want to ensure that these margins aren’t small; the classes should differ from each other by a substantial amount.

    In order to ensure this, we’ll set up a small “constraints table.”  This table will ensure that each of our four classes has the appropriate attribute, and then scores a 0 or a 1 depending on whether the constraint is satisfied.



The “Min difference” table on the right specifies the minimum margin by which each class must lead in its signature attribute against all the other classes.  In other words, the Warrior must do at least 4 more damage than any other class, the Mage must have a range at least 10 longer, and so on.
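One cell of that constraints table can be sketched as a function that returns 1 when the named class leads every other class in its signature attribute by at least the required margin, and 0 otherwise.  The stats and margins below are made-up illustration values, not the spreadsheet's.

```python
def leads_by(stats, cls, attr, margin):
    """1 if `cls` exceeds every other class in `attr` by at least `margin`,
    else 0 -- mirroring the 0/1 cells of the constraints table."""
    value = stats[cls][attr]
    return int(all(value >= stats[other][attr] + margin
                   for other in stats if other != cls))

# Hypothetical stats for illustration only.
stats = {
    "Warrior":   {"damage": 18, "hp": 60},
    "Mage":      {"damage": 12, "hp": 45},
    "Healer":    {"damage": 10, "hp": 50},
    "Barbarian": {"damage": 13, "hp": 90},
}
```

Requiring all four such cells to equal 1 is what later lets Solver enforce class identity as a hard constraint.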

    Now that we’ve added these special constraints, it’s time to optimize!


    Solving


    Now we can run Excel’s built-in Solver to try to optimize the initial parameters (see Part 2 for an introduction to Solver and an explanation of how to enable it in Excel).  We set the “Objective” cell to be our “Score” cell that combines the results of all six of the tournaments.  We set the decision variables to encompass all 16 cells in the yellow “Decision variables” table we set up at the beginning.

    We also set up the constraints (in the “Subject to the Constraints” box) as follows:

    • All of the decision cells must be integers with a minimum value of 0.
    • All of the cells in the “HP” column have a maximum value of 200 and a minimum of 30.
    • All of the cells in the “Damage” column have a maximum value of 20.
    • All of the cells in the “Healing” column have a maximum value of 15.
    • All of the cells in the “Range” column have a maximum value of 100.
    • Also, all four of the cells in the special “Constraints” section we set up must have a value of 1 to ensure that these special constraints are satisfied.




      Finally, making sure the “Solving Method” is set to “Evolutionary,” we run Solver.  Note that since this is an evolutionary solver, you may be able to improve the solution you find by running the Solver a second or third time after it finishes, or tweaking the settings (via the “Options” button) for evolutionary optimization.
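Outside Excel, the same search can be approximated with any stochastic optimizer.  The sketch below uses a plain mutate-and-keep-if-better loop, far cruder than Solver's evolutionary method, with `evaluate` standing in for the entire victory-table calculation:

```python
import random

def stochastic_search(evaluate, bounds, iters=2000, seed=0):
    """Maximize evaluate(params) over integer params within (lo, hi) bounds
    by repeatedly mutating the best solution found so far."""
    rng = random.Random(seed)
    best = [rng.randint(lo, hi) for lo, hi in bounds]
    best_score = evaluate(best)
    for _ in range(iters):
        # Mutate each parameter by a small random step, clamped to bounds.
        candidate = [min(hi, max(lo, v + rng.randint(-3, 3)))
                     for v, (lo, hi) in zip(best, bounds)]
        score = evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score
```

Like Solver's evolutionary method, this is stochastic: re-running it (or changing the seed) can turn up a better solution than the first pass found.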

      We should now end up with something like this:



      ...  And as if by magic, Solver has given us a good initial balancing pass.

As you can see, the Warrior now does the most damage, the Mage has the greatest range, the Healer has the most healing, and the Barbarian has the most HP.  You can also scroll down to the individual class-vs-class tournaments to see how the classes fared against each other; most of them are very evenly balanced, with either both classes dying by the end of combat or one class only barely surviving.  All of the tournaments also last a reasonable amount of time, with none of the classes “one-shotting” any of the others.

      Not bad for a few hours of work, is it?


      Conclusion


      In this example, we've set up a simple balancing problem and shown that you can, in fact, solve it through simulation and optimization.  Though obviously a toy example, it shows the power of decision modeling and optimization techniques and could also serve as an inspiration for more exhaustive balancing tools more tightly integrated with your own game simulation.  If nothing else, we hope it will serve as a guide for how to frame this type of problem in practice.

      In the next two articles in this series, we’ll delve into assignment problems, which involve selecting the optimal assignments between two or more sets of entities.  We’ll show how to solve these types of problems and illustrate how we used that approach to design the towers in our iOS/Android strategy game City Conquest.

      -Paul Tozour

      Part 5 in this series is now available here.

      This article was edited by Professor Christopher Thomas Ryan, Assistant Professor of Operations Management at the University of Chicago Booth School of Business.

      Sunday, July 21, 2013

      Decision Modeling and Optimization in Game Design, Part 3: Allocation and Facility Location Problems

      This article is the third in an ongoing weekly series on the use of decision modeling and optimization techniques for game design.  The full list of articles includes:


      Spreadsheets for this article can be downloaded here: (SuperTank) (Teleporters part 1) (Teleporters part 2)

      SuperTank: Solved!

      In the first article in this series, we introduced an example problem for a game called SuperTank. In the second article, we introduced the basic concepts of decision modeling and guided you through a simple example using Excel's Solver tool.

      Now, it’s time to apply the techniques we learned in Part 2 to the SuperTank problem from Part 1, and prove that we can use them to quickly and easily solve SuperTank.  Just to refresh your memory, SuperTank is a game where you’re able to fight with a customizable tank.  A SuperTank looks something like this:



      Each SuperTank can have any number of weapons of five different types, as shown below:



      Your SuperTank can hold 50 tons worth of weapons, and you have 100 credits to spend. Your SuperTank also has 3 “critical slots” used by special weapons like MegaRockets and UltraLasers.

      You can download the spreadsheet for this example here.

      The goal is to pick the weapons that maximize the SuperTank’s damage, while staying within the limits of 50 tons, 100 credits, and 3 critical slots. We also assume that this table properly encapsulates everything we need to know, and that factors such as weapon range, refire rate, and accuracy are either irrelevant or are already properly factored into the “Damage” setting for that weapon.
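Before building the spreadsheet, note that this is a small integer program, and with only five weapon types it can even be brute-forced.  The weapon stats below are hypothetical placeholders, since the real table appears as an image above; only the search structure matches the stated problem.

```python
from itertools import product

# HYPOTHETICAL placeholder stats -- the real values live in the weapon
# table shown in the article.  Format: (damage, weight, cost, slots).
WEAPONS = {
    "MachineGun": (3, 2, 4, 0),
    "Rocket":     (5, 4, 8, 0),
    "MegaRocket": (9, 7, 11, 1),
    "Laser":      (6, 5, 9, 0),
    "UltraLaser": (11, 9, 14, 1),
}
MAX_WEIGHT, MAX_COST, MAX_SLOTS = 50, 100, 3

def best_loadout(max_each=8):
    """Exhaustively search integer quantities of each weapon, keeping the
    highest-damage loadout that fits the weight/cost/slot budget."""
    best, best_damage = None, -1
    for qty in product(range(max_each + 1), repeat=len(WEAPONS)):
        dmg = sum(q * w[0] for q, w in zip(qty, WEAPONS.values()))
        wt = sum(q * w[1] for q, w in zip(qty, WEAPONS.values()))
        cost = sum(q * w[2] for q, w in zip(qty, WEAPONS.values()))
        slots = sum(q * w[3] for q, w in zip(qty, WEAPONS.values()))
        if (wt <= MAX_WEIGHT and cost <= MAX_COST and slots <= MAX_SLOTS
                and dmg > best_damage):
            best, best_damage = dict(zip(WEAPONS, qty)), dmg
    return best, best_damage
```

The spreadsheet-plus-Solver approach described next does the same search far more conveniently, and scales better as weapon types and constraints are added.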

      In order to optimize this, we’ll first enter the table above into a spreadsheet. Then, just below it, we’ll enter a second table that has a set of 5 “quantity” cells to specify the quantity of each of the 5 weapon types.

For now, we’ll enter a value of ‘1’ into these cells just to test that they work properly, but these are our decision cells – we will ask Excel Solver to find the correct values of these cells for us (you can tell that they are decision cells by the yellow coloring, as we are following the formatting guidelines laid out in the second article). To the right of the “quantity” cells, we’ll add calculation cells that multiply the quantity values in these decision cells by the Damage, Weight, Cost, and Critical Slots values from the table above. Thus, each row of this table properly adds up the total damage, weight, cost, and critical slots for all the weapons in that weapon category.



      Finally, we set up a section below that sums up all the quantity, weight, cost, and critical slots values from the table above, and compares those against the maximum weight, cost, and critical slots settings specified in the problem statement (50, 100, and 3, respectively).



Following our formatting conventions from Part 2 of this series, the blue cells along the top are our criteria from the problem statement. The grey cells are calculation cells representing the total weight, cost, and critical slot values based on the summations from the quantity table above (i.e. the totals of the “Weight x Quantity,” “Cost x Quantity,” and “Critical Slots x Quantity” columns). Finally, the orange cell represents the total damage of our SuperTank, based on the total damage from the “Damage x Quantity” column of the table above.

      Before we run ahead and solve this, let’s take the time to make our spreadsheet a bit more user-friendly. We’ll take advantage of Excel’s ability to name each cell to give user-friendly names to these 7 cells in the final calculation table. This isn’t strictly necessary, but in the long run, it will help make our spreadsheet significantly more understandable if we can give cells readable names such as “MaxCriticalSlots” instead of “$F$21.” To do this, we simply select a cell and go to the name entry box above the spreadsheet and just to the left of the formula bar and type in the new name.




      Finally, let’s go to Excel Solver and find the solution (go to the right side of the “Data” tab and select “Solver”; if you don’t see it, go to Excel Options, select the Add-Ins category, ensure that the “Manage” drop box is set to “Excel Add-Ins,” hit “Go…” and ensure that “Solver Add-in” is checked.)

      Under “Set Objective,” we’ll select the orange objective cell and select the “Max” radio button below it. Under “By Changing Variable Cells,” we’ll select the decision cells (the yellow cells in the “Quantity” column of the second table). Below that, we’ll hit the “Add” button to add constraints as follows:
      • The decision cells should be between 0 and some reasonable maximum (we pick 50, even though this is probably a much larger upper limit than we need). It’s also essential to set the “= integer” constraint on each decision cell, since you cannot have a fractional amount of any given weapon, and since Excel Solver assumes any variable is a real number unless you tell it otherwise, it would undoubtedly try to do this if we let it.
• We also set the total cost, total weight, and total critical slots values to be less than or equal to the maximums specified in the problem statement. You can see from the dialog image below that these now have the user-friendly names we assigned earlier, making the dialog much more readable.


      Now we hit the “Solve” button, and after a brief wait, Solver fills in the “Quantity” values for us, giving us:
      • 1 Machine Gun
      • 3 Rockets
      • 2 MegaRockets
      • 1 Laser
      • 1 UltraLaser

      All of this gives us total damage of 83 and uses exactly 50 tons, 100 credits, and 3 critical slots. You can see that the best solution does not change no matter how much time you give Solver to run.  If you reset these values and re-optimize, or if you go to Options and change the random seed, it will still give you these values.  We can't be 100% sure this is the optimal solution, but given that Solver seems to be unable to improve on it after repeated optimization passes, it seems quite likely that this is the real optimum and not just a local maximum.

      Problem solved!


      Additional Uses

      What’s exciting about this is that we've not only solved the problem much more quickly than we could have done by hand, we've also set it up to allow us to test which weapons are most useful with different (weight, cost, critical slots) settings for a SuperTank. This means that we can relatively easily see and measure the effects of various changes to any of these parameters on the SuperTank, and if we wanted to introduce a new alternate model of SuperTank that was lighter, heavier, or had a different number of critical slots, we could do so very easily.

      We can also get a sense of the relative utility of all of our five weapon types as we modify all of these parameters, and quickly identify which weapons are too useful, not useful enough, inappropriately priced for their weight and damage characteristics, and so on.

      Again, the point is that this kind of tool allows us to search through the design space much more quickly than we could do by hand. For any incremental design decision we might consider, whether it be changing any of the parameters of the weapons or the SuperTank itself, adding new weapons or SuperTank models, or adding new parameters (say, a Size requirement measured in cubic meters), this gives us a very easy way to get visibility into some of the ramifications of that potential change.

      To see what I mean, go to the blue “Max Cost” cell and change it from 100 to 99. Then re-run Solver, and you should now get a very different weapon loadout:
      • 0 Machine Guns
      • 2 Rockets
      • 3 MegaRockets
      • 3 Lasers
      • 0 UltraLasers

      This loadout gets a slightly lower damage score (82 instead of 83), but is radically different from the previous loadout.

      If you set Max Cost to 101 or 102 and re-run, chances are that you’ll get a configuration similar or identical to the first one; in either case, damage will remain at 83 (actual results may vary since there are several optimal loadouts in these cases). However, if you set Max Cost to 103, you should get:

      • 1 Machine Gun
      • 4 Rockets
      • 2 MegaRockets
      • 0 Lasers
      • 1 UltraLaser

      … which increases our total damage to 84.

      This is interesting; this weapon loadout is very different from the first two.


      As you can see, we get a surprising result: the optimal choice of weapons in our loadout is highly dependent on the SuperTank's parameters and can change dramatically with even a tiny change in those parameters. This also gives us all kinds of other useful information: all five of the weapon types are useful in at least two of the three SuperTank settings, with Rockets and MegaRockets having clear utility in all three. This seems to indicate that all five weapons are well-balanced in the sense that they are all useful relative to one another, while at the same time remaining unique.

As you can also see, this sort of decision modeling and optimization gives us an excellent way to quickly search the local neighborhood around a design and re-optimize. For some problem types, it can give us visibility into dominant player strategies and exploits that might be difficult or impossible to find any other way.


      Wormhole Teleporters


      After looking at the last two examples (the strategy game tax rate example and the SuperTank), you may think these techniques only apply to cases where users have to deal with numbers. Nothing could be further from the truth! As we’ll see, there are countless instances where we can gain benefits from optimizing design elements that either do not appear as numbers to the user, or do not seem to involve any numbers at all!

You might also be inclined to conclude that decision modeling only applies when modeling the decisions that players will make in our games. This is also incorrect: in some cases, you can also use it to model and optimize your own decisions as a designer.

Assume that you are working on a massively multiplayer space-based role-playing game. One day your design lead comes to you with a look of worry on his face. “We’re just wrapping up a redesign of the Omega Sector,” he says, “and we’re running into a problem. We’re planning to have some wormhole teleporters in that world segment, but we can’t agree on where to put them.”

      “How many teleporters?” you ask.

      “We’re not sure yet. It will probably be three, but it could be anywhere between 2 and 4. We just don’t know right now.” Then he lays down a map that looks like this:


      “What’s that?” you ask.

      “It’s a map of the Omega Sector. Or at least, the star systems the player can visit within that quadrant. We need you to figure out which cells should contain wormholes.”

      “OK, what are the rules for wormhole placement? And can I have a wormhole in the same quadrant as a star system?”

      “We want you to place the wormholes in a way that minimizes the distance between any star system and the nearest wormhole. And yes, you can put them in the same quadrant as a star system; they’re just small teleporters hanging in space, so you can put them anywhere. And don’t forget, we haven’t agreed on how many wormholes this sector should have yet, so try to give me solutions for 2, 3, and 4 wormholes.”

      How would you frame this problem, and how would you solve it?


      Optimize Me Up, Scotty!


      Let’s start by setting up the decision cells. We’ll call the four wormhole teleporters ‘A,’ ‘B,’ ‘C,’ and ‘D.’ We know that each teleporter is essentially nothing more than an (x,y) coordinate on the Omega Sector star map above. We also know that we’ll need some way to specify how many of these four teleporters are actually active, so we’ll add a cell to let us specify the number of teleporters. We will use teleporter ‘D’ only in the case where we are using 4 wormholes, and we’ll use ‘C’ only when we have 3 or more.



      Below that, we’ll set up a table to figure out the distance from each star system to the nearest teleporter. That table looks like this:



      On the left side, in blue, we have the coordinates of each star system on the map. Each star system is represented in one row. We simply typed this in from the Omega Sector map we were handed above.

      To the right of that, we calculate the distance to each of the four teleporters A, B, C, and D. This is simply the Pythagorean theorem, calculated as the square root of the horizontal and vertical distance between the star system and the teleporter:

      =SQRT(($B14-Ax)^2+($C14-Ay)^2)

      (Don’t worry – I promise that this is as complicated as the math will get in this series!)

      We get the X and Y coordinates of each star system from the blue cells in the table above, and we get the X and Y coordinates of each teleporter (the cells named “Ax” and “Ay” for teleporter A in the SQRT() function above) from the yellow decision cells at the top.

      Finally, we take the minimum of these four values in the “Dist to Closest” column, which simply uses the MIN() function to determine the minimum of the four values immediately to its left. We then sum that entire column at the bottom; this sum is our objective cell.

      You may have also noticed that in the screenshot above, the “Dist to D” cells all have the value 99. The reason for this is that we use the “Number of Teleporters?” cell in the section at the top of the decision model to let us tweak the number of teleporters we are considering. If the number of teleporters is 2, we use the value ’99’ in both the “Dist to C” and “Dist to D” columns, while if it is 3, we use ‘99’ in the “Dist to D” column only. This ensures that each star system will ignore any extraneous teleporters when calculating the distance to the closest teleporter in the case of 2 or 3 teleporters.
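(If you prefer code to cell formulas, here’s a quick Python sketch of the same calculation. The star-system coordinates below are made up purely for illustration – the real ones come from the Omega Sector map – but the structure mirrors the spreadsheet: one distance per teleporter, the ‘99’ sentinel for inactive teleporters, a MIN() per star system, and a sum for the objective cell.)

```python
import math

# Hypothetical star-system coordinates for illustration only; the real
# ones come from the Omega Sector map in the article's spreadsheet.
STAR_SYSTEMS = [(1, 2), (4, 7), (9, 3), (11, 10), (6, 5)]

def closest_distances(teleporters, num_active, systems=STAR_SYSTEMS):
    """For each star system, the distance to the nearest *active* teleporter.

    Inactive teleporters get a sentinel distance of 99, mirroring the
    spreadsheet's "Dist to D = 99" trick, so the minimum ignores them.
    """
    results = []
    for sx, sy in systems:
        dists = []
        for i, (tx, ty) in enumerate(teleporters):
            if i < num_active:
                dists.append(math.hypot(sx - tx, sy - ty))  # Pythagorean theorem
            else:
                dists.append(99)
        results.append(min(dists))  # the "Dist to Closest" column
    return results

def objective(teleporters, num_active):
    """The objective cell: the sum of the "Dist to Closest" column."""
    return sum(closest_distances(teleporters, num_active))
```

An optimizer (Solver, in our case) then just hunts for the integer (x,y) pairs that minimize `objective()`.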

      Now, we run Solver, as before:



      The objective cell is the sum at the bottom of our “Dist to Closest” column. Note that unlike the other examples, we want to use “To: Min” in the radio button for this, because we want to minimize the total distance from the star systems to their nearest teleporters, not maximize it.

      Below that, we specify the decision cells (“By Changing Variable Cells”) as the eight yellow decision cells for the X and Y coordinates of wormholes A, B, C, and D. In the constraints section at the bottom, we constrain each of our coordinates to be an integer between 0 and 12. Note that we are using an integer constraint on these decision cells because we are assuming our design lead simply wants to know which cell each teleporter should be in, but we could just as easily skip this constraint if our design lead wanted to know a real-valued location.

      If we set the “Number of Teleporters?” cell to 2, 3, and 4, and re-run Solver at each setting, we get the following configurations:


      Armed with this information, you can go back to your design lead and show the optimal locations to place any number of teleporters between 2 and 4. Here is what these optimal wormhole locations look like on the map (shown in green) for 2, 3, and 4 wormholes, respectively.


      You can download the spreadsheet for this example here.

      Did I Mention There Are Ninjas?

      “OK, that’s terrific,” your design lead replies, but you can see a slight look of anguish on his face. “But, uhh … well, I forgot to tell you some of these star systems are inhabited by Space Ninjas. And we actually want the systems with Space Ninjas to be farther away from the wormholes, because we don’t want players to feel too threatened.”

      “Oh. Well, that kind of throws a monkey wrench into things.”

      “Yeah. Also, some star systems have 2 colonies in them instead of one, so it makes it twice as important for them to be closer to the wormhole teleporters. Or twice as important for them to be farther, in the case of that one star system that has 2 Space Ninja colonies. Here’s what the map looks like now:”



      He continues: “Every negative number is a Space Ninja colony. The system with a ‘2’ has two human colonies, while the ‘-2’ has two Space Ninja Colonies. So, can you tell us where to put the teleporters in this case?”

      “Tell me you've at least decided whether there will be 2, 3, or 4 teleporters by now,” you reply snarkily.

      “No such luck, I’m afraid.”


      Solving For Ninjas

      In order to solve this, we need to add a new column to our table to represent the weightings in the table above. We will call this the “multiplier.” We will then multiply this value by the value in the “Dist to Closest” column.

      When we do this, though, “Dist to Closest” changes its meaning slightly. It’s not really the distance to the closest teleporter any more, since for Space Ninja star systems, it’s -1 times that distance. It’s more of a generalized “score,” so let’s call it that instead.



      In this way, the score now represents an aggregate value. By minimizing it, we ensure that Solver attempts to be as close as possible to human-colonized star systems and as far as possible from Space Ninja-occupied systems simultaneously.
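(In code, this change is essentially a one-liner on top of the “Dist to Closest” column. The multipliers below are invented for illustration – the real ones come from the map your design lead handed you:)

```python
# Hypothetical multipliers for illustration: +1 per human colony,
# -1 per Space Ninja colony, and +/-2 for the double-colony systems.
MULTIPLIERS = [1, 1, -1, 2, -2]

def weighted_score(closest_dists, multipliers=MULTIPLIERS):
    """The new objective: each system's "Dist to Closest" value times its
    multiplier, summed. Minimizing this pulls teleporters toward
    positive-weight (human) systems and pushes them away from
    negative-weight (Space Ninja) systems at the same time."""
    return sum(m * d for m, d in zip(multipliers, closest_dists))
```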

      Now we get the following results:


      As you can see, this gives us a significantly different wormhole configuration in each case from the simpler, pre-Space-Ninja version.


      The spreadsheet for this extended version of the teleporter example can be downloaded here.

      Our decision model was able to solve this non-trivial problem very quickly, and we were able to adapt it to changing requirements just as fast.

      This problem is from a class of problems called "facility location problems," which are very well-studied in the operations management field.  But as you can see, it has potential applications in game design and level design as well, and is easy (to the point of being trivial) to set up in Excel.

      Tune in next time, when we'll apply decision modeling and optimization to a challenging game balancing problem for multiple classes in player-vs-player (PvP) combat for a simplified role-playing game.


      -Paul Tozour


      Part 4 in this series is now available here.

      This article was edited by Professor Christopher Thomas Ryan, Assistant Professor of Operations Management at the University of Chicago Booth School of Business.  

      Friday, July 12, 2013

      Decision Modeling and Optimization in Game Design, Part 2: Optimization Basics and Unrolling a Simulation


      This article is the second in a continuing series on the use of decision modeling and optimization techniques for game design.  The full list of articles includes:


      The spreadsheet for this article can be downloaded here.

      Decision Model Setup

      Now that we've introduced decision models, explained why they’re useful, and discussed some of their limitations, we'd like to illustrate the basic concepts with a simple example.

      Before we do that, though, we need to introduce some layout and format guidelines. Just as with code, spreadsheets can turn into an unreadable mess quickly if we’re not careful.

      Broadly speaking, there are four kinds of cells in our spreadsheets:
      • Decision cells contain the variables we are trying to optimize – in other words, we are going to ask the optimizer to try to find the best possible values for these cells. We will start with 0 or some other appropriate default values in these cells, and then get the optimizer to fill in the correct values for us. In most cases, we’ll also constrain them to a certain range, such as being between some minimum and maximum value, and, in some cases, being integers or binary values. For the sake of consistency and readability, decision cells will always be yellow and will be framed with a black outline. 
      • “Pre-Defined” cells are those whose value has been stated directly in the problem statement. For example, if a problem tells us that a Tootsie Pop weighs 17 grams and each lick removes 0.25 grams, then these two values will be pre-defined. We color pre-defined cells in blue. 
      • Calculation cells are cells whose value is computed from other cells in the spreadsheet, and which don’t fit into any of the other categories. We color them in some shade of grey. 
      • The Objective cell (or “output” cell) is the cell whose value we are trying to minimize (or maximize) when we run the optimizer.   In our examples, there will only ever be a single objective cell, and it will always be colored orange and framed with a black outline.  (Note: There are some advanced solvers that let you support multiple objectives, but that's getting too complicated for what we need to do.)
      When we run the optimizer (the “Solver” tool built into Microsoft Excel), it’s simply going to look at the objective cell that we identify, and then attempt to tweak the decision variables however it can (within the constraints we set up) to either minimize or maximize the value of that objective cell (whichever one we specify).

      Solver doesn't know very much about what sort of calculations are going on under the hood or what the connection is between the decision cells and the output cells; it simply runs one of several available algorithms to try to minimize or maximize the value of the objective cell through a principled search of possible values of the decision cells.  These algorithms ("Simplex LP," "GRG Nonlinear," "Evolutionary") are designed to be much smarter than brute force exploration of all possible choices for the decision variables, and they very often answer large problems with surprising efficiency.
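(To make that black-box idea concrete, here’s a toy stochastic hill climb in Python. This is emphatically not what Solver actually does internally – its real algorithms are far more sophisticated – but it illustrates the pattern: tweak the decision cells within their constraints, and keep any change that improves the objective.)

```python
import random

def hill_climb(objective, n_vars, lo, hi, iters=5000, seed=0):
    """Toy local search, purely for illustration of what a solver does.

    objective: a function of the decision variables (the "objective cell").
    n_vars:    how many decision cells we have.
    lo, hi:    integer bounds on each decision cell (the constraints).
    """
    rng = random.Random(seed)
    best = [rng.randint(lo, hi) for _ in range(n_vars)]
    best_val = objective(best)
    for _ in range(iters):
        cand = list(best)
        i = rng.randrange(n_vars)
        cand[i] = rng.randint(lo, hi)   # tweak one decision cell
        val = objective(cand)
        if val < best_val:              # keep only strict improvements (minimizing)
            best, best_val = cand, val
    return best, best_val
```

Solver’s Simplex LP, GRG Nonlinear, and Evolutionary methods all search far more cleverly than this, but they plug into the model the same way: decision cells in, objective cell out.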

      For example, if we wanted to know how many licks it took to get to the center of a Tootsie Pop, we might set up a spreadsheet that looked like this:


      We could ask Excel’s Solver to solve this, telling it to minimize the “Mass remaining on Tootsie Pop” objective cell, and it would quickly find by experimentation that the value of the yellow decision cell that does this (“Licks it takes to get to the center of a Tootsie Pop”) is 68.

      This is a little bit silly, of course, because it’s obvious from the problem statement that the answer is 17/0.25=68. There’s no need to run an optimizer tool to solve a problem you can solve with basic arithmetic instead.

      In practice, though, most of the problems we face won’t have simple mathematical solutions. They will have multiple decision variables that map to an objective in non-obvious ways, and the mapping between the decision variables and the output will be too complicated to solve by hand with any easily-identified set of mathematical equations (and again, complicated math is something we will studiously avoid in this series).

      Instead, we’ll focus on setting up the problems and letting Solver do the heavy lifting.


      Example 1: Taxes

      For our first real decision model, we’ll show an example of figuring out an optimal tax rate. Nobody likes taxes, but in this case, we’re receiving taxes instead of paying them, so hopefully that will lessen the pain a bit.

      Imagine that we’re building a “4X” strategy game similar to Sid Meier’s Civilization. We’re in the process of designing cities that have a certain level of unhappiness based on their size. “Unhappy” citizens are essentially non-cooperative, and we do not get any tax revenues from them. We can also attempt to raise money from our cities by adjusting our tax rate on a city, but the city’s unhappiness level will grow exponentially as we raise our tax rates, and this might make very high tax levels counterproductive.

      We’ll also assume that we can set the tax rate at 10% increments anywhere between a 0% and a 50% tax rate. Here’s a screenshot showing a similar system from the classic “4X” strategy game Master of Orion 2:


      As designers, we want to ask a simple question: what's the optimal tax rate in the general case?

      This should be a simple problem, since there are only 6 possible tax settings. We could just test each of the 6 settings by hand, find the one that gives us the highest revenues, and be done with it!

      (In fact, we could probably find a mathematical equation to solve this, as with the Tootsie Pop example above, but that would be counterproductive since we’re setting this up to grow into a more complicated model that can’t be reasonably solved with equations. And we’re all about avoiding math in this series, anyway.)

      Let’s start by setting up the problem this way:

      • We have a city of size 12 (representing 12 million people). Those people are represented as 12 discrete “citizens.”
      • Each citizen can be either Happy or Unhappy at any given moment.
      • Happy citizens pay (tax rate x 10) in taxes (so that, for example, a 20% tax rate gives 2 currency units in tax revenue per Happy citizen).
      • Unhappy citizens do not pay taxes.
      • The city has 3 Unhappy citizens who will remain Unhappy regardless of the tax rate.
      • An additional number of citizens will become Unhappy based on the following formula: (Population) x ((Tax Rate) x (Tax Rate)) x 3.5, rounded down to the nearest integer. For our size-12 city, this will give us 0 additional Unhappy citizens at 0% and 10% taxes, 1 additional Unhappy citizen at 20% taxes, 3 additional Unhappy citizens at 30% taxes, 6 at 40%, and 10 at 50%.

      Simple, right?
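(If you’d like to sanity-check these rules outside the spreadsheet, here’s a small Python transcription. It’s just the bullet points above turned into code, with a brute-force loop over the six possible tax levels:)

```python
def tax_revenue(tax_level, population=12, base_unhappy=3):
    """Revenue for one turn, following the rules above. tax_level is 0-5
    (i.e., 0% to 50% in 10% steps), mirroring the yellow decision cell."""
    rate = tax_level * 0.10
    extra_unhappy = int(population * rate * rate * 3.5)  # rounded down
    happy = max(population - base_unhappy - extra_unhappy, 0)
    return happy * (rate * 10)  # each Happy citizen pays (tax rate x 10)

# Brute force over the six possible tax levels:
revenues = {level: tax_revenue(level) for level in range(6)}
best = max(revenues, key=revenues.get)
```

With only six settings, `best` falls out instantly – the point of what follows is that Solver keeps working when brute force stops being practical.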

      We set this up in the attached spreadsheet as follows:


      You may have noticed that we set up the yellow decision cell (“Tax Level (0-5)”) as an indirect way of specifying the tax rate. Rather than specifying the tax rate directly in the decision cell, the “Tax Rate” calculation cell instead takes the Tax Level number from the decision cell and multiplies it by 10%. There is a good reason for doing it indirectly in this way, as we’ll see in a moment.

      Now, we can experiment, and plug in all six possible values for the tax level. We can just type in each of the digits from 0 to 5 in the “Tax Level” cell, and we get the following:


      As you can see, there is an optimal setting for the tax rate: 30%, which maximizes our tax revenue at 18 units.

      Let’s Automate This!

      That was nice, but what if there were more than just six possibilities? What if there were hundreds of possible tax rate settings, or if we had other decision variables we needed to tweak, too? Things would get too complicated to test all the values by hand.

      As we’ll see, that’s exactly why we’re using Solver.

      First, we’ll reset our “Tax Level” cell to zero. Then go to the “Data” tab in Excel and you should see a button called “Solver” on the right side of the ribbon, in the “Analysis” portion of the ribbon.


      If you don’t see it, go to Excel Options, select the Add-Ins category, ensure that the “Manage” drop box is set to “Excel Add-Ins,” hit “Go…” and ensure that “Solver Add-in” is checked.

      When you hit the Solver button, you should see a dialog like the one shown below.


      Let’s go through the steps to set up the Solver dialog now.

      In the “Set Objective” field, we set up the cell that represents what we’re trying to optimize. In this case, we’re trying to get as much tax revenue as possible, so we select the orange objective cell that represents our tax revenue, and then select “To: Max” in the radio button list just below it.

      In “By Changing Variable Cells,” we select the cell(s) we want Solver to figure out for us. We want it to determine the optimal tax rate for us, so we select the yellow decision cell (“Tax Level (0-5)”). If all goes as planned, it should end up setting this cell to ‘3’ for a 30% tax rate, which we already determined was optimal by tweaking it by hand.

      Finally, we need to add a few constraints. Constraints essentially act as conditions on any of the cells in our decision model, and Excel Solver will ensure that it focuses solely on solutions that respect the constraints that you specify. These can include limiting specific cells (usually decision cells and calculation cells) to be above some minimum value and/or below some maximum value, and/or forcing Solver to treat them as integers or binary variables (0 or 1). Constraints are enormously useful for helping ensure that you have a valid model and that Solver only searches through solutions that are actually valid for your problem.

      Solver needs at least a few constraints to help it figure out what the boundaries are on the decision cells – in other words, the minimum and maximum values for each cell. In order to add a constraint, we hit the “Add” button on the right, which brings up a dialog like the following:


      We add two constraints, one to ensure that the “Tax Level” decision cell is constrained to be >=0, and another to ensure that the decision cell is <= 5. Then, we ensure that the “Solving Method” combo box is set to “Evolutionary” and hit “Solve.”

      After Solver runs for 30 seconds or so, it will give us an answer like this:


      Uh-oh … This is a problem! Solver got the correct amount of revenue, but the tax level is wrong. The player can only set taxes at 10% increments, so Solver is clearly setting fractional tax rates, which is something that the player can’t do.

      The solution is to ensure that the tax rate cell is constrained to be an integer. It should be either 0, 1, 2, 3, 4, or 5, with nothing in between.

      Fortunately, Solver gives us an easy way to do exactly that. Open the Solver, press the Add button, select the “Tax Level” decision cell, and then select the “int” constraint from the drop-down box in the center:


      Now, we re-run the Solver, and this is what we get:


      Perfect! Solver got the right answer for us with a very small amount of effort. As we’ll see in a moment, as we scale up to larger problems, we’ll quickly discover that the amount of work that Solver can do for us dramatically outstrips the amount of effort required to tell it what to do.


      A Growing City

      Now let’s extend it with a slightly more sophisticated model of our city.

      In any “4X” strategy game, our cities (or planets, or colonies, or whatever other population units we use) will grow over time. We’ll assume the city has a flat 8% growth rate per turn, beginning with 1.5 million citizens and growing up to a size of 12 million citizens. Our spreadsheet now looks like this:


      Each additional subsequent row of the table represents one turn of the game.

      We've also changed our calculations for the base Unhappiness level. It’s now calculated as one-half of the base population level (in millions), rounded down. This keeps base unhappiness at 0 until the city gets to size 4, at which point it grows linearly with the city size.

      Just as before, we can experiment with the tax levels by manually changing “Tax Level.” We get 0, 102, 190, 222, 144, and 65 currency units of tax revenue, respectively, from each of the tax levels between 0% and 50%.

      And just as before, we can run Solver on it; it will quickly find that the optimal tax rate is the same 30% level, giving us 222 currency units of revenue. Here’s what the Solver dialog looks like:



      Variable Tax Rates

      But of course, the player doesn't play this way. Our simulated “city” sets a single tax rate and keeps it exactly the same every turn of the game. But the actual player can change the tax rate as often as he likes, and he'll often need to tune it as his city grows and circumstances change.

      Wouldn't it be great if we could do more than figure out a single, flat optimal tax rate, and instead determine the optimal tax rate for every turn?

      That would instantly tell us how the best possible player would adjust tax rates.

      It turns out, we can! And now that we've set up the decision model in the right way, we can do it incredibly easily.

      The major change is that we have to remove our decision cell – “Tax Level (0-5)” – and replace it with an entire column of tax level cells, as shown below.



      Now, instead of telling Solver to optimize a single cell, we’ll tell it to optimize the entire “Tax Level” column. Here’s what the Solver dialog looks like – you’ll note that it’s exactly the same as before, except that the variable cells and constraints have changed to represent the entire range of cells in the “Tax Level” column instead of the single cell.


      Sure enough, Solver proves that tweaking the tax rate does make a difference, giving us cumulative tax revenue of 232 currency units. This is only a 5% improvement over the 222 units we obtained with a flat tax rate, but it’s still substantial enough that we know some gamers will achieve it.

      Looking more closely at the solution Solver obtained, we see that it begins with a tax rate of 50%, since the size-1 city doesn't have enough population to generate Unhappiness from it at this point. As the city grows, it tweaks the tax rate between 20% and 30% each turn, depending on which one brings in more revenue.
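(Here’s a rough Python version of this experiment. Fair warning: the rounding details – truncating the population to whole citizens, and the exact base-unhappiness formula – are my own assumptions and won’t perfectly reproduce the spreadsheet’s totals, so treat this as illustrating the structure rather than the exact numbers. What it does demonstrate cleanly: since the tax rate never affects growth in this model, greedily picking the best level each turn is guaranteed to do at least as well as any flat rate.)

```python
TURNS = 27  # roughly how long 1.5M takes to reach 12M at 8% growth per turn

def revenue_for_turn(pop_millions, tax_level):
    """One turn's revenue for the growing city. Truncating the population
    to whole citizens and using floor(pop / 2) for base unhappiness are
    assumptions; the article's spreadsheet is the authoritative version."""
    citizens = int(pop_millions)
    rate = tax_level * 0.10
    base_unhappy = citizens // 2
    extra_unhappy = int(citizens * rate * rate * 3.5)
    happy = max(citizens - base_unhappy - extra_unhappy, 0)
    return happy * rate * 10

def simulate(tax_schedule):
    """Total revenue over the game for a given per-turn tax schedule."""
    pop, total = 1.5, 0.0
    for level in tax_schedule:
        total += revenue_for_turn(pop, level)
        pop = min(pop * 1.08, 12.0)  # 8% growth, capped at size 12
    return total

def best_variable_schedule(turns=TURNS):
    """Greedy per-turn optimum. Because tax never affects growth here,
    picking the best level each turn is globally optimal."""
    pop, total, schedule = 1.5, 0.0, []
    for _ in range(turns):
        level = max(range(6), key=lambda lvl: revenue_for_turn(pop, lvl))
        schedule.append(level)
        total += revenue_for_turn(pop, level)
        pop = min(pop * 1.08, 12.0)
    return schedule, total
```

Note that this greedy shortcut only works because the model has no “knock-on” effects between turns; once tax levels influence growth or happiness in later turns, we genuinely need an optimizer.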

      You can download the spreadsheet for this example here; it splits the three stages of this example into their own worksheets on the spreadsheet (flat tax with a non-growing city, flat tax with a growing city, and adjustable tax rate with a growing city).


      Conclusions

      The solution we found above reveals something interesting: the discrete nature of our game sim, which represents arbitrary groupings of millions of people as discrete “citizens” who can be in only one of two discrete states of happiness, introduces idiosyncrasies in the model. Although the actual game itself needs to do this discretization at some level for the sake of accessibility and playability, clever players and “power gamers” will be able to exploit this artificial granularity to generate advantages over players who don't want to futz around with tax levels every turn.

      This gives rise to an interesting question: Is this what we want? Does this mechanic make players feel that they need to micro-manage their tax levels every turn to play the game? And do we want to enable “power gamers” to be able to game the system in that way, and is the 5% reward they receive appropriate for doing so?

      These are questions I can’t answer. After all, you’re the designer in this case, which means that you set the design goals, so it’s up to you to determine whether this level of exploitation is in line with your goals for the game.

      Of course, this model is only a bare-bones framework. In a real 4X strategy game, you would be able to make all sorts of decisions about how to grow your city and how to create buildings and make other modifications that would affect city growth, happiness, tax revenues, and productivity further on down the line.

      In a future article in this series, we’ll build a similar but much more complex model of an entire planetary colony in a game similar to Master of Orion 2. This example will be far more sophisticated, since we will be able to make decisions every turn that will subsequently affect all of those parameters, such as growth and productivity, so that each decision has “knock-on” effects that affect subsequent decisions. However, we’ll also see that Solver’s evolutionary optimizer is up to the challenge.

      Stay tuned for the next exciting episode, where we’ll fulfill our promise of optimizing the SuperTank weapons loadout in the example we posed in the introductory article.

      -Paul Tozour


      Part 3 in this series is now available here.

      This article was edited by Professor Christopher Thomas Ryan, Assistant Professor of Operations Management at the University of Chicago Booth School of Business.

      The author would like to thank Jurie Horneman and Robert Zubek for their assistance in editing this article.

      Sunday, July 7, 2013

      Decision Modeling and Optimization in Game Design, Part 1: Introduction

      This article is the first in an ongoing series on the application of decision modeling and optimization techniques to game design problems.  The full list of articles includes:


      We’re Searching, Not Iterating

      Most of game design is a process of search.  When we design, we are evaluating many different possible design configurations to solve a given design problem, whether it be the way the rooms in a dungeon are connected, the set of features and capabilities that different types of game agents will possess, the specific “magic numbers” that govern unit effectiveness in a combat system, or even the combination of features our game will include in the first place.

      Just as an AI-driven character will use a pathfinding system in a game to navigate through the game world, design involves navigating through a vast space of possible configurations by taking some initial configuration and iteratively modifying it.  We look carefully at the state of some aspect of our design – whether it be our combat system, one of the parts of our game world, a technology tree in a strategy game, or what have you – and attempt to find a way to improve it by changing that configuration.

      Designers like to use the term “iteration” to describe this process, but “search” would be a more appropriate description.  The truth is that when we “iterate” on a design, we’re experimenting with a game in development.  We are making educated guesses about small sets of modifications that will change the current design configuration into a new design configuration that we believe will better meet our design criteria.

      These “iterations” don’t resemble the generally linear changes that typically occur in “iterations” of computer code; they much more closely resemble a search through a maze, with lots of sharp turns and occasional backtracking.  They often move us forward closer to the goal, but many times it’s unclear whether they’ve improved the game or not, and we sometimes discover that design changes we assumed would improve the game have unforeseen flaws and we need to back them out or try again.

      Game design is an incredibly difficult discipline.  Design is like a dark room full of sharp objects, extraordinarily difficult to navigate safely once we stray from the beaten path.  There are nearly always some painful injuries along the way, especially if we move too quickly.  And we have relatively few tools to light up that dark room, and few well-defined and disciplined techniques for carrying out this process of design search.



      This dark room is the reason we “iterate” -- we don’t always know what the ramifications of our design decisions will be until we try them.  In other words, we are searching (Will Wright even directly referred to it as “searching the solution space” in his 2004 GDC Talk).

      As a result, design is very often a productivity bottleneck, a major source of defects, and the biggest source of risk in game development.  Countless teams have found themselves hamstrung by ill-conceived design decisions, creative thrashing, feature creep, misperception of the target market, or other design problems that resulted in product quality problems.

      Given all of the dangers involved in experimenting with design, it’s no wonder that so many publishers and large developers are so risk-averse, preferring to hew closely to established and well-explored genres, licenses, and genre conventions rather than embracing the well-known risks of design innovation in return for relatively unknown payoffs.  Exploring the dark room is just too risky.

      We would like to find ways to change that attitude.  Rather than simply avoiding innovation, it would be better to find ways to improve our design skills and extend our capabilities, and build power tools to make design innovation safer and more efficient.



      This Series

      This article is the first in a series that will introduce decision modeling, a set of tools for decomposing decisions into formal models that can then be searched to find the most desirable output.

      Decision modeling and optimization are frequently used in management, finance, advanced project planning, and many other fields to improve the decision-making process and solve difficult decision problems and optimization problems by searching through the possible alternatives much more quickly than humans can do by hand.

      Despite all of the potential benefits, decision modeling and optimization seem to be relatively unknown among designers in the game industry.  A recent survey of professional designers on a popular developer forum indicated that only 25% of respondents had even heard of decision modeling, and only 8% had used it in practice.  A similar web-based survey passed directly to designers via Facebook had nearly identical results with a similar number of respondents.


      If used properly, decision modeling can significantly enhance many aspects of the design process:
      • It can help you optimize the configuration of specific design systems or the optimal values of your game’s parameters.
      • It can help shed light on decisions as to what combination of features to include in your game.
      • It can help you model the decisions a player might make, particularly in terms of helping you to identify dominant strategies or ways that players might “game the system.”
      In this series, we’ll provide examples of all three of these usage categories.

      A Definition

      So what is decision modeling?

      Simply put:

      Decision modeling is the process of simulating a decision and then automating the search for its solution.

      We start by defining some sort of decision, attempt to pick out all the relevant factors that go into that decision, build those into a model that accurately represents the decision, and specify a set of input variables and a single output variable.  Then we search for the optimal values of a set of decision variables (or input variables) that produces the best possible output.

      If all goes well, we should be able to search through a much larger number of possible solutions than we could do by hand or with our imaginations.  Although we can't apply it to everything, for the problems where it's appropriate, we can often get better results, get results faster, and in some cases, we can even solve problems that simply can’t be solved any other way.

      Along the way, we also specify a set of one or more constraints that act as boundaries to ensure that our model is valid.  These constraints can limit the range or the type of our input variables or any aspects of our model.
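(Stripped to its essentials, then, a decision model is just three things: decision variables, constraints, and an objective. A minimal brute-force “solver” over that structure might look like this in Python – real solvers search far more cleverly, but the interface is the same:)

```python
from itertools import product

def solve(decision_ranges, constraints, objective):
    """Minimal brute-force 'solver' over a decision model, for illustration:
    enumerate every combination of decision-variable values, discard any
    that violate a constraint, and return the combination that maximizes
    the objective (along with the objective's value)."""
    best, best_val = None, float("-inf")
    for values in product(*decision_ranges):
        if all(check(values) for check in constraints):
            val = objective(values)
            if val > best_val:
                best, best_val = values, val
    return best, best_val
```

Everything in this series – Solver dialogs included – is ultimately just a smarter, faster version of this loop.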

      Why Build Models?

      Have you ever found yourself playing Sid Meier’s Civilization and found yourself wondering, “Hey, wait a minute – what’s the right way to start off my city?  Should I build a Monument first, then a Granary?  Or should the Granary come first?  Or maybe the Temple first, then a Granary?  What’s the best decision?  Is there even a way to answer that question?”

      Also consider combat mechanics in a real-time strategy game.  Balancing the parameters of multiple units in RTS games is a notoriously challenging problem.  What if we had a system that could allow us to speed up the balancing problem by answering questions about our game’s combat balancing without having to playtest every single time?  What if we could ask the system questions like “How many Swordsmen does it take to defeat two Pikemen and three Archers?”  Or, “What’s the cheapest combination of Archers and Catapults that can defeat an enemy Guard Tower?”

      In fact, maybe we can!

      If we can model these design problems in the right way, we might be able to use automated optimization tools to search through the possible answers to find the one that best meets our criteria, without having to play through the game thousands of times.

      Here’s an example of a similar kind of problem – an example that we’ll solve in a future episode of this series.

      Let’s say we have a game called SuperTank.  In SuperTank, we drive a gigantic sci-fi tank into battle against other SuperTanks.  Before each battle, we get to pick the exact combination of weapons on our tank.


      You have 100 credits to spend on your weapon loadout.  Your SuperTank can carry 50 tons worth of weapons, and it also has 3 “critical slots” for use by special high-powered weapons.

      You have the following five weapon types, and you can use as many of each weapon type as you like, or skip any weapons entirely:


      Assume that you want your SuperTank to have the highest possible total Damage value (assume that this represents damage per second, so it properly represents damage applied over time regardless of how quickly the weapon fires).  Also assume that all weapons have the same range, firing arc, accuracy, and rate of fire, so they’re identical in all ways except for the ones shown on the chart above.

      Quick!  How many machine guns, rockets, lasers, etc should you equip on your SuperTank?  What combination of zero or more of each weapon gives us the most damage without exceeding our budgets for Weight, Cost, and Critical Slots?

      See if you can solve it by hand, or with a calculator.

      Can you do it?

      If you try it, you'll quickly see that it’s a surprisingly difficult problem.

      There’s probably a way to solve this with complicated math equations.  But we’re designers, and math just isn’t our thing.

      Also think about how the answer would change if the parameters were different.  Would the answer change if our SuperTank could hold 60 tons instead of 50?  What if instead of having 100 credits to spend, we had 110, or 90 – how would the optimal weapon loadout change?  What if it had only 2 Critical Slots, or 4?

      Now imagine that we had a system that could instantly calculate the highest-damage weapon loadout for any given set of (Weight, Cost, Critical Slots) parameters.  Type in the weapon parameters from the table above, then type in the SuperTank’s parameters (50 tons, 100 credits, 3 critical slots), and BOOM – we see the best possible loadout.

      Wouldn't that be awesome?
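      Here’s a sketch of what such a calculator could look like in Python.  The weapon stats below are hypothetical placeholders (the article’s actual weapon table isn’t reproduced here), and the search is plain brute force over every affordable combination – a real optimizer like Solver searches far more cleverly, but the idea is the same:

```python
from itertools import product

# Hypothetical weapon stats -- placeholders, NOT the article's actual
# weapon table: name -> (damage, weight, cost, critical_slots)
WEAPONS = {
    "Machine Gun": (4, 2, 5, 0),
    "Rocket":      (6, 4, 6, 0),
    "Laser":       (8, 6, 10, 1),
}

MAX_WEIGHT, MAX_COST, MAX_CRIT = 50, 100, 3  # the SuperTank's budgets

def best_loadout():
    """Brute-force the highest-damage loadout within all three budgets."""
    names = list(WEAPONS)
    # Cap each weapon's count using whichever budget it exhausts first.
    caps = []
    for dmg, wt, cost, crit in (WEAPONS[n] for n in names):
        caps.append(min(MAX_WEIGHT // wt,
                        MAX_COST // cost,
                        MAX_CRIT // crit if crit else MAX_WEIGHT))
    best_damage, best_counts = 0, None
    for counts in product(*(range(c + 1) for c in caps)):
        wt   = sum(n * WEAPONS[w][1] for n, w in zip(counts, names))
        cost = sum(n * WEAPONS[w][2] for n, w in zip(counts, names))
        crit = sum(n * WEAPONS[w][3] for n, w in zip(counts, names))
        if wt <= MAX_WEIGHT and cost <= MAX_COST and crit <= MAX_CRIT:
            dmg = sum(n * WEAPONS[w][0] for n, w in zip(counts, names))
            if dmg > best_damage:
                best_damage, best_counts = dmg, dict(zip(names, counts))
    return best_damage, best_counts

print(best_loadout())  # -> (86, {'Machine Gun': 11, 'Rocket': 7, 'Laser': 0})
```

      Re-running best_loadout() after changing MAX_WEIGHT to 60, MAX_COST to 90 or 110, or MAX_CRIT to 2 or 4 answers each of the “what if” questions above in a fraction of a second.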

      We could use this to instantly give us answers to all sorts of useful questions:
      • How does the optimal loadout change as we modify the parameters for a SuperTank?
      • How does the optimal loadout change as we modify any of the weapon parameters?
      • How much maximum damage will a SuperTank do at any given (Weight, Cost, Critical Slots) setting?
      • Are the four weapon parameters (Damage, Weight, Cost, Critical Slots) appropriate and properly balanced for each weapon?
      • Are there any weapons that are overly powerful, and are used too frequently?  If any weapon is so useful that the correct decision is always to use it, then there really is no meaningful decision there.  In that case, we should probably either remove the weapon from the game or rebalance it so that there are some circumstances where the weapon is not useful.
      • Are there any weapons that are underutilized, and are rarely or never used?  Similar to the above, if any weapon is so useless that the correct decision is to never use it, then there’s no meaningful decision there.  In that case, we should either remove the weapon from the game or rebalance it so that there are some circumstances where it makes sense to use the weapon.
      All of these are very important design questions that any designer should want to know the answers to.  Knowing them will be enormously helpful to us in balancing SuperTank.

      In just a few paragraphs, we’ve described a problem that’s remarkably difficult for us to solve manually, but which is trivially solvable with tools already built into Microsoft Excel.

      In a future article, we'll actually build a decision model that can answer all of these questions.

      You’ll see that a model like this can be set up in minutes and allows you to solve this otherwise ferociously difficult problem.  With just a bit of work, we’ll create a power tool that lets us quickly and safely explore the design space.

      Roadmap

      Throughout this series, we’ll illustrate quite a few more sophisticated examples, and we’ll provide reference spreadsheets so that you can do all of these examples yourself with nothing more than a copy of Excel.  Our examples will include, among others:
      • A simple combat example for a strategy game
      • A model for optimizing the coordinates of several wormhole teleporters in a space-based massively-multiplayer game relative to each other and a number of population hubs
      • A model for determining what tax rate to use for a simplified model of a city in order to balance citizen happiness against tax income in a 4X strategy game such as Sid Meier’s Civilization
      • A model for choosing how to assign spells and traits to character classes in a massively-multiplayer game
      • An optimization model for determining the optimal build order for a planetary colony in a ‘4X’ strategy game similar to the classic Master of Orion
      • An example of a team trying to pick the right combination of features to include in a game, and using a decision model to help them make the appropriate trade-offs
      In general, this series will build up from simple examples of finding optimal player strategies in specific game subsystems and then progress toward using decision models to help optimize parameters for game systems and optimizing feature set combinations.

      In each of these cases, we’ll describe the problem, show how to model it in Excel, and solve it using Excel's built-in Solver tool.  In each case, you’ll see that we can do this more easily, quickly, and robustly than you likely could without Solver or an equivalent tool.  We’ll also provide the spreadsheets for each example so that you can download them, try them for yourself, reproduce the results, and experiment with each of the models.

      Also, remember that the underlying representation – whether it’s a spreadsheet, a program in a high-level language, or something else – is irrelevant.  The important thing isn’t whether we do this in Excel and Solver, Java/C++/C#, or anything else, but how we model the problem and try to solve it.

      Why Use Decision Models?

      Some readers may be incredulous at this point.  Building decision models probably seems like a lot of work.  Why go through all the effort when instead, we can use user testing in the form of focus group testing and beta testing?

      Let us state up front, on the record, that decision modeling isn’t applicable to every problem.  Some problems are too complicated or too difficult to model with these techniques, and there are many aspects of design (such as aesthetic considerations, entertainment value, and the "feel" of the game) that are difficult or impossible to model with numbers.  And decision modeling certainly does not eliminate the need for focus group testing, beta testing, or playing your own game daily throughout development.

      But having said all that, it should become clear by the end of this series that decision modeling and optimization methods also give us a unique and remarkably powerful set of tools.  They can fully or partially solve many kinds of problems that can’t reasonably be solved any other way, and they can help provide answers and insights to all sorts of design questions that would be difficult to answer otherwise.
      As with any tool, it’s up to the practitioner to decide when its use is appropriate.

      There are any number of cases where decision models may be inappropriate or too unwieldy to be useful.  But as you’ll see in this series, they’re also surprisingly useful, and the more we design properly at the earliest stages and get the bugs out of our design decisions before we even get into user testing, the more likely we’ll be able to design systems that are solid, entertaining, and bug-free.

      Think about the tools available to a typical programmer.  Programmers have a very difficult job, but they’re also blessed with many tools to help them find bugs before they ever go into testing.  They have compilers that constantly scream at them the moment they make a typo; they use defensive programming practices to expose software defects; they have code reviews to help them identify one another’s defects or call out poor programming practices; and they have many profiling and static analysis tools to help them find all kinds of performance bugs and other defects.

      But designers don’t have any tools like that.  Our job is arguably just as difficult, but we have no compiler to tell us when we’ve made a syntax error.  We have no profiler, no debugging tools, and no static analysis tools.  We have no way to do code reviews since we don’t have any ‘code.’  We write specifications and design docs and that’s about it; we can share design documents and feature specifications among the team and hope they give us good feedback, but for the most part we have to actually get it in the game before we can see if it works or not.

      That makes design incredibly risky, time-consuming, and expensive.

      And just as with programming, human error is a natural and inevitable part of the process, and we need as many high-quality tools as possible to protect ourselves and our projects.

      We are a very long way from having design tools that will support designers’ exploration of the design space at anywhere near the level the way that our compilers, debuggers, profilers, and static analysis tools support programmers’ exploration of the engineering space.  But we’re beginning to see the rise of a few custom game solvers and design tools, including a recently-developed playability checker for a Cut the Rope variant called “Cut the Rope: Play Forever” (link); the abstract game design system Ludi, which generated the board game Yavalath (link); and my own Evolver automated game balancing assistant for the mobile game City Conquest (link).

      Decision modeling can help us move a few more steps toward that level of support and begin to augment and extend designers’ own intelligence with some automated tools.  And given the choice of having the tools or not having them, why wouldn't we choose to have them?

      It’s Not About Spreadsheets – It’s About Models

      This series is written for designers -- and we mean all designers, whether they come from an artistic, programming, storytelling, or board gaming background.  So we're going to keep it simple, and stick to these promises:
      • No code.  We'll keep the articles 100% free of any code and will illustrate all of our examples in Microsoft Excel using the built-in “Solver” utility.  However, it’s important to note that this series is not about spreadsheets or Excel – it’s about decision modeling and optimization.  Every single thing we do in this series can be done just as easily (and sometimes more so) in any high-level programming language.
      • No math (or at least, not anything complicated).  We’re going to keep this series mostly math-free, and we won’t use anything other than basic arithmetic here: addition, subtraction, multiplication, division, and occasionally a square root.  Greek letters will be strictly forbidden.
      If you’re a designer, this series should give you all the tools you need to create decision models yourself, with no need to try to write code or rely on programmers to write any code for you.  If you’re a programmer, this should give you a fairly straightforward guide toward programming your own decision models in any high-level programming language, so that you can then build decision models of your own, either from scratch or by building off of a template that already uses Solver and Excel.

      These articles are intended to be nothing more than a starting point, so that you can take the concepts presented here and choose whether to build them out in Excel, pick another optimization tool, or try to build a solver of your own in a high-level language.  Spreadsheets are a good start, but these decision models are most likely to be useful as a springboard for richer and more sophisticated models that can be integrated with your game architecture.

      Disclaimers

      Before we get too far into the thick of decision modeling, a few disclaimers are in order.  Decision modeling and optimization don't provide any kind of complete system for game design, and we won't be making any claims to that effect.  It’s more helpful to view them as tools that help with some aspects of the design process, and like any tools, they have plenty of limitations.

      Here are some of the limitations you should be aware of:
      • It can be easily misused. Like any tool, decision models can be used inappropriately or incorrectly, and an incomplete or buggy decision model can lead you to incorrect conclusions.  Just as with software, the larger your decision model becomes, the more likely it is to contain bugs.  It’s also very easy to misinterpret what a model is telling you or to build an incomplete model that doesn’t accurately model the decision you're trying to make.
      • It’s complicated (sometimes).  Some design problems are just too complex to be usefully modeled with these approaches.  Many problems have too many moving parts or are too closely integrated with other aspects of the game to be usefully represented in a standalone Excel spreadsheet.  In these cases, you’re left to decide whether to model only part of the system (which may leave you with an invalid / inaccurate model), build a complete model integrated into the game itself (which could be a lot of work), or forgo decision modeling entirely.
      • Not everything can be modeled.  Decision models can’t tell you whether something is fun, whether it’s aesthetically pleasing, whether it "feels" right, or whether it presents the player with a usable and accessible interface.  There’s generally no way to represent these kinds of subjective and aesthetic concerns in a discrete model.  This means that there are clear limits on where decision modeling can and should be used, and it will be much more useful for systems design and optimizing mechanics and dynamics rather than aesthetics.
      • It has limits.  All optimizers have their limits, including the Excel Solver that we will use, and it’s entirely possible to create decision models that have valid solutions but are so complex that no optimization tool can find them.  For large enough numbers of unconstrained inputs, the problem can grow beyond Solver’s ability to search every possible combination of inputs, and instead it must rely on various optimization methods.  As we'll see throughout this series, we can simplify the expression of our models to make them easier for Solver to handle, and the developer of Solver (Frontline) offers a more powerful solver for larger problems, but it's definitely possible to create models that Solver cannot solve.
      • It doesn't guarantee optimality. As a consequence, when we’re dealing with complex models, we can’t always be 100% sure that we’ve found the optimal decision.  Sometimes we have to settle for a near-optimal solution: we spend more time optimizing, or re-optimize from different starting points, until we can say with a reasonable level of confidence that the solution we’ve found is either optimal or very close to optimal.

      Finally, and most importantly:

      • We have to make sure we model the right things.  Not all problems are important enough that they need this kind of effort, and we have to make sure we know our priorities and avoid getting overly-focused on optimizing irrelevant problems while ignoring other, bigger problems that might be much more important.


      Broadly speaking, there are certain things that need to hold true for decision modeling to be useful.  The decision in question has to be something we can encapsulate within some sort of discrete model, and map the result of the decision into a single value.  In other words, we must be able to map a finite set of inputs through a decision model and onto a single output in such a way that either minimizing or maximizing the output value gives us a better decision.

      In cases where there are subjective concerns that can’t be encapsulated in the model – for example, aesthetic considerations or usability / playability concerns – we will need to either cleanly separate those out from the decision model, use decision modeling only as an initial pass, or just abandon the decision modeling approach entirely.

      In order for us to model decisions in a spreadsheet, there’s also a limit on how complex the model can be.  If our game does something very complex, we may not be able to replicate that complexity in Excel.  It’s important to keep in mind, though, that this is only a constraint on the power of the models we can build in Excel, and not on decision models themselves.  You can build vastly more powerful solvers in your own game engine than you can build in a separate spreadsheet, and I hope that this series inspires you to do exactly that.

      On the other side of the coin, all of these limitations hardly make decision modeling useless.  Even in a case where a problem is too complex for decision modeling to tune completely, it can still help you get many components of your design much closer to a correct configuration, and it can help you find and debug a number of basic problems early in development.

      And even when a decision model can’t find the optimal solution to a given problem, either because the problem is too complex or because it involves aesthetic concerns or other subjective human considerations, it can still help you narrow down the solution space, ruling out dead ends and otherwise reducing the complexity of the problem.

      Finally, even if you choose not to use decision modeling and never attempt to optimize any spreadsheets or build your own solvers, an understanding of decision modeling can still help you by changing the way you frame and think about design decisions.

      This series is an exploration.  We will look at many examples of game design problems and explore ways to model and optimize them in ways that offer us powerful design tools.  You may be skeptical, or you may feel more comfortable not using any optimization at all.  But I hope you will bear with me as we explore and see where we end up by the end of the series.

      Conclusion

      In the end, we should want to design correctly.

      Many design questions are subjective, with no “right” or “wrong” answers.  But in some cases – many more than you might think – there undeniably are.  And in those cases, we should want to know how to get the right answer, or at least understand how we would go about defining the “right” answer and searching for that solution if it exists.

      Decision modeling and optimization are powerful tools that can help us accomplish exactly that in many cases.  I believe these tools should be part of every designer’s toolbox.  With a little discipline, it should become clear that these tools have untapped potential to help us explore the dark room of game design more quickly and more safely, and we will show you how with many applications throughout the rest of this series.

      -Paul Tozour


      Part 2 in this series is now available here.

      This article is the first in a series of 10-15 articles on Decision Modeling and Optimization in Game Design that will launch roughly once a week starting in July 2013. We welcome any questions, comments, and feedback, or any special requests for decision models to tackle.

      The author would like to thank Robert Zubek and Jurie Horneman for their feedback on this article.