Chapter 19: Development

For all but a few of my more recent games, Matt Ward developed every title I published through Conflict Simulations LLC. His name appears in the credits of Sedan 1940, 1916, 1864: On to Jutland, and most of the rest of the catalog. Matt brought exhaustive expertise from his work as a developer on the Panzer Grenadier series for Avalanche Press, and he improved my games tenfold from what they would have been without him. I am stating this plainly because this chapter is about why you need a developer, and the strongest argument I can make is my own experience.

Development is the process that happens between “the designer thinks the game is done” and “the game is ready for players.” Look at the credits of any game published by GMT, MMP, Compass, or most other publishers, and you will see a developer listed separately from the designer. That credit exists because the work is real and the skills are distinct.

What a Developer Does

A developer’s job is to make sure the game achieves what it sets out to do while remaining fun and playable as a commercial product. This requires a skillset distinct from design. The designer creates the game. The developer pressure-tests it from the outside, with fresh eyes and no emotional attachment to any particular mechanic.

A developer reads your rules and finds the ambiguities you cannot see because you wrote them. They play your game and find the strategies that break it, the units that are too strong or too weak, the situations where the rules do not cover what happens. They look at your CRT and ask whether the results produce outcomes that match the historical conflict. They look at your OOB and ask whether the unit ratings create the force balance the game needs. They look at your sequence of play and ask whether the phase order creates the right rhythm.

A developer also handles practical concerns that designers tend to neglect. Does the counter manifest match the OOB? Are the setup instructions unambiguous? Does the player aid contain everything the player needs? Are the victory conditions clear and achievable? These are the details that separate a game that plays well on the designer’s table from a game that plays well on a stranger’s table.

The developer is the first reader who does not share your mental model of the game. As I discussed in Chapter 18, rules-writing requires stepping outside your own understanding. A developer forces that step. They read what you wrote, not what you meant, and they tell you where the two diverge.

The Difference Between Development and Playtesting

Development and playtesting overlap but serve different functions. A playtester reports that a 3:1 attack against a unit in a city produced a “defender eliminated” result that felt wrong. A developer looks at the CRT, checks the modifiers, determines whether the result is a calibration error, and recommends a specific fix. The playtester provides data. The developer interprets the data and acts on it.

This requires experience with game systems, familiarity with how different mechanical approaches perform in practice, and enough design literacy to propose solutions that address root causes rather than symptoms. Most playtesters tell you something feels wrong. A developer tells you why it feels wrong and how to fix it.

Self-Development

Self-development is tricky. It may feel productive, but it accomplishes little unless you can genuinely detach yourself from your own design. After spending months building a game, your attachment to specific mechanics, specific unit ratings, and specific rules structures makes it difficult to evaluate them objectively. You know why you made each decision. A developer does not carry that history and can look at each decision on its own merits.

I manage to self-develop my more recent games by tightly scoping the designs and turning a deliberately critical eye on my own work, a discipline that comes from experience and from years of watching Matt Ward find problems I had missed. But self-development is a compromise, not a replacement for having another informed set of eyes on your game. If you can find an experienced developer willing to work on your project, take that opportunity. The game will be better for it.

The difficulty is finding developers. The wargaming hobby does not have a large pool of experienced developers, and the ones who exist are busy. If you are self-publishing, you may need to develop your own game out of necessity. In that case, the best advice I can offer is to create as much distance as possible between yourself and the design before attempting to develop it. Set the game aside for weeks. Come back to it with as close to fresh eyes as you can manage. Read your own rules as if you have never seen the game before. Play it as if you are a stranger encountering it for the first time. You will not fully succeed at this. You will still fill in gaps from memory and overlook ambiguities that are invisible to you because you know what you intended. But the effort produces better results than developing the game while you are still in the middle of designing it.

Working with a Developer

When you work with a developer, the dynamic is collaborative but the roles are distinct. You are the designer. You created the game. You have the vision for what it should be. The developer’s job is to help you realize that vision in a form that works for players who do not share it.

A good developer will challenge your decisions. They will tell you that a mechanic you love does not work, that a subsystem you spent weeks building should be cut, that your unit ratings are wrong. This is uncomfortable, and you need to accept it as part of the process. The developer is not attacking your game. They are improving it. The instinct to defend every design choice is natural and counterproductive. Listen to the problem before dismissing the proposed solution. If you disagree with the fix, engage with the problem. Maybe a different fix exists. But the problem itself is data, and ignoring it because you do not like the diagnosis is how games ship with known issues.

A bad developer will try to redesign your game. Development is refinement. If a developer wants to overhaul your core systems, they are designing a different game rather than developing yours. A developer tightens your rules, balances your systems, polishes your game. They do not change what the game is about. If the development process is pulling the game in a direction you did not intend, push back. The game is yours.

The practical workflow varies by publisher. When I worked with MMP on Rostov ’41, the process was my first glimpse of a professional development workflow. MMP used Basecamp, the same project management tool I now use for my own company. I would send my ideas and notes as the designer. The development team, led by Lee Forester with contributions from Carl Fung and others, would reply with what worked and what would not, then coordinate playtesting based on those assessments. Design ideas flowed in one direction, development feedback flowed back, and the playtesting schedule was built around the current state of the game rather than testing whatever happened to be ready.

Watching that process taught me how a professional company does it correctly. The designer proposes. The development team evaluates. Playtesting generates data. The developer interprets the data and recommends changes. The designer approves or negotiates. The cycle repeats until the game is ready for production. Each role has a defined function and the workflow prevents any single person from operating in isolation.

Compare that to self-publishing through CSL, where I was the designer, Matt Ward was the developer, and the communication was less formal but the roles were the same. Matt would go through the game systematically, identify problems, propose fixes, and send me a revised rulebook or a list of recommended changes. I would review his work, accept most of it, push back on the occasional change I disagreed with, and we would iterate until we were both satisfied. The process was smaller in scale but the principle was identical: the designer creates, the developer refines, and neither role replaces the other.

The Final Checklist

Before a game goes to production, someone needs to verify that every piece fits together. This is development’s last function, and skipping it leads to the kind of production failures discussed in Chapter 18.

Has every rule been tested? Not in isolation, but in the context of a full game. Rules that work in theory can break when they interact with other rules under conditions that only arise during actual play. If a rule has not been tested in a full game session, it has not been tested.

Does the OOB match the counter manifest? Every unit in the OOB should have a corresponding counter, and every counter should appear in the OOB. Mismatches here produce games where players cannot find a counter the setup requires, or where extra counters exist that the rules never reference.
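This cross-check is mechanical enough to automate if you keep both lists as plain text, one unit identifier per line. The Python sketch below assumes that format; the function names and the file layout are my illustration, not a standard tool, and the same comparison can of course be done by hand with two printouts and a pencil.

```python
def parse_units(lines):
    """Normalize one unit identifier per line; skip blanks and comment lines."""
    return {ln.strip() for ln in lines
            if ln.strip() and not ln.lstrip().startswith("#")}

def cross_check(oob_lines, manifest_lines):
    """Compare an OOB against a counter manifest.

    Returns (missing_counters, orphan_counters):
    units the OOB calls for with no counter, and
    counters printed that the OOB never references.
    """
    oob = parse_units(oob_lines)
    manifest = parse_units(manifest_lines)
    return sorted(oob - manifest), sorted(manifest - oob)
```

Run it once against the final counter art, not an earlier draft: `cross_check(open("oob.txt"), open("manifest.txt"))` (filenames hypothetical). Either list coming back non-empty is exactly the mismatch described above.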

Are the setup instructions unambiguous? Can a player who has never seen the game before place every unit on the map without guessing? Setup instructions that reference “the area around Rostov” are ambiguous. Instructions that say “hex 46.09” are not.

Does the CRT produce historical results across a reasonable number of plays? Not every game will produce the historical outcome; that is the point of playing. But the range of plausible outcomes should include the historical result, and the CRT should not consistently produce results that are wildly ahistorical. If your game about Sedan in 1940 consistently produces French victories with the historical setup, something is miscalibrated.
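One way to ground that judgment is to tally result frequencies for a column directly rather than trusting your feel for the odds. The sketch below rolls a hypothetical 3:1 column many times; the column values, result codes, and modifier are invented for illustration and are not taken from any published CRT.

```python
import random
from collections import Counter

# Hypothetical 3:1 column mapping a d6 roll to a result code:
# AE = attacker eliminated, EX = exchange, DR = defender retreats,
# DE = defender eliminated. Values are invented for illustration.
CRT_3_TO_1 = {1: "AE", 2: "EX", 3: "DR", 4: "DR", 5: "DE", 6: "DE"}

def result_distribution(column, trials=10000, modifier=0, rng=None):
    """Roll the column repeatedly and return each result's frequency."""
    rng = rng or random.Random()
    tally = Counter()
    for _ in range(trials):
        roll = rng.randint(1, 6) + modifier
        roll = max(1, min(6, roll))  # clamp the modified roll to the table
        tally[column[roll]] += 1
    return {result: n / trials for result, n in tally.items()}
```

If a "defender eliminated" result should be rare against a dug-in city defender, the frequencies with the city modifier applied tell you whether the table actually delivers that, before a single playtest session is spent finding out.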

Do the player aids match the current rules? Every modifier, every terrain cost, every CRT column on the player aid should match what the rulebook says. Any discrepancy between the two produces confusion during play and errata after publication.

Is the rulebook internally consistent? Does it use the same terminology throughout? Do cross-references point to the correct case numbers? Does the sequence of play in Section 4.0 match the phase order used in the rest of the rules?
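Cross-reference checking is another task a script can take off the proofreader's plate. The sketch below assumes SPI-style case numbering, with a rule defined as "4.2" at the start of a line and referenced in the text as "see 4.2" or "per 4.2"; both patterns are assumptions about how your rulebook happens to be formatted, so adjust them to match your own conventions.

```python
import re

# A case is "defined" where a number like 4.2 opens a line, and
# "referenced" where the text says "see 4.2" or "per 4.2".
# Both patterns are illustrative assumptions about the rulebook format.
DEFINITION = re.compile(r"^(\d+\.\d+)\b", re.MULTILINE)
REFERENCE = re.compile(r"\b(?:see|per)\s+(\d+\.\d+)\b", re.IGNORECASE)

def dangling_references(rules_text):
    """Return case numbers that are cited but never defined."""
    defined = set(DEFINITION.findall(rules_text))
    referenced = set(REFERENCE.findall(rules_text))
    return sorted(referenced - defined)
```

A dangling reference usually means a case was renumbered or cut during development and a citation elsewhere in the rulebook was never updated, which is precisely the kind of error that survives into print.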

This checklist is not exhaustive, but it covers the categories where errors most commonly survive into published games. A developer working through this list catches problems before they cost time, money, and the publisher’s reputation. A designer working through it alone catches some of them. Either way, the list needs to be worked through. Skipping it is how games ship with playtest maps.

Case Study: Rostov ’41 and MMP

Rostov ’41, my Standard Combat Series game on the German drive toward Rostov-on-Don in late 1941, was published by Multi-Man Publishing. It was my first game with a major publisher, and the development experience shaped how I think about the entire process.

The SCS is Dean Essig’s system. Designing within it meant working inside an established rules framework where the core mechanics (movement, combat, supply, exploitation) were already defined. My job as the designer was to build the game-specific rules on top of that framework: the scenarios, the OOB, the special rules that capture what makes the Rostov campaign distinct from other SCS titles. The weather system, which transitions from clear to mud to freeze across the game’s timeline, was one of those specific elements. The mud rules restrict exploitation, overruns, air strikes, and artillery, simulating how the rasputitsa bogged down operations. The deep freeze opens major rivers to crossing but cripples German armor, reflecting the mechanical failures that plagued Panzer formations in the Russian winter.

Lee Forester led the development. The MMP team went through the game with a rigor I had not experienced in self-publishing. They tested scenarios systematically, tracking whether the victory conditions produced competitive games across multiple plays. They checked the OOB against historical sources and flagged discrepancies. They tested the weather transition rules to make sure the mud slowed both sides enough to matter without grinding the game to a halt, and the freeze shifted the balance toward the Soviets without making the German position hopeless.

Carl Fung contributed to playtesting, research, and proofreading. The MMP process involved multiple people checking each other’s work, which catches errors that any single person would miss. A designer checks their own OOB and finds it correct because they built it. A second person checks the same OOB against the counter sheet and finds a unit that does not match. A third person reads the setup instructions and finds a hex reference that points to the wrong location. Each additional set of eyes catches a different category of error.

The published game benefited from this process in ways that are difficult to quantify but easy to feel when you play it. The rules are tighter than they would have been without development. The scenarios are more balanced. The special rules interact cleanly with the SCS framework. The credits list reflects the reality: series design by Dean Essig, game design by me, development by Lee Forester, with a team of playtesters and proofreaders who each contributed to making the finished product better than what any one person could have produced alone.

Working with MMP taught me that development is not an optional step for designers who lack confidence. It is a standard part of professional game production, performed by people with specific skills, producing results the designer alone cannot replicate. Major publishers budget time and resources for development because the games that skip it show the gaps.