Final Thoughts About Boxed In

Boxed In came from wondering what a competitive version of Shut the Box might look like. Giving each player their own tray made the game immediately more interesting, and most of the development came from quick playtests with ChatGPT—mainly experimenting with doubles penalties, working through pacing issues, and testing ways to keep the game from stalling out.

We found a few solid ideas, like the Stalemate Release rule, but I never quite reached a final version that felt fully balanced. Still, the process paid off. A lot of what we learned while testing Boxed In directly shaped the design of Reactor 21, which grew out of the same experiments but landed in a much stronger place.

In the beginning, both players are just getting their boards started. You roll, claim a few easy tiles, and see what kind of shape your board is taking. It’s mostly about opening things up and seeing where the numbers fall.

As the game settles in, you’re making small adjustments based on what the dice give you. The doubles effects add a little movement, but most of the time you’re just trying to keep your board flexible and avoid boxing yourself into a corner.

Toward the end, there are fewer open spots and each roll gives you a couple of decisions to think through. The Stalemate Release rule helps keep things moving, and you’re mostly trying to keep the board workable long enough to reach your goal.

The boards were purchased directly from Amazon. I’ve attached a screenshot of the product photo from the website.

And this is a screenshot of how my playtest with ChatGPT looked on my screen…

Final Thoughts About Reactor 21

Reactor 21 changed quite a bit as I tested different versions of it. The basic idea was always there—two players trying to keep a failing reactor stable—but it took some back-and-forth to figure out what actually made the game interesting. Early versions had the right intention, but some of the mechanics didn’t create the amount of teamwork or pressure I wanted. The game felt like it needed a bit more structure around how instability spreads, what happens during a meltdown, and how the players recover from setbacks.

Most of the improvements came from simply seeing how people reacted to certain moments in the game. Some rules felt too loose, and others were a little unclear in how they resolved. Adding the Nuclear Waste pile, tightening the meltdown rules, and clarifying how cards move between piles helped everything feel more intentional. The goal was always to keep the experience focused on communication and shared decision-making, and those adjustments moved the game in that direction.

During all of this, ChatGPT was helpful for keeping things organized. Any time I adjusted a rule or tried a different way of handling a reactor event, I used ChatGPT to help rewrite the sections cleanly, make sure the terminology stayed consistent, and compare versions so nothing got lost. It also made it easier to step back and look at each revision as a whole instead of just patching small pieces. The mechanics themselves still came from testing and intuition, but having a tool to structure everything made the development process a lot smoother.

Reactor 21 ended up feeling more balanced and readable because of that steady cycle of testing, revising, and tightening the language around the rules.

The three acts of the game are as follows:

Act I – Getting your footing

The game starts off pretty gentle. You’re drawing cards, placing them where they fit, and getting a feel for how the reactors behave. Most cards go somewhere without much trouble, and the token tracks are empty, so nothing feels dangerous yet. This is where you learn the rhythm: keep totals tight, stabilize when you can, don’t waste options.

Act II – Things start heating up

Now the reactors are filling up, and suddenly every card matters. A placement that was easy earlier now feels risky. You’re choosing between Instability and Meltdown more often, and both choices actually hurt. The Nuclear Waste Pile kicks in and you start to feel the deck thinning out. This is where the team talks things through, plans moves, and tries to stay one step ahead of the system.

Act III – Hold it all together

By the end, everything’s tense. One bad draw can end the whole run, and every card feels like it might be the last piece you need—or the thing that breaks the grid. You’re trying to lock down those last stabilizations before either track fills up. When the final reactor hits 21, it feels earned; if the system blows, it’s usually by a hair.

(Final thought created with the assistance of AI, using my input)

Final Thoughts On A Game About Colors, More Or Less

As I refined A Game About Colors, More or Less, the color system became one of the most important aspects of the design. Early versions relied on fully saturated colors, which made the comparisons visually clear and, in many cases, too easy. During testing, it became obvious that players could identify the stronger or weaker color channels with very little effort, which reduced the level of deduction the game was meant to encourage.

To address this, I shifted the palette to include added black (K) values between 30% and 70%. Lowering saturation created a more subtle, more challenging set of swatches. Colors that once felt predictable became more ambiguous, and players had to make more thoughtful evaluations based on small differences, rather than relying on obvious saturation cues. This adjustment aligned the visual experience more closely with the intentions of the mechanics.
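The K adjustment can be sketched in code. This is a minimal, hypothetical Python example, assuming swatches stored as RGB tuples and modeling added black (K) as uniform darkening of each channel; the helper name and color values are illustrative, not part of the actual deck files.

```python
# Hypothetical sketch of the palette shift described above.
# Assumption: adding K to a CMYK mix is approximated here by
# uniformly scaling an RGB color toward black.

def add_black(rgb, k):
    """Darken an RGB color by a K (black) fraction between 0 and 1."""
    if not 0.0 <= k <= 1.0:
        raise ValueError("k must be between 0 and 1")
    return tuple(round(c * (1.0 - k)) for c in rgb)

# A saturated swatch at the two ends of the 30%-70% K range:
print(add_black((200, 40, 120), 0.30))  # (140, 28, 84)
print(add_black((200, 40, 120), 0.70))  # (60, 12, 36)
```

Darkening every swatch into this band is what makes the channel differences subtler: the absolute gaps between channels shrink, so players can no longer spot the dominant channel at a glance.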

Throughout the development process, ChatGPT was used as a collaborative tool to help build, refine, and organize the rule set. It played a role in structuring the language of the rules, maintaining consistency across versions, documenting changes, and evaluating how each update affected clarity and player experience. It was also useful for keeping a clean version history and ensuring that revisions—such as the shift to a reduced-saturation deck—were incorporated accurately and consistently. The core design decisions remained my own, but ChatGPT helped make the documentation process more efficient and reliable.

This combination of iterative testing and structured rule development resulted in a color system that better supports the game’s deductive, perception-based gameplay.

Act I – Getting a feel for the deck’s color language

Early on, players are mostly just getting acquainted with how the deck moves. They make a guess, flip the card, and start noticing which kinds of shifts catch their eye — a bump in brightness, a little pull toward red, or a change in saturation that didn’t seem obvious at first. It’s basically a warm-up for the eyes, where players start realizing that the game isn’t about naming colors; it’s about noticing how they behave.

Act II – Learning what actually matters in a swatch

Once everyone has a few cards in front of them, they naturally start leaning on simple bits of color theory — whether they mean to or not. Some players pay attention to value first, because brightness jumps out. Some start tracking saturation because muted colors hide shifts better. Others zero in on hue and notice how small moves between neighbors (like teal to blue-green) feel trickier than big jumps. This is where players start building their own internal system for judging the cards, one clue at a time.

Act III – Stay consistent with the system you’ve built

The game doesn’t suddenly get more intense toward the end — it just asks you to stick with whatever approach you’ve developed. By this point, players have their own way of reading the swatches, and the last part of the game is about trusting that instinct. Maybe you’re watching for low-saturation curveballs, or maybe you’re checking how the brightness sits against the last few cards you saw. It’s steady, calm decision-making — more about consistency than pressure — and the satisfaction comes from seeing how well your eye held up across the whole run.

Final Thoughts On Race to 65

Race to 65 didn’t change too dramatically as it developed. Most of the core structure was there from the beginning; the main work was just tightening the rules and making sure everything felt clear and consistent. A few of the early versions had small gaps or places where players weren’t totally sure how to handle certain situations, so the updates were mostly about smoothing out those rough edges.

The biggest adjustments were clarifying how tiles flip, how players advance toward the target number, and how the end-of-game callout works. These weren’t major changes, but they helped the game run more cleanly and made the turns feel more intentional without adding complexity.

ChatGPT was helpful mostly on the documentation side—rewriting sections for clarity, keeping the terminology consistent, and making sure each version lined up with the previous one. The game itself didn’t go through big mechanical shifts, but having support to organize the rules and clean up the language made the whole process easier.

The three acts of the game are:

Act I – Getting started

The game opens in a pretty relaxed way. Players start flipping tiles, getting a feel for their numbers, and easing into the rhythm. There’s no pressure yet—just settling in and seeing how the early moves shape things.

Act II – Building toward the goal

As the game moves along, players start paying closer attention to their totals and making more thoughtful choices. It’s still simple and approachable, but you do get that feeling of trying to outpace the hourglass a little. Small decisions start to matter, and players begin watching how close everyone is getting.

Act III – Making the final call

The endgame comes into focus once players approach the target number. At this point, the game turns into a light race against time and each other—just trying to hit the number cleanly without going over. It’s not intense or heavy; it’s more like that moment in a puzzle where you can feel you’re close, and you’re trying to line everything up just right before someone else finishes.

Prototype – Dessert Dash

2-person game (Kaelin and Madison)

Rules:

Objective: 

Be the first to finish your stack of ice cream dishes. 

Materials:

1 deck of 60 cards

Setup: 

Shuffle the deck and deal 30 cards to each player.

Gameplay: 

Flip over two cards and place them face up in the middle, between the two players’ decks.

There are no “turns.” The players race to be the first to finish their deck by rapidly matching the flavor, type of dish, or number of dishes on their card to the corresponding feature on EITHER of the cards that are flipped up in the middle.

As the game progresses, the face-up cards change as players place new cards on top of them. Keep playing matching cards as fast as you can, whenever you can.

Winning:

The game ends when one player finishes their stack. That player is the winner. Hooray!

Changes made:

We edited the rules during prototyping to spell out the simple mechanics – at one point a group somehow played the game completely wrong, so we tightened the wording.

Changes TO make:

We’re going to tweak some of the coloring on the cards to be more consistent – the blue ice cream cups threw a few people off about which type of dish they were.

Thoughts about Playtesting:

Most people understood the concept right away, while one group totally didn’t – so we clarified the rules until everyone could follow them. It’s interesting to see how people interpret rules, or skip reading them entirely when they think they already know how the game works.

Game Card Images: