Meta-matching

In hunts with more than one metapuzzle, meta-matching refers to a hunt design in which the exact set of puzzles that feeds into each meta is not given, and teams must figure out the correct groupings of answers that go with each meta.

There may be external information (for instance, the titles of the puzzles) guiding this matching, or the matching may be motivated solely by commonalities found within the answers themselves. This makes solving the metas more difficult, but as more groups are found, additional groupings and commonalities can become clearer.

Background

Meta-matching does not have an extensive history; the first major hunt to attempt it was the 2004 MIT Mystery Hunt, which planned a 4-puzzle meta-matching round that would give solvers an item used in the final runaround. However, despite the puzzles still being functional and present in the final product, the round was skipped due to time constraints. Later on, the 2008 MIT Mystery Hunt would feature the first meta-matching round to actually be used, in the form of its 'little blue book'.

Outside of the MIT Mystery Hunt, meta-matching started to grow in popularity in the mid-2010s with Foggy Brume's Puzzle Boat series: starting with the second iteration, every following edition but one has featured meta-matching as its primary gimmick. As other independent puzzle hunts began popping up in the late 2010s, the mechanic stuck around, showing up in the Galactic Puzzle Hunt and QoDE, among others.

It is possible that some of the inspiration for meta-matching as a concept comes from both escape rooms and classic mystery stories. Escape rooms tend to present many clues at once, but players are required to follow them in the correct order, sometimes matching the right clue to the right puzzle. Similarly, many classic mysteries present the reader with many clues at once, and it becomes their (and the detective's) job to sort them into genuine clues and red herrings. While neither of these demands the level of sorting that meta-matching requires, it's easy to see where some influence may have occurred.

Hunt Application

Meta-matching is primarily a difficulty-enhancing element, but it can also tie in well with particular flavor decisions made in a hunt.

Use as a Difficulty Boost

Regardless of the base difficulty of a puzzle hunt's puzzles, adding meta-matching will always increase the difficulty of the hunt. For example, a hunt with 'average'-difficulty puzzles and 'average'-difficulty metapuzzles would likely be considered an average-difficulty hunt. However, if none of the individual puzzles' difficulties are changed but meta-matching is introduced, solvers must take an extra step to solve the metas: rather than simply being told which puzzles go to a particular meta, they need to both understand how a meta works and select the answers that belong to it before solving it.
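
As a rough, hypothetical illustration of the scale involved (not drawn from any particular hunt): suppose a hunt has 20 feeder answers split evenly among 4 metas, and solvers have no outside information about which answer belongs where. The number of ways to partition the answers among the metas is the multinomial coefficient

20! / (5!)^4 = 11,732,745,024 ≈ 1.2 × 10^10

which is far too many possibilities to check by brute force. This is why external hints such as puzzle titles, and the partial groupings found along the way, matter so much.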

Since most meta-matching hunts are designed as meta-matching hunts from the start, rather than having the element swapped in midway through, the amount of difficulty the element adds can vary. This variance is usually based on how many answers could theoretically fit another meta's theme or gimmick (regardless of whether they actually work there). It can also depend on the metas themselves, as it may be difficult to determine the function of some metas without actively trying several different answers in them.

Use as Flavor

Since meta-matching is, at its core, about sorting things into different categories, it lends itself well to any hunt or round theme that involves doing the same. Hunts involving putting items or people in their correct locations fit remarkably well (as seen in Puzzle Boat 3 and the Students round of MITMH 2021).

Outside of story or thematic flavor, meta-matching is also widely considered a fun challenge when done well. Since it tends to come into play closer to the end of a hunt or round, it can also help solvers who focus more on meta-solving or organizational work get involved on a larger scale, sorting answers into categories and finding small connections between sets of them.

Notable Examples

Hunt

  • Puzzle Boat (All but PB1 and PB5) (web) - Puzzle Boats use meta-matching almost for the sake of tradition at this point. Puzzle Boat 1 and Puzzle Boat 5 both used a round-based format, which prevented meta-matching on the hunt-wide scale. Additionally, while the first half of PB8 didn't involve meta-matching, the second half (spanning 10 metapuzzles) did.
  • MIT Mystery Hunt 2019 (web) - Notably a pared-down version of meta-matching. In the hunt, puzzles were still separated into rounds, but metapuzzles sat on paths between two rounds, taking in answers from the rounds on either side. This meant that every round but one (the introductory Christmas Town) contributed to 2 or 3 different metapuzzles.
  • Galactic Puzzle Hunt 2019 (web) - While the introductory round didn't involve meta-matching at all, the latter half of the hunt used it in full force. Combined with multipurpose answers, where every answer went to one or two different metas, this meta-matching system was a doozy, difficulty-wise.

Rounds

  • Safari Adventure (MITMH 2020) (web) - With 11 metapuzzles, this round would be expected to be massive. Instead, each of its puzzles (all named after animals) has anywhere from 1 to 5 answers, indicating how many times that puzzle/animal gets used in a meta. Additionally, the matching was made somewhat easier by having each of the metas involve animals in some way, allowing them to indicate which animals/puzzles they needed.
  • Students (MITMH 2021) (web) - 54 puzzles sorted into 10 metas, along with one metameta that used all of the feeder answers at once (plus information added to the students after the metas were solved).
  • Round Trois (CMU Hunt Spring 2022) (web) - A round made up entirely of metas! Instead of getting feeders to solve, solvers were given a list of feeder answers to use with the metas.

See Also