Extraction By Reward
Rewards are a way for a puzzle to present solvers with an answer. While reward-based answer distribution is less common than the "default" method of extracting the answer from the puzzle itself, it has been steadily increasing in popularity, and now often occurs at least once per hunt.
Background
In the early days of puzzle hunts, answers as rewards were scarce, if not entirely absent. This can mostly be explained by the format hunts usually came in (paper distributed to teams left to their own devices), which did not allow for the amount of mid-puzzle communication between solver and setter that we see today. On top of that, the first several years of puzzle hunts had relatively small "puzzles", leaving little room for mid-puzzle calls to perform a task. The best a solver could hope for during the MITMH, up until the digital era, was that a puzzle would resolve to a prompt to visit a location.
As puzzle hunts became more interconnected with the internet, tasks beyond the usual scavenger hunt became more reasonable to include. In addition, digital answer submission allowed for direct contact with hunt runners, who could respond to an intermediate submission with a request for a task to be completed. When automatic answer checking became commonplace at the end of the 2010s, additional tasks at the ends of puzzles became even more accessible to hunt writers, since they no longer needed a human to respond to a submission and tell solvers to complete the task; with the right programming, the answer checker could respond to a particular phrase (the "answer" to the first part of the puzzle) with instructions on how to continue.
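To make that last point concrete, here is a minimal, hypothetical sketch (in Python) of an answer checker that treats an intermediate phrase as a trigger for further instructions rather than as a wrong guess. All of the names here (FINAL_ANSWER, INTERMEDIATE_RESPONSES, check_submission) are invented for illustration and are not taken from any real hunt site's codebase.

```python
# Hypothetical answer checker: intermediate phrases return instructions
# instead of "Incorrect", so solvers are told how to continue mid-puzzle.

FINAL_ANSWER = "EXAMPLEANSWER"  # invented final answer for illustration

# Intermediate phrases mapped to the instructions the checker should return.
INTERMEDIATE_RESPONSES = {
    "SINGTOHQ": "Partial solve! Now perform a song for HQ to receive your reward.",
}


def normalize(submission: str) -> str:
    """Uppercase the guess and strip non-letters, the usual hunt convention."""
    return "".join(ch for ch in submission.upper() if ch.isalpha())


def check_submission(submission: str) -> str:
    guess = normalize(submission)
    if guess == FINAL_ANSWER:
        return "Correct!"
    if guess in INTERMEDIATE_RESPONSES:
        # Not wrong, just not done: hand back the next step instead.
        return INTERMEDIATE_RESPONSES[guess]
    return "Incorrect."
```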
Even outside of hunts, hunt-adjacent properties such as ARGs (alternate reality games, like the puzzle-card-based Perplex City) adopted the idea. The most notable example of answers-as-rewards outside of hunts is likely the Perplex City puzzle "Billion To One", which presented solvers with a picture of a Japanese man and the accompanying text "Find Me" (a task that took solvers 15 years to accomplish).
Puzzle Application
The one key aspect of a reward-based extraction that differentiates it from any other extraction is that the answer itself cannot be obtained from the puzzle as presented to solvers. While the exact nature of the reward can differ, this is constant across all proper cases of reward extraction.
Possible ways of distributing a reward-based answer include:
- Presenting solvers with an additional puzzle that solves to the final answer.
- Presenting solvers with a physical object (that may or may not be a puzzle) that contains the answer in some way.
- Telling solvers the answer verbally (often with clear emphasis).
- Note: this does require solvers to memorize the answer, often while travelling back to HQ from somewhere else. This can lead to situations that are fun for the writer but painful for the solver, in which a long or complicated answer is presented with no paper on hand to write it down.
- Having digital "tasks" that result in automatic presentation of the final answer.
The exact means by which these rewards are earned can vary as well, but the most common is a task or challenge, such as a scavenger hunt or something resulting from an instructional submission mid-puzzle. They can also come from events, but such cases are limited to events that act as puzzles requiring a final answer submission.
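For the fully digital version of such tasks, the automatic reveal can be as simple as tracking which steps a team has finished and returning the final answer once the whole set is complete. The sketch below is again purely illustrative: the step names, TASK_STEPS, and mark_step_complete are assumptions rather than any hunt platform's actual API, and a real site would track completion per team in its backend.

```python
# Hypothetical digital task: once every required step is confirmed complete,
# the final answer is presented automatically, with no call to HQ needed.

FINAL_ANSWER = "EXAMPLEANSWER"  # invented final answer for illustration

# Steps the team must complete, e.g. mini-interactions on the puzzle page.
TASK_STEPS = {"upload_photo", "solve_minipuzzle", "send_team_cheer"}

completed_steps: set[str] = set()  # a real site would store this per team


def mark_step_complete(step: str) -> str | None:
    """Record a finished step; return the answer once the whole task is done."""
    if step not in TASK_STEPS:
        raise ValueError(f"Unknown task step: {step}")
    completed_steps.add(step)
    if completed_steps == TASK_STEPS:
        return FINAL_ANSWER  # every step done: reveal the answer automatically
    return None
```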
Notable Examples
- Billion to One (Perplex City S1) (web) - A worldwide hunt for a single man based only on a picture. Over the course of the 15-year hunt, it was revealed that the man's name was Satoshi, and that the picture was taken in the town of Kaysersberg in Alsace, France. By the time he was found in 2020, Satoshi had actually forgotten the question he was supposed to ask the person who found him; in the end, it had to be posted online by the puzzle's author.
- Everything Is Something (MITMH 2000) (web) - Arguably the first case of a mid-puzzle task being presented, as well as the first case of automatic answer-rewarding. The puzzle revolved around a short trivia challenge embedded into the OEIS, activated by solving the puzzle initially available to solvers. After completing all of the steps, the answer would be shown automatically.