Blockchain ecosystems, such as those built around chains, layers, and services, try to engage users for a variety of reasons: educating users, growing and protecting market share, climbing leaderboards that rank competing systems by usage metrics, demonstrating usage to investors, and identifying worthy recipients of newly created tokens (airdrops). A popular approach is offering user quests: small tasks that a user can complete, which introduce the user to an action they might want to perform in the future and reward them for completion. In this paper, we analyze a proprietary dataset from one deployed quest system that offered 43 unique quests over 10 months and recorded 80M completions. We offer insights into the factors that correlate with task completion: the reward amount, the reward's monetary value, task difficulty, and cost. We also discuss the role of farming and bots, and the factors that complicate distinguishing real users from automated scripts.
Joseph Al-Chami and Jeremy Clark (Concordia University)
/Dataset : our dataset (which de-identifies the company) and a README explaining the data (see the loading sketch below)
Reviews.md : the reviews of the paper and how we addressed them in the camera-ready
/LaTeX : revisions to the paper after acceptance (first commit is the paper as submitted, last commit is the final camera-ready)
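As a starting point, the snippet below is a minimal sketch of loading the files in /Dataset with pandas. It assumes the data ships as CSV files, which may not match the actual layout; the README inside /Dataset is the authoritative description of the files and columns.

```python
# Minimal, illustrative loader for the files in /Dataset.
# Assumption (hypothetical): the data is stored as CSV files; consult
# /Dataset/README for the actual file names and column layout.
from pathlib import Path

import pandas as pd

DATASET_DIR = Path("Dataset")


def load_all_tables(dataset_dir: Path = DATASET_DIR) -> dict[str, pd.DataFrame]:
    """Load every CSV file found in the dataset directory, keyed by file stem."""
    tables = {}
    for csv_path in sorted(dataset_dir.glob("*.csv")):
        tables[csv_path.stem] = pd.read_csv(csv_path)
    return tables


if __name__ == "__main__":
    # Print a quick summary of whatever tables are present.
    for name, df in load_all_tables().items():
        print(f"{name}: {len(df)} rows, columns = {list(df.columns)}")
```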