In our paper "Humans are poor few-shot classifiers for Sentinel-2 land cover", we compared the accuracy of human participants with a deep learning model trained with model-agnostic meta-learning (MAML) on land cover classification tasks across different geographic regions. This game is based on the survey interface we presented to 21 participants and shows individual classification tasks similar to the ones the MAML model would have to solve from satellite data.
You can play along and see how you would have compared to the average participant, the best participant of the study, or the MAML model.
You can compete against:
Note that we simulate each competitor's accuracy by random sampling at their expected accuracy: 60% for the average (median) participant, 77% for the best participant, and 81% for the MAML model, according to Table 1 of our paper.
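Simulating a competitor this way amounts to drawing each answer as an independent Bernoulli trial at the expected accuracy. A minimal sketch of the idea (the names and structure are illustrative, not taken from the game's actual code):

```python
import random

# Expected accuracies from Table 1 of the paper
COMPETITOR_ACCURACY = {
    "average participant": 0.60,
    "best participant": 0.77,
    "MAML model": 0.81,
}

def simulate_answer(competitor: str, rng: random.Random) -> bool:
    """Return True if the simulated competitor answers this task correctly.

    Each answer is an independent Bernoulli trial at the competitor's
    expected accuracy, so over many tasks the simulated accuracy
    converges to the value from the paper.
    """
    return rng.random() < COMPETITOR_ACCURACY[competitor]

rng = random.Random(42)
n_tasks = 10_000
correct = sum(simulate_answer("MAML model", rng) for _ in range(n_tasks))
print(f"simulated MAML accuracy: {correct / n_tasks:.2%}")
```

Over a single short game the simulated competitor's score will fluctuate around its expected accuracy, which is why your head-to-head result can differ from the paper's averages.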