
First prototype: SnapHunt Bingo is making progress

This is the second part of a three-part blog post about the game project "SnapHunt Bingo", which I'm developing with three other university students as part of our obligatory software project.

The development of our game “SnapHunt Bingo” is making progress. All the important screens of the app - login screen, main menu, lobby, game board and evaluation screen - are now in place, and some of them are already fully functional. You can now walk through the different stages of the game. The backend has also made a lot of progress, but the connection between frontend and backend hasn't been fully implemented yet.

How the game looks and what works

To test the game you first have to install the .apk. When you open the app, you are greeted with a login screen, where you can choose to log in with your Google account or play as a guest. The login functionality is the first feature with a backend connection and is therefore fully functional.

Login screen

After you log in, you get redirected to the main screen. There you can either create or join a game. The logic for joining a game is not implemented yet, so for now our proof of concept is a single-player experience. When you create a game, you get to the lobby screen, where you can adjust some settings, such as enabling the AI integration or removing words and players. These settings are still static data stored only in the frontend, so changing them has no effect yet.

Main menu and lobby view

Once you start the game you'll see the game board with your bingo field and a timer counting down. You can tap a word to get to a camera view where you can snap a picture. For now the picture is only stored on your device and not sent to the server. The picture then appears on your bingo card where the text used to be.
At the end you can go to the evaluation screen. For now it is filled with random stock footage, but it shows how it will look in the finished game.

Game board and evaluation screen

Even though there is no multiplayer yet, playing the game at the HdM was a lot of fun:

Game Testing at the HdM

Frontend Update

We have made significant progress on the frontend of our project since the last blog post. One of the first things we did was decide on an architectural pattern. The Flutter documentation recommends the MVVM pattern. As Flutter beginners we made a lot of mistakes while trying to implement it - just as we expected. For example, we intended the guest login view and the Google login view to share the same underlying data, so the created user object could be used throughout the whole application. However, a view model should have a one-to-one relationship with its corresponding view.

At first we ignored this issue, but as similar problems occurred repeatedly, we decided to introduce a dedicated repository layer for data handling. With this change we also completely reworked our folder structure, which had been the next big topic in our queue anyway. The following picture shows an example of our frontend architecture:

MVVM structure example
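
To make the layering concrete, here is a rough sketch of the idea - written in Python for readability (the app itself is Dart/Flutter) and with made-up names: each view gets its own view model, while shared data like the logged-in user lives in a repository that several view models can access.

# Rough sketch of the MVVM + repository layering (Python, names made up;
# the real app is written in Dart/Flutter)
class UserRepository:
    """Shared data layer: holds the user object for the whole application."""
    def __init__(self):
        self.current_user = None

    def login_as_guest(self, name):
        self.current_user = {"name": name, "guest": True}
        return self.current_user

class GuestLoginViewModel:
    """One view model per view; data handling is delegated to the repository."""
    def __init__(self, repo):
        self.repo = repo

    def on_login_pressed(self, name):
        return self.repo.login_as_guest(name)

class GoogleLoginViewModel:
    """Separate view model for the Google login view, using the same repository."""
    def __init__(self, repo):
        self.repo = repo

This way both login views keep their one-to-one view model relationship, while the user object still lives in one shared place.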

As for the thought process behind the different views: we split them evenly between the two frontend developers. At first we tried to implement each view completely, with its respective view model and services. However, since we needed a first demo before Christmas, we decided to build a complete gameplay loop as a client-side application instead. The login views were the first to be fully completed and are currently the only ones connected to the backend. The other views contain only their basic logic and layout.

To make the demo a bit more realistic, we tried to carry some information over between the views, for example the selected playtime. We also thought about using the taken pictures in the evaluation, but decided against it because it would work completely differently from our final version, where we will push the pictures to the backend while playing and fetch everything again later. Implementing a purely client-side solution for this did not make sense to us, as it would be discarded once the backend integration is complete.

Backend Update

Since the last developer blog we have focused on building the foundation of our backend. The Django project is now running and we have integrated the Django REST Framework for API development and prepared Django Channels for real-time communication. Redis is set up as the message broker and the ASGI server Daphne is ready for WebSocket support.
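
For reference, this is roughly what that wiring looks like in the Django settings - a minimal sketch, with the project name "snaphunt" as a placeholder for our actual module path:

# settings.py - minimal sketch of the Channels/Redis wiring
# ("snaphunt" is a placeholder for the actual project module)
INSTALLED_APPS = [
    "django.contrib.auth",  # ...plus the other Django default apps
    "rest_framework",       # Django REST Framework for the HTTP API
    "channels",             # Django Channels for WebSocket support
]

# Daphne serves this ASGI application for WebSocket traffic
ASGI_APPLICATION = "snaphunt.asgi.application"

# Redis acts as the message broker between Channels consumers
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {"hosts": [("127.0.0.1", 6379)]},
    },
}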

We implemented the lobby features, allowing players to create a lobby, join it, view its details, and start a game. In addition, we developed the Game model, which stores all essential settings such as duration, word list, grid size and game mode. The main endpoint POST /api/lobby/game/create/ is complete – it validates the input (e.g., exactly 9 words), takes the players from the lobby, and creates a new game.
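
To give an idea of what that endpoint does, here is a simplified sketch - the function and field names are illustrative, not our exact production code:

# views.py - simplified sketch of the create-game endpoint
# (names and fields are illustrative)
from rest_framework import status
from rest_framework.decorators import api_view
from rest_framework.response import Response

@api_view(["POST"])
def create_game(request):
    words = request.data.get("words", [])
    # a 3x3 grid needs exactly 9 words
    if len(words) != 9:
        return Response(
            {"error": "Exactly 9 words are required."},
            status=status.HTTP_400_BAD_REQUEST,
        )
    # ...take the players from the lobby and create the Game object
    # with duration, word list, grid size and game mode...
    return Response({"words": words}, status=status.HTTP_201_CREATED)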

A good application needs secure authentication, so we implemented a safe way for users to create an account. Users can now log in via Google or as a guest. The guest login was important to us because we want to keep the entry barrier as low as possible. The Google login itself is quite boring, as we use the basic Google API. For the guest login we basically create a user with the name the user puts in and generate a random email address. Both authentication variants create a refresh and an access token.
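
Stripped down, the guest flow looks roughly like this - a sketch assuming the tokens come from djangorestframework-simplejwt (our actual implementation may differ in details):

# Sketch of the guest login: create a throwaway user and issue tokens
# (assumes djangorestframework-simplejwt; the email domain is made up)
import uuid
from django.contrib.auth import get_user_model
from rest_framework_simplejwt.tokens import RefreshToken

def create_guest(display_name):
    user = get_user_model().objects.create_user(
        username=display_name,
        email=f"guest-{uuid.uuid4().hex}@example.invalid",  # random address
    )
    refresh = RefreshToken.for_user(user)
    return {"refresh": str(refresh), "access": str(refresh.access_token)}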

Testing the app was a bit tricky because Google requires a "key" from the .apk file (the app's signing fingerprint) to verify the address tokens are sent to. Since the backend team isn't familiar with Flutter, we struggled to figure out how to retrieve it. To make matters worse, only one person on the frontend team had access to the Google Cloud SDK, so we decided not to spend too much time troubleshooting the Google-related issues.

Another major challenge for the backend team was deploying the app to a server. For the first prototype, we wanted a functional server to handle frontend requests. One team member secured server access, and the backend team developed a Dockerfile to build a production-ready version of the app. The process went smoothly, with only minor issues like routing errors and file-copying mistakes in the code.

The AI Problem Part II

One thing we somewhat set aside is the AI picture classification. We didn't yet have a server to test the AI performance problems we might encounter, so we decided to focus on a functioning prototype first and think about the AI feature and its implementation afterwards. One big problem with testing different AI models and image classifiers is the limited hardware availability: local testing can only be done by one person on the backend team.

What’s next

There are still many things we plan to do to get a production-ready game. These include:

  • Fully connect frontend with backend to achieve full multiplayer functionality
  • Create working WebSockets, using Daphne, for a smooth multiplayer experience
  • Add a screen that shows the bingo fields of each player so you know who took which pictures and who got a bingo (full line)
  • Create the game and scoring logic
  • Add a score screen that shows the winner and how many points each player got
  • Some bug fixes and improvements (for example the timer stops after the app has been idle for some time)
  • Create a logo for the app’s main screen and the smartphone’s home screen
  • Implement some sort of AI image classification
  • Testing, testing and some more testing

SnapHunt Bingo - A real-life version of GeoBingo.io

SnapHunt Bingo - The idea and our plans

This is the first part of a three-part blog post about the game project "SnapHunt Bingo", which I'm developing with three other university students as part of our obligatory software project.

Bingo, the old people's game? Who wants to play that?

Well, let me introduce you to our new innovative game that isn’t played in a retirement home but outside in the real world. Yes, you have to leave your gaming chair but it will be an adventure you won’t forget for a long time.

This is a social game, so you need some friends to enjoy it. Theoretically you can play with just two players, but it's more fun to set off in groups of two - so four or more players are recommended.

This is also a mobile game: You need a smartphone running the app or the web version.

But what’s all this about?

To summarize: you have to find things from your bingo card in real life and take a picture of them. These things can be anything you can see or grasp in the real world, for example a pink car, a burning candle, a rabbit, a helicopter, yellow road markings, an olive, a disused fridge or a flying frisbee. As a group you can either come up with your own words, use a list of predefined words, or mix both. All players get the same set of words, but they are arranged differently on each bingo board. Once you've taken a picture, the word is checked off, and if you complete a whole row you get bonus points. At the end of each game all players get together to discuss the pics they hunted. You might argue whether the car is really pink or rather purple, or whether the raccoon is in fact a cat. To help with filtering all the pictures, we might add an AI integration that analyzes the easily recognizable images.

Inspiration

To be clear upfront: we didn't invent this game. The first version of this type of game was “GeoBingo”, which became popular through Twitch streamers. It is not played in real life but in Google Street View: you agree on a list of words and search for them all over the world by skimming through Street View imagery. You can play this game on https://geobingo.io/, but it costs money, since the Google Street View API fees are rather high. Because the game is open source, you can also self-host it and use your own API key (which is free as long as you don't spend every second of your free time playing).

Later, some streamers came up with the idea of bringing this game into real life. It looked like a fun activity for a group of friends, so some friends and I tried it out. But it was a hassle to keep track of the words and all the pictures, so we decided we needed a proper mobile application for this task. This is how SnapHunt Bingo was born.

Design

I already had some ideas of how the game flow should work and how the app's pages should look, so I scribbled some concepts:

concept scribbles

After some discussion with the team I created prettier versions of the screens with Figma:

Login, Lobby and game settings screens

Bingo board, image taker and scoring selection screens

Tinder style scoring screen

Those designs are not final, but they are a good foundation for discussing the design and an inspiration for the development process.
They also give our readers a good impression of how the game will look and work.

Team structure

To make our development process more efficient, we split our team into two parts. Björn and Leon are responsible for the frontend section of SnapHunt Bingo while Abel and Erzan focus on the backend and AI integration of our project.
This separation of responsibilities allows each of us to concentrate on our strengths and ensures a clean and better structured development process.

If timing issues arise and the backend backlog begins to grow, Leon will also assist with backend tasks. This flexibility helps us keep the workflow steady, even when unexpected issues occur.

We manage our tasks in GitLab using separate issue tags for the frontend and backend. Team members can pick the issues they want to work on. To ensure that our project makes the desired progress, we meet every Monday evening to discuss progress, upcoming tasks, and other open questions.

Frontend

We considered multiple framework options for our frontend development. Besides Flutter, we thought about using React Native, and after our mentoring professor suggested the Godot Engine, we looked at that option as well.

In the end we decided on Flutter for several reasons:

  • Cross-platform support: with a single codebase we can deploy our application on multiple platforms. Our primary platform will be Android, but if there is enough time we want to deploy to the web too.
  • UI flexibility: Flutter has a very extensive widget library, which enables us to build consistent and modern app designs without huge effort.
  • Strong community: Flutter is actively developed by Google and therefore has a big and active developer community, which helps us find solutions to issues we might come across.

While the other options partially share these advantages, a key factor was that Björn already has some basic knowledge of Flutter, while the other two options would have been completely new to us.

We use JetBrains Android Studio as our main development environment for the frontend, as it integrates easily with Flutter and provides an Android emulator, which we will use during the first weeks to test our application.

Backend

We decided to use Python in the backend because we wanted to learn something new - none of us had any experience using Python as a backend language. The second and bigger reason is Python's AI compatibility: Python is the industry standard for AI development, so choosing it was a no-brainer.

Django is the biggest Python backend framework and has a lot of built-in features and libraries that spare us a lot of development time. We also discussed other options such as FastAPI and Flask because of performance issues that might occur due to the way Django handles async functions.
In the end we decided on Django, as a potential performance bottleneck should not occur for our use case, and we will save some time using its vast ecosystem of libraries.

The AI Problem

The last decision to be made is how to tackle the image identification.

We have to balance word complexity, user input flexibility, accuracy, and computing power, and that is something that needs more time to test and plan. If we let the users pick their own words and have the uploads checked by the AI, we cannot guarantee the reliability of the results if a word is too obscure.

Talking about complexity: the words we provide cannot be too complex, but they also shouldn't be too boring. For example, a simple image classifier can identify a car - but there are a lot of cars on the street, so simply using “car” as a word would be a boring experience for the player, though easy for us to implement reliably. Most simple image classifiers start to struggle as the complexity increases. If we provide “green car” as a bingo field, the classifier might accept a non-green car next to a green bush as a false positive, which should be avoided.

There are two ways to ensure that the image classifier returns a correct result:

A. We use a more powerful image classifier.
B. We train our own image classifier.

Option A has the problem that we will reach a computing bottleneck pretty fast, as we only have limited hardware available for the MediaNight. A fully fledged LLM has high reliability, but an answer can take up to 5 minutes depending on the hardware.

We currently lean toward option B, which has the problem that we have to pick the words carefully: players should have unique experiences across multiple rounds without repeating words. So with a 5x5 grid of 25 words we should have around 250 words that are not prone to false positives - which means spending a lot of project time on training and fine-tuning the model.

Our primary goal for the AI integration is to provide a 3x3 grid with 9 selected words for the MediaNight that work reliably with an image classifier in the backend.
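
One direction we might evaluate for this is a zero-shot classifier like CLIP, which scores an image against arbitrary text labels and can therefore handle compound words like “green car”. A rough sketch, assuming the Hugging Face transformers library and a made-up photo path:

# Rough sketch: zero-shot image check with CLIP
# ("photo.jpg" and the label phrasing are made up for illustration)
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")
labels = ["a photo of a green car",
          "a photo of a car that is not green",
          "something else"]
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)[0]
print({label: round(float(p), 3) for label, p in zip(labels, probs)})

Whether this is fast and reliable enough on the hardware we have is exactly what we still need to test.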


How to automatically create a website with your newest komoot tour

This tutorial has been updated on the 22nd of December 2025 to improve the script and fix some errors.

I wanted to include a komoot tour on a website. For this use case komoot offers embedding a tour as an iframe. Since I don't feel like updating the iframe link on the website every time there is a new tour, I decided to write a small script to automate this task. There are still some details I want to improve in the future, which you can find at the end of this post. But for now this script works as intended.

Prerequisite

For this tutorial you should have:
- a Linux server with its own IPv4 address
- a domain name under your control

Depending on where you want to embed your iframe, it might be enough to have an IPv6 address and no domain name, but I haven't tested this yet.

Installing

First you have to install some packages:

sudo apt install python3 python3-pip nginx snapd
sudo snap install certbot --classic

Then install the PyPI package komootgpx. This tool is used to get a list of all your planned tours. You can install it either in a Python virtual environment or system-wide like I did:

pip3 install --break-system-packages komootgpx

Then add komootgpx to your PATH so you can call the command from any directory. To do this, check the output of the last command; it should show you where the package was installed. In my case it was "/home/ubuntu/.local/bin", so I added this path to /etc/environment and sourced it:

sudoedit /etc/environment
. /etc/environment

Configure HTTPS

Next up configure HTTPS for your domain so the connection between the user and the server is encrypted.

Leaving out this step might give you trouble when defining the link in the iframe (some web hosts demand an HTTPS link).

  1. Point your domain name to your IPv4 address
  2. Add a dummy nginx config with your domain name as the server_name. Restart nginx with sudo systemctl restart nginx.service
  3. Enable HTTPS for this domain with sudo certbot --nginx

Now you should be able to access your server over your domain name with HTTPS.

The script

Create a new file in your desired directory and make it executable:

touch my-script.sh
chmod +x my-script.sh

Then add the following content to your script:

#!/bin/bash
export PATH=$PATH:/home/ubuntu/.local/bin
export PYTHONPATH=$PYTHONPATH:/home/ubuntu/.local/lib/python3.11/site-packages

DOMAIN="your-website.example.com"
MAIL="your-komoot-mail-address@example.com"
PASSWORD="your-komoot-password"

NEW_TOUR_ID=$(komootgpx --mail="$MAIL" --pass="$PASSWORD" -a -t=planned -l | grep -oP '^\d+' | head -n 1)
CURRENT_TOUR_ID=$(grep -oP 'https://www\.komoot\.com/tour/\K\d+' /etc/nginx/sites-available/forward)

if [[ $NEW_TOUR_ID == $CURRENT_TOUR_ID ]]
then
    exit 0;
else
    rm /etc/nginx/sites-available/forward
    echo "server {
        server_name $DOMAIN;
        add_header Cache-Control \"no-store, no-cache\";
    if_modified_since off;
    expires off;
    etag off;
        location / {
            return 301 https://www.komoot.com/tour/$NEW_TOUR_ID/embed;
        }
        listen 443 ssl; # managed by Certbot
        ssl_certificate /etc/letsencrypt/live/$DOMAIN/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/$DOMAIN/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    }
server {
    if (\$host = $DOMAIN) {
        return 301 https://\$host\$request_uri;
    } # managed by Certbot
        listen 80;
        server_name $DOMAIN;
    return 404; # managed by Certbot
}" | tee /etc/nginx/sites-available/forward > /dev/null
    systemctl restart nginx.service
fi

Don't forget to change the variables DOMAIN, MAIL and PASSWORD. Also adjust the two export statements at the top to point to your komootgpx and Python installation directories.

This script does the following:

  1. Get ALL planned komoot tours of the specified user, take the ID of the first (newest) tour and write it to the variable NEW_TOUR_ID.
  2. Fetch the CURRENT_TOUR_ID from your nginx config.
  3. Check whether those two IDs are the same. If not:
    1. Remove the current nginx config in /etc/nginx/sites-available/ that is called 'forward'
    2. Write a new config in the same place using your NEW_TOUR_ID
    3. Restart nginx.service
  4. If there is no new tour (so NEW_TOUR_ID equals CURRENT_TOUR_ID), do nothing and exit.

To test the script, create a new komoot tour and execute the script. It should replace/create your nginx config and restart nginx. If you visit your website, you now get redirected to your komoot tour.

Cronjob

If the last step worked you can add the script as a cron job:

sudo crontab -e

Then add the following line, adjusting the path to your script's location:

15 1 * * * /home/ubuntu/my-script.sh

This executes the script every day at 1:15 in the morning (server time, UTC in my case). You can tweak this value to your liking.

Website integration

On your website add an iframe with the following content (replace the domain with your own):

<iframe src="https://my-website.example.com" width="100%" height="600" frameborder="0" scrolling="no"></iframe>

Your website now redirects visitors to the komoot page of your newest tour.

To do

These are things I want to improve in my script in the future:
- [ ] Follow best practices for using sudo in a script
- [ ] Install the komootgpx package in virtual env instead of system wide
- [ ] Maybe use a more sophisticated way to find the newest tour instead of just choosing one from the last 1-2 days
- [ ] Private tours might lead to an error because non-logged-in users can't see them. Maybe there is a way to filter them out.

How good is the Garmin Forerunner 265's heart rate sensor?

I recently bought the Garmin Forerunner 265 after doing a lot of research on the best fitness watch for my use case. Along the way I found The Quantified Scientist, a YouTuber who tests fitness gadgets in a scientific way. Unfortunately, he hasn't done a video on the Forerunner 265 yet, so I decided to run some tests on my own to see how good the watch's heart rate sensor really is. These tests and the presentation of the results are heavily inspired by him. For the technical background on how I did the testing and the data analysis, take a look at the end of this post.
To sum it up: I wore a highly accurate heart rate chest strap and the watch at the same time and compared the measured values. For each sport I created two types of diagrams:

  1. A scatter plot. Every point represents the two sensors' readings at one moment in time: the x-value is what the chest strap measured, the y-value what the watch measured. If both measure the same value (i.e., the watch is 100% accurate), the point lies on the red line. If the watch reads lower than the actual heart rate, the point falls to the right of (below) the red line; if it reads higher, to the left of (above) it. So the closer the points cluster around the red line, the more accurate the watch's sensor. The points are also transparent, so the more points overlap, the darker a spot gets.
    I also added the correlation (the R value) in the top left. In simple words: the higher the value, the more accurate the sensor. 1 is the maximum, and 0 would mean the readings are completely random.
  2. A line diagram. This diagram shows the heart rate (y-axis) over time (x-axis). There are two lines: the red line is the watch's measured heart rate and the blue line is the reference heart rate from the chest strap. The more these two lines match, the better the watch's accuracy. With this type of diagram you can see whether the watch lags behind the real heart rate or misses spikes.

I did tests with the following sports:
- Badminton
- Longboarding
- ...

Badminton

I recorded a two-hour badminton session.
image1
The scatter plot looks pretty good. Most of the points are really close to the red line. Some points lie below the line, which indicates that the watch underestimated the heart rate; these seem to be rare outliers, since there aren't many of them. The correlation of 0.91 is pretty decent, but compared to other watches it's on the lower end. For reference, this YouTube video from The Quantified Scientist shows the correlations of all the other watches he has tested. It's really not bad, but there are many models that perform better.

image2
The line diagram gives us some insight into why and at which points the watch became inaccurate. This diagram is only a snapshot of the two-hour session, since you can't see the fine differences at such a large scale. As you can see, the watch often lagged behind a little bit. That isn't too bad for this kind of sport, since I'm interested in the time I spend in the heart rate zones, which isn't affected by this lag. The watch had some trouble picking up smaller changes in heart rate and measured a higher heart rate than it should have. Apart from the outliers you can see in the scatter plot (which aren't part of this snapshot), the watch's heart rate was never really lower than the chest strap's.

I'm pretty happy with these results for badminton. In this sport I only need the heart rate for Garmin to determine my acute load and training effect, which gives me insight into the impact on my total training load. For this purpose the sensor is accurate enough. Especially in badminton, my top priority is having fun.

To be continued

I will expand and update this post with more data on different types of sports like longboarding, cycling, running, swimming and more in the future.

Technical background

How to get the measurements

In my tests I did different types of sports while wearing both the Garmin Forerunner 265 and the Polar H10 heart rate chest strap. The measurements were recorded simultaneously using the ANT+ HRM Heart Rate Monitor data field from the Garmin Connect IQ Store. You just add it to an activity as a data field and it records both the watch's and your chest strap's data. If you want to do this yourself, you should take a look at the app's manual. I also decided to disable the heart rate alarm that made a sound every time my heart rate dropped below zone 1 (open the app in your Connect IQ store, go to "Settings" and put a "0" in the "Low HR Alert" field for every user).

After you have added the data field and adjusted the settings, you just record an activity like you usually would. When you finish the activity, you can see a new diagram in your Garmin Connect (web) app: ANT Heart rate (bpm). This is your chest strap's measured heart rate. It might be tempting to use Garmin's built-in overlay feature that lays one measurement over the other. But unless the minimum and maximum measurements are exactly the same, the scales will differ - say, the ANT heart rate scaled from 100 to 200 while the watch is scaled from 80 to 200 - which makes the overlay useless for comparison.

How to use the data

To use the data in an external program you can download it in the .FIT file format. I then used the FIT SDK to convert the FIT file to a CSV file. Just download the SDK, extract it, go into the java directory, open a terminal and execute this command, where the last argument is your downloaded .FIT file:

java -jar FitCSVTool.jar ./19434511632_ACTIVITY.fit

Now you have a .CSV file with a bunch of values. You could write a script that deletes all the unnecessary and invalid data, but since I won't be doing this very often, I manually edited the file in LibreOffice Calc. I removed all columns except the timestamp and the two heart rate readings (watch and chest strap), and I removed every line with no heart rate (0 bpm) or an extremely low one (every value under 40). For the scatter plot I had to use MICRO$OFT Excel, since LibreOffice doesn't have an option to make the data points transparent. For the line diagram I could use LibreOffice Calc. I needed to do some fine-tuning to get the diagrams just the way I wanted. To get the correlation you can use this document.
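
If you'd rather script this than click through a spreadsheet, the cleanup, correlation and scatter plot can also be done with pandas and matplotlib - a sketch assuming you already trimmed the CSV down to three columns (the file and column names here are made up):

# Sketch: R value and scatter plot with pandas/matplotlib
# (assumes a trimmed CSV; file and column names are made up)
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("activity.csv")  # columns: timestamp, strap_hr, watch_hr
df = df[(df["strap_hr"] >= 40) & (df["watch_hr"] >= 40)]  # drop invalid readings

r = df["strap_hr"].corr(df["watch_hr"])  # Pearson correlation (the R value)

plt.scatter(df["strap_hr"], df["watch_hr"], alpha=0.1)  # transparent points
plt.plot([40, 200], [40, 200], color="red")  # perfect-agreement line
plt.xlabel("Polar H10 (bpm)")
plt.ylabel("Forerunner 265 (bpm)")
plt.title(f"R = {r:.2f}")
plt.show()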

How to add Eglo smart light to the app and remote controller simultaneously

I had to do a lot of research to connect my Eglo smart light (the model ECeil_G30, no longer sold) to both the AwoX Smart CONTROL app and the physical remote controller. To save you the hassle of finding it out yourself, I wrote this short tutorial. In the end you will have all your lights inside the app and you will also be able to control them separately with their dedicated physical controller.

  1. Reset the light bulb to factory default (https://www.youtube.com/watch?v=xeEryXGCfKA&t=6)
  2. Add a new device in the app (Go to "My devices", press the plus in the top right corner and set up your lamp)
  3. Reset your remote controller (simultaneously press "On" and "color cycle" for three seconds) (https://awox.support.awox.group/kb/article/1255-06-reset-the-remote-control/)
  4. Change your remote controller into the smartphone mode (simultaneously press "On" and "blue" for three seconds) (https://youtu.be/VIYTRmc6a3s?feature=shared&t=27)
  5. Add the remote controller inside the app (Go to "My Controller", press the plus in the top right corner and set up your controller)