Activity 3: Pose Recognition

Train and Test the Pose Classifier


The first step is to open Google Teachable Machine to create a pose recognition model.

Google Teachable Machine

This is very similar to the process that you followed for image recognition. First, decide how many classes you want and name them.

Next, get ready to record your poses. Make sure that you are centered in the webcam view and that you are far enough away from the webcam to see blue lines appear on your trunk and arms. This means that the Teachable Machine is tracking your skeleton. You will need to remain at about this distance from the camera when using your model, so you may want to mark the position with masking tape.

Add samples for each of your classes. You will want to add 200-300 samples for each class. Because you must be some distance away from the camera, you may need a partner to help you take the samples.

Next, train the model. You may notice that this takes longer than training the image and audio classifiers. Make sure to leave the Teachable Machine tab open while the model is training, even if your browser pops up a warning that the window is unresponsive.

After training, test your model in the Preview panel. Make sure when testing that you stay centered in the camera view and the same distance from the camera that you were while taking the training data. When you are happy with your model, click Export Model.

Keep all of the defaults as they are, and click Upload my model. After your model has uploaded, copy your sharable link. You will need this link to create a Python program with your model. Remember to save your model in case you want to reference or change it later.

Using the Pose Classifier in Python


Open brython.birdbraintechnologies.com and connect to the Finch. You should use the browser-based version of Python for this activity because it already has all of the machine learning methods that you will need.

BirdBrain Brython

First, use MachineLearningModel.load() to import your pose recognition model into Python. This is very similar to what you did for image and audio recognition, except that the second parameter of MachineLearningModel.load() should be “pose.”
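For example, a minimal sketch (the URL below is a placeholder; paste in the sharable link that you copied from Teachable Machine):

    # Load a Teachable Machine pose model. Replace the placeholder URL
    # with your own sharable link.
    MachineLearningModel.load("https://teachablemachine.withgoogle.com/models/YOUR_MODEL_ID/", "pose")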

Once you have loaded the model, you can use MachineLearningModel.getPrediction() to get the results of your machine learning model for the current pose. Print these results to the screen and run your program. Remember, it can take up to a minute for the classification to start the first time you run the script. As you use your classifier, make sure to stay centered in the webcam view and at the same distance from the camera that you were when you trained the model.
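For instance, here is a short sketch that prints ten predictions, one per second (the loop length and the time.sleep() pause are arbitrary choices, just to keep the output readable):

    import time

    # Print the classifier's output once per second for ten seconds.
    for i in range(10):
        prediction = MachineLearningModel.getPrediction()
        print(prediction)
        time.sleep(1)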

As always, MachineLearningModel.getPrediction() returns a dictionary. Each class label is a key for the dictionary, and the value that corresponds to that key is the probability that the current pose belongs to that class. 
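If you only need the most likely pose, you can pick the key with the largest value (the printed label will be whatever class name you chose in Teachable Machine):

    # Find the class label with the highest probability.
    prediction = MachineLearningModel.getPrediction()
    best_pose = max(prediction, key=prediction.get)
    print("Most likely pose:", best_pose, "with probability", prediction[best_pose])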

You can use the values returned by MachineLearningModel.getPrediction() to make the Finch perform certain actions when each class is detected. For example, you can change the color of the Finch beak based on the pose that is detected, as sketched below. This can be a good way to check that your pose model is working properly.
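Here is one possible sketch. It assumes a model with two classes named “Arms Up” and “Arms Down” (substitute your own class names), that the probabilities range from 0 to 1, and that the Finch is controlled with the Finch class and its setBeak() method from the BirdBrain library:

    import time
    from BirdBrain import Finch

    finch = Finch()
    MachineLearningModel.load("https://teachablemachine.withgoogle.com/models/YOUR_MODEL_ID/", "pose")

    # Check the pose once per second and set the beak color to match.
    for i in range(30):
        prediction = MachineLearningModel.getPrediction()
        if prediction["Arms Up"] > 0.8:        # assumed class name
            finch.setBeak(0, 100, 0)           # green when "Arms Up" is detected
        elif prediction["Arms Down"] > 0.8:    # assumed class name
            finch.setBeak(100, 0, 0)           # red when "Arms Down" is detected
        time.sleep(1)

    finch.setBeak(0, 0, 0)                     # turn the beak off when done

Raising or lowering the 0.8 threshold changes how confident the model must be before the beak color changes; a lower threshold reacts faster but may respond to poses the model is unsure about.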

Challenge: Write a program to make the Finch respond to each of your poses. As you test your program, investigate what happens as you change your position in the camera view or move closer to or farther from the camera.
