Introduction to developing with an IoT Edge device and interacting with Microsoft's Azure services. This session gets everyone up to speed with interacting with more complex hardware and some basics of Azure and AI / Cognitive Services, extending that to using .NET Core and interacting with the GPIO (general-purpose input/output) directly.
To start off, we will need to make sure the prerequisites are in place per the previous session. In addition to those prerequisites:
- Create an instance of IoT Hub
- Connect your bot to IoT Hub
- Log into Custom Vision and create a new Object Detection project
- Create a new project, setting the values accordingly:

| Property | Value |
| --- | --- |
| Name | Enter a name for your project |
| Resource | Select "create new" and create a new resource in your Azure account, putting it in the same resource group as your IoT Hub |
| Project Type | Object Detection |
| Domain | General (compact) |
| Export Capability | Basic platforms |
The proctors will help with the images. Remember, you need at least 15 images to train the model.
- Train and test
- When you are happy that it will detect the image appropriately, export the model from the "Performance" tab by selecting "Export", choose the "Dockerfile" format, select "ARM (Raspberry Pi 3)", and download
- Clone the repo from the location provided by the proctors
- In the root, there is a '.env' file. Open that and update the values with your Azure Container Registry settings
- Open the zip file and, from the `app` folder, copy `labels.txt` and `model.pb` to the `ImageRecognition\app` folder
- Open the root folder in VS Code
- Right click on the "Modules" folder and select "Add IoT Edge Module"
- Complete adding the module with the following settings:
| Property | Value |
| --- | --- |
| Type | C# module |
| Name | controller |
| Docker image repository | $CONTAINER_REGISTRY_ADDRESS/controller |
- You should now have a new module with a basic dotnet core application
- In the `deployment.template.json` file, find the node for your controller. It should look like this:

```json
"controller": {
  "version": "1.0",
  "type": "docker",
  "status": "running",
  "restartPolicy": "always",
  "settings": {
    "image": "${MODULES.controller}",
    "createOptions": {}
  }
}
```
- Update the image to point directly to the arm32v7 Dockerfile:

```json
"image": "${MODULES.controller.arm32v7}",
```
- Replace the `"createOptions": {}` node with the following:
```json
"createOptions": {
  "HostConfig": {
    "Privileged": true,
    "Binds": [ "/dev/gpiomem:/dev/gpiomem" ],
    "Devices": [
      {
        "PathOnHost": "/dev/i2c-1",
        "PathInContainer": "/dev/i2c-1",
        "CgroupPermissions": "rwm"
      },
      {
        "PathOnHost": "/dev/gpiomem",
        "PathInContainer": "/dev/gpiomem",
        "CgroupPermissions": "rwm"
      }
    ],
    "Mounts": [
      {
        "Type": "bind",
        "Source": "/lib/modules/",
        "Target": "/lib/modules/"
      }
    ]
  }
}
```

This will allow the new container to access the GPIO of the Raspberry Pi.
- A bit further down, find the `routes` section and replace the contents with:

```json
"CameraCaptureToController": "FROM /messages/modules/camera-capture/outputs/output1 INTO BrokeredEndpoint(\"/modules/controller/inputs/input1\")"
```

This will direct communication between the camera-capture module and your new controller module.
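After the replacement, the `routes` node should end up looking something like this (the nesting is shown for context; the surrounding `$edgeHub` structure comes from the standard deployment template, so treat this as a sketch):

```json
"routes": {
  "CameraCaptureToController": "FROM /messages/modules/camera-capture/outputs/output1 INTO BrokeredEndpoint(\"/modules/controller/inputs/input1\")"
}
```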
- Open the controller folder, and then the "Dockerfile.arm32v7" file. Change the base images to "mcr.microsoft.com/dotnet/core/sdk:3.1-buster" for the build-env stage and "mcr.microsoft.com/dotnet/core/runtime:3.1-buster-slim-arm32v7" for the runtime stage, so they match the target framework. Then open the `controller.csproj` file and change the 'TargetFramework' to 'netcoreapp3.1'
- Right click on the `deployment.template.json` file and select 'Build and Push IoT Edge Solution'. This may take a while, so feel free to help someone around you 🙂
- When that has completed, open up the 'config' folder in VS Code; there should be a 'deployment.amd64.json' file in it. Right click on the file and select 'Create Deployment for Single Device', then select your Raspberry Pi's device in the dropdown
- Using PuTTY or another SSH client, connect to your Raspberry Pi
- Type `sudo iotedge logs edgeAgent --tail 10` to see if your Pi is updating
- If you run `docker ps -a` you should see the running containers as well. Wait for your controller container to show in the list, and then type `sudo iotedge logs controller`. You should see the JSON predictions on images captured by the camera, passed to your ImageRecognition object detection model, and relayed back to your new .NET-based controller
- Now you can start coding to have the bot search for the object that the model was trained on
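For reference, each payload relayed to the controller typically looks something like the fragment below. The exact fields and tag names depend on your exported model, so treat this as an illustrative sketch of the Custom Vision prediction format rather than the exact output:

```json
{
  "created": "2020-01-01T12:00:00.000000",
  "predictions": [
    {
      "probability": 0.95,
      "tagName": "ball",
      "boundingBox": { "left": 0.21, "top": 0.33, "width": 0.40, "height": 0.35 }
    }
  ]
}
```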
- Drop down to the terminal and navigate to the location of the `controller` .NET Core project. Add the GPIO and device bindings libraries by running the following commands:

```shell
dotnet add package System.Device.Gpio --source https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet5/nuget/v3/index.json
dotnet add package Iot.Device.Bindings --source https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet5/nuget/v3/index.json
```
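To sanity-check that the package and the container's GPIO access work together, a minimal sketch along these lines can be used. Pin 17 and the blink timing are assumptions, not values from the provided solution (use the pin mappings from the provided helpers instead), and this must run on the Pi, where `/dev/gpiomem` exists:

```csharp
using System;
using System.Device.Gpio;
using System.Threading;

class BlinkSketch
{
    static void Main()
    {
        // Pin 17 is an assumed example; substitute a pin from the provided mappings.
        const int pin = 17;

        // GpioController talks to /dev/gpiomem, which the createOptions above bind into the container.
        using var controller = new GpioController();
        controller.OpenPin(pin, PinMode.Output);

        // Blink the pin a few times to confirm GPIO access works end to end.
        for (int i = 0; i < 5; i++)
        {
            controller.Write(pin, PinValue.High);
            Thread.Sleep(500);
            controller.Write(pin, PinValue.Low);
            Thread.Sleep(500);
        }
    }
}
```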
- Open up the `Program.cs` file and take a look at the code. You will notice that there is a message handler, `PipeMessage`, that handles the incoming messages. As you will have noticed, the incoming message is in JSON format
- In the source code that was provided, there are some helpers and pin mappings that will help you get up and running moving the bot around
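As a sketch of what the message handling might do, the snippet below parses a prediction payload and picks the highest-probability tag. The field names (`predictions`, `tagName`, `probability`) are assumptions based on the Custom Vision export format, and `BestPrediction` is a hypothetical helper, not part of the provided source:

```csharp
using System;
using System.Linq;
using System.Text.Json;

static class PredictionParser
{
    // Hypothetical helper: pick the highest-probability prediction out of a
    // Custom Vision-style payload, e.g. {"predictions":[{"tagName":...,"probability":...}]}
    public static (string Tag, double Probability) BestPrediction(string json)
    {
        using var doc = JsonDocument.Parse(json);
        var best = doc.RootElement
            .GetProperty("predictions")
            .EnumerateArray()
            .OrderByDescending(p => p.GetProperty("probability").GetDouble())
            .First();
        return (best.GetProperty("tagName").GetString(),
                best.GetProperty("probability").GetDouble());
    }
}

class Demo
{
    static void Main()
    {
        var json = "{\"predictions\":[{\"tagName\":\"ball\",\"probability\":0.91},{\"tagName\":\"cup\",\"probability\":0.12}]}";
        var (tag, probability) = PredictionParser.BestPrediction(json);
        // Decide whether (and where) to move the bot based on what was detected.
        Console.WriteLine($"Best match: {tag} ({probability:P0})");
    }
}
```

Inside `PipeMessage` you would apply the same kind of parsing to the incoming message body and then call the provided pin-mapping helpers to drive the motors.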
Notes:
If you receive the error `Unhandled exception. System.IO.IOException: Error 13 initializing the Gpio driver` from the controller module, run `sudo chmod 777 /dev/gpiomem` on your Pi.