This is a much smaller car with a small turning radius, built around the SunFounder AI PiCar kit. The software was modified by Brennan Neuser and the hardware was purchased by Sharat Bhatra.

Promoting equity and innovation in AI education.

The AI Racing League is a fun way to learn the concepts behind artificial intelligence! We learn AI by teaching small remote-controlled (RC) cars to drive around a track autonomously. The cars use a low-cost Raspberry Pi computer (or NVIDIA Nano) with a camera. Students drive around the track and gather a set of images to "train" a neural network that can be used to automatically steer the car. In addition to teaching machine learning, this course also teaches concepts like Python programming, computer vision, data science and generative AI. Our curriculum is inspired by the DonkeyCar and the CoderDojo mentoring system. We feature a wide variety of lesson plans from the 5th grade up to college-level participants. Our secret sauce is to combine the right hardware, software, mentors and a flexible learning curriculum to create fun events that students love to participate in.

Notice: We are terminating our support for the NVIDIA Nano. This is driven by the fact that the Raspberry Pi 4 with 8GB is selling at $75 while the Nano 4GB is now $150, and there are considerable problems with the end of life of NVIDIA's support for the JetPack libraries.

We have several new events happening in 2024. Please contact Dan McCreary if you would like to participate. The following organizations have expressed an interest in helping out:

The mission of the AI Racing League is to create and deliver educational materials that make fun AI training accessible to everyone. We place a special focus on students from disadvantaged communities, including women and minorities. We work as a sub-project of the Code Savvy not-for-profit organization and we adhere to their guidelines for the quality and security of our students. This means that all our volunteers have background checks and we limit the student-to-mentor ratio to no more than three students per mentor.

We are committed to equal-opportunity mentoring. We strive to recruit, train and retain the best mentors we can find. We are inspired by the values behind the CoderDojo mentoring system and their innovative use of flexible concept cards. We attempt to publish concept cards that provide a flexible and agile training environment for a wide variety of learners. We believe strongly in student-led initiatives and project-based learning. We feel students learn the most when they are building things together in teams. We believe our curriculum should be broad enough to support a wide variety of students, from beginning Python to advanced AI. Rather than force students down a single path of learning, we believe our instructors should be more like travel guides who help students explore their areas of interest. Our curriculum needs to adapt to everything from single-hour events to multi-year mentoring. See Rhizomatic Learning for what inspires us.
DIY Robocars and the Donkey Car are the communities that inspired the AI Racing League. We are grateful for all they have done and for their willingness to share their knowledge through open-source software.

CoderDojo is a community of over 2,300 free, open and local programming clubs for young people. 58,000 young people are being creative with technology with the help of 12,000 volunteers in 94 countries.

Code Savvy strives to make kids and teens more code-savvy through creative educational programs and services. We incubate and support community-based programs that bring technology and know-how to local kids and educators, all the while championing gender and ethnic diversity. Code Savvy is dedicated to ensuring the next generation of computer science professionals represents the billions of users of tomorrow's innovative technologies. Visit https://codesavvy.org

Like all CoderDojo-created content, you are free to use this content in K-12 noncommercial educational settings for teaching without paying license fees. We also encourage our community to create variations and help us enlarge the curriculum. We always appreciate attribution! Details of the license terms are here: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)

ChatGPT Prompt: How Minnesota Can Enhance the K-12 Curriculum to Create More Tech Workers. By implementing these changes and continually evaluating the curriculum's effectiveness, Minnesota can foster a tech-savvy generation ready to take on roles in the ever-evolving tech industry.

This business plan governs the AI Racing League for the calendar year 2020. We are operating as a project within Code Savvy, a registered 501(c)(3) organization. Code Savvy's mission is to promote coding skills in youth, with a focus on mentoring and training for girls and disadvantaged youth. The AI Racing League was founded in the summer of 2019. We were inspired by the viral global DonkeyCar project and wondered if DonkeyCar racing events could be used to teach AI in the classroom. The goals of the founders were to promote fun events that taught AI to girls and disadvantaged youth. We received an initial round of $9K in funding from Optum Technology Social Responsibility and launched our first event in August of 2019. This event was held at the International School of Minnesota and attracted members of the AI research community, schools, educators and students. Since August we have participated in over a dozen events promoting AI instruction. We have trained an initial set of approximately 50 mentors who are familiar with the use of the DonkeyCar.

Many colleges have "Senior Seminar" courses that allow students to spend time on projects they are interested in pursuing. This is an outline of a sample college-level project in which a team of six students will each spend about 60 hours. We can currently supply both the cars and the GPU. The school will need to purchase its own track or borrow one from a local school such as Washburn. Optum also has a track that is not being used.
The price for a track is around $175. Each lesson is designed to provide both theoretical knowledge and practical experience, ensuring a comprehensive understanding of the topics covered.

Title: AI Racing League - 60 Hours

At the end of the course, students will be asked to give a presentation and demonstration to the other students in the course. Each student will be asked to create a GitHub repository and keep their documentation, notebooks and code on that site. The use of mkdocs is strongly encouraged. Evaluators will look for signs of high-quality documentation that could be reused by future students in their learning and problem-solving. Students will be asked to describe the problems they had and how they overcame these challenges. They will also reflect on whether they effectively used generative AI to solve problems. Students will each be asked if they would recommend working with their teammates in the future. Higher ratings will be reflected in the students' course evaluations.

From the Terminal, you can open the current directory in the File Manager using the xdg-open command.

Videos: Sparkfun Autonomous Vehicle Race from 2016
SunFounder PiCar
"},{"location":"#mission","title":"Mission","text":"
"},{"location":"#other-resources","title":"Other Resources","text":"
"},{"location":"about/","title":"About the AI Racing League","text":"
"},{"location":"about/#code-savvy","title":"Code Savvy","text":"
"},{"location":"ai-assisted-learning/#2-enhanced-tech-curriculum","title":"2. Enhanced Tech Curriculum","text":"
"},{"location":"ai-assisted-learning/#3-project-based-learning","title":"3. Project-Based Learning","text":"
"},{"location":"ai-assisted-learning/#4-tech-clubs-and-competitions","title":"4. Tech Clubs and Competitions","text":"
"},{"location":"ai-assisted-learning/#5-professional-development-for-teachers","title":"5. Professional Development for Teachers","text":"
"},{"location":"ai-assisted-learning/#6-partnerships-with-tech-companies","title":"6. Partnerships with Tech Companies","text":"
"},{"location":"ai-assisted-learning/#7-tech-career-awareness","title":"7. Tech Career Awareness","text":"
"},{"location":"ai-assisted-learning/#8-incorporate-soft-skills","title":"8. Incorporate Soft Skills","text":"
"},{"location":"ai-assisted-learning/#9-diversity-and-inclusion","title":"9. Diversity and Inclusion","text":"
"},{"location":"ai-assisted-learning/#10-update-infrastructure","title":"10. Update Infrastructure","text":"
"},{"location":"ai-assisted-learning/#11-hands-on-experiences","title":"11. Hands-on Experiences","text":"
"},{"location":"ai-assisted-learning/#12-online-resources","title":"12. Online Resources","text":"
"},{"location":"ai-assisted-learning/#13-continuous-evaluation","title":"13. Continuous Evaluation","text":"
"},{"location":"business-plan-2020/#goals-for-2020","title":"Goals for 2020","text":""},{"location":"business-plan-2020/#financial-goals","title":"Financial Goals","text":""},{"location":"college-project/","title":"Sample College Project","text":"
"},{"location":"college-project/#sample-lessons","title":"Sample Lessons","text":"
"},{"location":"college-project/#hands-on","title":"Hands On","text":"
"},{"location":"college-project/#lesson-2-machine-learning-basics-6-hours","title":"Lesson 2: Machine Learning Basics (6 hours)","text":""},{"location":"college-project/#theory_1","title":"Theory","text":"
"},{"location":"college-project/#hands-on_1","title":"Hands On","text":"
"},{"location":"college-project/#lesson-3-computer-vision-fundamentals-6-hours","title":"Lesson 3: Computer Vision Fundamentals (6 hours)","text":""},{"location":"college-project/#theory_2","title":"Theory","text":"
"},{"location":"college-project/#hands-on_2","title":"Hands On","text":"
"},{"location":"college-project/#lesson-4-data-analysis-with-jupyter-notebooks-6-hours","title":"Lesson 4: Data Analysis with Jupyter Notebooks (6 hours)","text":""},{"location":"college-project/#theory_3","title":"Theory","text":"
"},{"location":"college-project/#hands-on_3","title":"Hands On","text":"
"},{"location":"college-project/#lesson-5-raspberry-pi-configuration-6-hours","title":"Lesson 5: Raspberry Pi Configuration (6 hours)","text":""},{"location":"college-project/#theory_4","title":"Theory","text":"
"},{"location":"college-project/#hands-on_4","title":"Hands On","text":"
"},{"location":"college-project/#lesson-6-motors-and-servos-control-6-hours","title":"Lesson 6: Motors and Servos Control (6 hours)","text":""},{"location":"college-project/#theory_5","title":"Theory","text":"
"},{"location":"college-project/#hands-on_5","title":"Hands On","text":"
"},{"location":"college-project/#lesson-7-calibration-of-donkeycars-6-hours","title":"Lesson 7: Calibration of DonkeyCars (6 hours)","text":""},{"location":"college-project/#theory_6","title":"Theory","text":"
"},{"location":"college-project/#hands-on_6","title":"Hands On","text":"
"},{"location":"college-project/#lesson-8-unix-shell-and-system-management-6-hours","title":"Lesson 8: UNIX Shell and System Management (6 hours)","text":""},{"location":"college-project/#theory_7","title":"Theory","text":"
"},{"location":"college-project/#hands-on_7","title":"Hands On","text":"
"},{"location":"college-project/#lesson-9-gpu-configuration-and-optimization-6-hours","title":"Lesson 9: GPU Configuration and Optimization (6 hours)","text":""},{"location":"college-project/#theory_8","title":"Theory","text":"
"},{"location":"college-project/#hands-on_8","title":"Hands On","text":"
"},{"location":"college-project/#lesson-10-generative-ai-and-project-completion-6-hours","title":"Lesson 10: Generative AI and Project Completion (6 hours)","text":""},{"location":"college-project/#theory_9","title":"Theory","text":"
"},{"location":"college-project/#hands-on_9","title":"Hands On","text":"
"},{"location":"command-line-tips/#raspberry-pi-command-line-tips","title":"Raspberry Pi Command Line Tips","text":"git config --global user.name \"Joe Smith\"\ngit config --global user.email \"Joe.Smith123@gmail.com\"\ngit config --global credential.helper store\n
xdg-open
command. This is similar to the Mac open command.
"},{"location":"command-line-tips/#see-if-the-pwm-board-is-working","title":"See If the PWM Board Is Working","text":"$ xdg-open .\n
i2cdetect -l\n
i2c-1 i2c bcm2835 (i2c@7e804000) I2C adapter\n
i2cdetect -y 1\n
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: 40 -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: 70 -- -- -- -- -- -- --
Note that rows 40 and 70 have values under column 0 (on I2C bus 1). If you unplug the data connection you should get:
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: -- -- -- -- -- -- -- --
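If you prefer to check the board from Python rather than the command line, here is a minimal sketch. It assumes the PCA9685 servo board is at its default address 0x40 on I2C bus 1 and that the smbus2 package is installed (pip install smbus2); adjust the address if your board is jumpered differently.

```python
# probe_pwm_board.py -- minimal check that the PCA9685 servo driver responds on I2C bus 1
from smbus2 import SMBus

PCA9685_ADDR = 0x40   # default PCA9685 address (shows up under column 0 of row 40 above)
MODE1_REG = 0x00      # the PCA9685 MODE1 register

try:
    with SMBus(1) as bus:                       # I2C bus 1, the same bus i2cdetect -y 1 scans
        mode1 = bus.read_byte_data(PCA9685_ADDR, MODE1_REG)
    print(f"PWM board found, MODE1 register = 0x{mode1:02x}")
except OSError:
    print("No response at 0x40 -- check the wiring between the Pi and the servo board")
```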
"},{"location":"command-line-tips/#sd-card-speed-test","title":"SD Card Speed Test","text":"Home -> Accessories -> Raspberry Pi Diagnostics
Raspberry Pi Diagnostics - version 0.9
Sat Jul 3 14:25:23 2021

Test : SD Card Speed Test
Run 1
prepare-file;0;0;20628;40
seq-write;0;0;21999;42
rand-4k-write;0;0;4498;1124
rand-4k-read;8695;2173;0;0
Sequential write speed 21999 KB/sec (target 10000) - PASS
Random write speed 1124 IOPS (target 500) - PASS
Random read speed 2173 IOPS (target 1500) - PASS
Test PASS
"},{"location":"contacts/","title":"AI Racing League Contacts","text":"We have a long term vision of using an intelligent agent that will recommend the right content for each of our students based on their current knowledge and their learning goals.
"},{"location":"curriculum/#beginning-green-concepts","title":"Beginning (Green) Concepts","text":""},{"location":"curriculum/#batteries","title":"Batteries","text":""},{"location":"curriculum/#motors","title":"Motors","text":""},{"location":"curriculum/#donkey-car","title":"Donkey Car","text":""},{"location":"curriculum/#activity","title":"Activity","text":"Go to the donkey car station and look at the sample Donkey Car. Ask a mentor to show you the parts.
"},{"location":"curriculum/#questions","title":"Questions","text":"1) What are the key parts of the Donkey Car?The key parts are: * RC Car chassis * Nvidia Jetson Nano * Servo controller * Camera * Battery for the Nano
2) How do the front wheels turn?A 180 degree servo is used to steer the car
3) Can you find an electric motor?There is only a single motor in the RC chassis
4) Can you find a battery? Are their multiple batteries?There are two batteries - one for the motor and one for the Jetson Nano
5) Where is the Jetson Nano (computer)?It is right on top!
6) Where is the camera? Is it on the front or back of the car?The camera is on the top facing forward
7) What happens to the opposite wheel when you turn it?The transmission makes the wheels turn in opposite direction when one wheel is turned. - Is this correct?
8) How much does a Donkey Car cost?The car costs around $250 each. The RC chassis cost about $110.
9) Why do you think they call it a \u201cDonkey Car\u201d?They call it a \"Donkey Car\" because, like a Donkey, it is functional but not very sleek.
"},{"location":"curriculum/#intermediate-concepts","title":"Intermediate Concepts","text":""},{"location":"curriculum/#machine-learning","title":"Machine Learning","text":""},{"location":"curriculum/#activity_1","title":"Activity","text":"Go to the machine learning station and watch the demos. Ask about the difference between if-else statements and machine learning.
"},{"location":"curriculum/#questions_1","title":"Questions","text":"1) What is Machine Learning? How does it differ from traditional rule-based programming?Machine learning is a method of data analysis that automates analytical model building, based on the idea that systems can learn from data. Rule-based programming is built off of if-else statements in code, and therefore every possible situation has to be thought of in advance by the programmer. Therefore, machine learning is well suited for situations where all possible inputs may not be defined.
2) How does a computer learn?The computer learns through a process called training. Training is the process of adjusting a mathematical formula by feeding it data and adjusting the formula until it produces the desired output.
3) What are the major groups of machine learning?There are 5 major groups of algorithms within machine learning. They are: * The connectionists (Neural Networks) * The analogizers (Support Vector Machines) * The Bayesians (Bayes\u2019 Theorem) * The evolutionaries (Genetic Algorithms) * The symbolists (Inverse Deduction)
4) Applications of machine learning are everywhere, what are some examples?Some applications of machine learning are: * Voice Assistants (Siri, Alexa, etc.) * Translation * Self-Driving Cars
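To make the idea of "adjusting a formula until it produces the desired output" in question 2 concrete, here is a tiny sketch in plain Python (no libraries) that learns the slope of the line y = 2x from example data. The data, learning rate and step count are made up purely for illustration.

```python
# Tiny example of "training": adjust the weight w until w * x matches the desired output.
examples = [(1, 2), (2, 4), (3, 6), (4, 8)]   # inputs x and the answers y = 2 * x we want

w = 0.0              # the computer's initial guess for the formula y = w * x
learning_rate = 0.05

for step in range(200):
    for x, y in examples:
        prediction = w * x
        error = prediction - y           # how far off the guess was
        w -= learning_rate * error * x   # nudge w to reduce the error (gradient descent)

print(f"learned w = {w:.3f}  (the true answer is 2.0)")
```

A neural network does exactly this, but with millions of weights instead of one.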
"},{"location":"curriculum/#blue-concepts","title":"Blue Concepts","text":""},{"location":"curriculum/#black-concepts","title":"Black Concepts","text":""},{"location":"demo/","title":"Demos","text":"Although our students love hands-on learning with our DonkeyCars, there are other aspects of Artificial Intelligence we like to discuss in our classes. Here are some demos we use in our classrooms.
"},{"location":"demo/#the-teachable-machine-by-google","title":"The Teachable Machine by Google","text":"This demo works with almost any PC that has a built-in video camera. You give it a set of images or pictures, or sounds and you build a model that predicts what a new images our sounds might be. This is called a \"classification\" model.
Much of our classroom work is centered around the hot topic of Deep Learning. But AI is much more than just Deep Learning. Here are a few other areas to consider. (Taken from the book The Master Algorithm)
"},{"location":"demo/#5-camps-of-machine-learning-demos","title":"5 Camps of Machine Learning Demos","text":""},{"location":"demo/#connectionists-neural-networks","title":"Connectionists (Neural Networks)","text":"Check Out TensorFlow Playground
"},{"location":"demo/#analogizers-support-vector-machines","title":"Analogizers (Support Vector Machines)","text":"TODO: FIND GOOD DEMO
"},{"location":"demo/#bayesians-bayes-theorem","title":"Bayesians (Bayes\u2019 Theorem)","text":"Check Out A Bayes' Theorem Example
"},{"location":"demo/#evolutionaries-genetic-algorithms","title":"Evolutionaries (Genetic Algorithms)","text":"Watch This Animation Learn to Walk
"},{"location":"demo/#symbolists-inverse-deduction","title":"Symbolists (Inverse Deduction)","text":"Look at this Decision Tree Demo
"},{"location":"faqs/","title":"AI Racing League Frequently Asked Questions","text":""},{"location":"faqs/#how-much-does-it-cost","title":"How much does it cost?","text":"All our events are free.
However, if you want to build your own car, you are welcome to bring it to our events. Parts for a DonkeyCar typically run about $250 US. See our Car Parts Lists for details.
"},{"location":"faqs/#what-do-i-have-to-know-before-i-come","title":"What do I have to know before I come?","text":"Nothing! We have material for beginners without any prior knowledge of AI.
"},{"location":"faqs/#what-car-part-hardware-do-you-use","title":"What car part hardware do you use?","text":"We use mostly NVIDIA Nano and Raspberry Pi 4 for our single board computers. We use a wide variety of RC-car engines but the [Exceed Magnet] 1/16 scale RC car is a low-cost standard. See our Car Parts Lists for details. Typical car parts cost around $250 US.
"},{"location":"faqs/#what-gpus-do-you-use-and-how-much-do-they-cost","title":"What GPUs do you use and how much do they cost?","text":"We use a standard PC chassis running Lunix with a NVIDIA GPU such as a GTX 2080. These PCs can be purchased for around $1,500. Se our GPU Parts List for details.
"},{"location":"faqs/#how-do-i-sign-up-as-a-student","title":"How do I sign up as a student?","text":"The best way to get involved is by signing up as a student at the CoderDojo Twin Cities web site:
Coderdojotc.org
"},{"location":"faqs/#how-do-i-become-a-mentor","title":"How do I become a mentor?","text":"The best way to get involved is by signing up as a mentor at the CoderDojo Twin Cities web site:
https://www.coderdojotc.org/mentor_signup/
"},{"location":"faqs/#how-do-i-start-my-own-chapter-of-the-ai-racing-league","title":"How do I start my own chapter of the AI Racing League","text":"Please connect with Dan McCreary on LinkedIn and indicate in the note you would like to start your own chapter. Be sure to include information about your leadership and technical background and any related experience working with STEM programs.
"},{"location":"faqs/#can-i-get-a-grant-to-purchase-hardware-for-our-school-or-club","title":"Can I get a grant to purchase hardware for our school or club?","text":"We are working on arranging a grant application process. The best way to start this process is to gather a small group of volunteers that can create a sustainable club. Include people that have a combination of fundraising, technology, education and marketing skills. Reach out to local school administration officials to build a community of science/math and STEM educators. Network with local companies that are trying to build local talent in AI and machine learning.
Please contact Dan McCreary on LinkedIn for details.
"},{"location":"glossary/","title":"AI Racing League Glossary of Term","text":""},{"location":"glossary/#calibration","title":"Calibration","text":"A step in setting up a DonkeyCar where around five values configuration file is created that reflect the physical aspects of the RC car. There are three parameters for the throttle and two parameters for the steering. It is important to get these five parameters correct so you can precisely drive your DonkeyCar.
"},{"location":"glossary/#catalog-file","title":"Catalog File","text":"A format of storing our image-related throttle and steering data in line-oriented file where each line contains the serialized JSON information when the image was captured.
Note that the catalog files are not pure JSON files. Only the data within each line is a valid data JSON object.
The catalog file formats have changed between DonkeyCar releases. The current version is called V2 format.
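Because each line is its own JSON object, a catalog can be read with a few lines of Python. This is a sketch only: the file name catalog_0.catalog is hypothetical and the exact record keys vary between DonkeyCar releases.

```python
# Read a DonkeyCar catalog file: one JSON record per line, not one big JSON document.
import json

records = []
with open("catalog_0.catalog") as f:          # hypothetical file name
    for line in f:
        line = line.strip()
        if line:                              # skip any blank lines
            records.append(json.loads(line))  # each line is a valid JSON object on its own

print(f"{len(records)} records")
print(records[0])   # e.g. image file name, throttle and steering values, timestamp
```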
"},{"location":"glossary/#coderdojo","title":"CoderDojo","text":"An international program of over 2,300 coding clubs that uses data-driven practices to get students interested in coding. Many of the aspects of the AI Racing League uses these same principals.
Key aspects of CoderDojo are:
Their main web site is: http://coderdojo.com/
"},{"location":"glossary/#coderdojo-twin-cities-python-labs","title":"CoderDojo Twin Cities Python Labs","text":"These labs are a set of free-courses to learn Python using fun turtle graphics. There is no experience needed.
These labs are a set of free-courses to learn MicroPython. You should have a background in Python to use these labs. There are lessons in sensors, motors and robots.
This is a $25 robot that you build and program with MicroPython. If you are working on a project that will lead up to a full DonkeyCar, this is an ideal project to get you started. The robot will get you familiar with concepts like PWM, motor controllers and sensors.
Code Savvy is a not-for-profit organization with 501(c)3 status from the IRS, under which the AI Racing League operates as a sub-project. All the AI Racing League financials are organized under a Code Savvy program. Donations to the AI Racing League should be made through Code Savvy donations. Questions about Code Savvy can be sent to kidscode@codesavvy.org
These are small laminated cards with concept information on them that students can learn. The idea is one concept per card. See the CoderDojo TC Guide for Authoring Concept Cards
"},{"location":"glossary/#donkey-car","title":"Donkey Car","text":"This is a trademarked name of a car that is used at our events. The name implies \"ugly\" so you know that they are not designed to look pretty, just functional cars with a camera on the front.
Each of the AI Racing League events usually has at least one GPU server for training our models. These are typically small portable PCs with a GPU card in them. The entire GPU server costs around $1,200 and can train a 20,000-image data set in under five minutes.
We typically suggest that clubs writing grants use an NVIDIA GeForce RTX 2070 8GB or similar card, since it is fast enough for 10-team events but cost-effective enough that schools can afford one. These cards are often available used on eBay for a few hundred dollars.
Note that we have tried to use cloud-based services at some of our events, but we can't be guaranteed that there is enough WiFi bandwidth to move large datasets and models to and from the cloud. We also feel that the tasks involved in setting up the GPU server are themselves valuable skills for our students.
An electronic circuit that controls and regulates the speed of an electric motor. It also can reverse the direction of the motor. Our ESC
The angle our cameras need to point down to have a good view of the road ahead.
"},{"location":"glossary/#ollama","title":"Ollama","text":"A large-language model trained to answer questions in a chatbot-style interface. Ollama is unique in that is has versions that are highly compressed so they can run on a schools GPU with 12GB RAM. This can be purchased for under $2,000.
"},{"location":"glossary/#normalized","title":"Normalized","text":"Values that have been converted into a standard that can be used across many situations.
For example, we don't store the exact PWM ratios of the throttle and steering values in our catalog files. We convert these values into ranges from 0.0 to 1.0 so that all our machine learning models can share them. This is why we also need the configuration values when the drive commands are used to convert the normalized values back to the appropriate PWM ranges unique to each car.
"},{"location":"glossary/#pulse-width-modulation","title":"Pulse Width Modulation","text":"The way that we control the [Electronic Speed Controller] (ESC) and the servo by sending digital square waves with a variable ratio of the width of the positive part of the square wave.
We want our clubs to all have affordable but high-quality tracks that are easy to roll up and store. Our suggestion is to find used billboard vinyl in a dark color (black or dark blue) and then use white and yellow tape to place down the lines.
The step in DonkeyCar setup where we take approximately 20,000 small image files and the throttle and steering information with each image to build a deep neural network. The training step requires us to move the data off the DonkeyCar's SD card and transfer the data to a more powerful GPU server. Using a typical $1,200 GPU server we can build a model file in around five minutes. This file is then transferred back to the DonkeyCar for autonomous driving.
"},{"location":"glossary/#tubs","title":"Tubs","text":"This is the term that the DonkeyCar software uses to store training data. Each tub consists of a catalog of information about the drive and the images associated with that drive.
Note that the format of the tubs changes over time, so old tub formats may need to be converted to newer formats.
Each tub is a directory (folder) that has two components:
One of the two single board computers we use in our DonkeyCars. The current Nanos have 4GB RAM and a GPU for accelerating real-time inference.
The full product name is the NVIDIA Jetson Nano.
The price for a 4GB Nano is around $99 but they occasionally go on sale for $79. The Nano became available for sale in the US in April of 2019. A 2GB version has also been sold for $59 but the lack of RAM memory makes it difficult to use for many of our AI Racing League events and we don't recommend it.
Note that we do not use the Nano for training. We transfer the data to a GPU server that has more parallel cores for training.
"},{"location":"google-analytics/","title":"Google Analytics","text":"Our Google Tracking ID is: G-RL4MZ0MHZ4
You can see the activity here:
Google Dashboard
"},{"location":"google-analytics/#how-we-enabled-google-analytics","title":"How We Enabled Google Analytics","text":"mkdocs material supports Google Analysis. We only need to add four lines to our mkdocs.yml configuration file.
extra:
  analytics:
    provider: google
    property: G-RL4MZ0MHZ4
See our mkdocs.yml on GitHub here:
mkdocs.yml file in GitHub
The following line is placed into each HTML web page in the site branch:
<script async=\"\" src=\"https://www.googletagmanager.com/gtag/js?id=G-RL4MZ0MHZ4\"></script>\n
"},{"location":"hackathon/","title":"Hackathon Ideas","text":"Here are a set of ideas that can be used to plan a Hackathon around the DonkeyCar. Of course, if people are not familiar with the DonkeyCar, just getting it to work is a good project! These are more for teams that are extending the DonkeyCar software.
"},{"location":"hackathon/#beginner-projects-done-in-under-one-day","title":"Beginner Projects - done in under one day","text":"Make it easy to transfer DonkeyCar test data to our GPU server. Start with a UNIX shell script that compresses the tub file and puts the data on a jump drive. Then work on using SSH to copy the files to the GPU server. Then add configuration of the Avahi application and the mDNS protocols to autodiscover the ARL GPU servers and prompte the user.
"},{"location":"hackathon/#mobile-app-to-drive-the-car","title":"Mobile App to Drive The Car","text":"Most robot systems like the MIP have a simple mobile application for driving your robot around. There are two modes: A tilt mode (where you steer by tilting the phone) and a pressure mode where you can control the speed and direction by pressing on a virtual joystick. The problem we have with the current DonkeyCar 3.X system is that the web-based application is difficult to use. The tilt mode does not work on web browsers. We suggest you use a program like AppInventor for Android or Google Flutter and Dash building mobile apps.
"},{"location":"hackathon/#leaderboard-web-page","title":"Leaderboard Web Page","text":"Create a web application that tracks what teams are in the lead. The app should be a single-page application that allows team scores to be updated on a web form. The leaderboard can also be \"smart\" a look for the team config files on each DonkeyCar on the local-area network.
"},{"location":"hackathon/#oled-extension","title":"OLED Extension","text":"Add a low-cost OLED screen to each car using the SPI bus. Have the OLED screen show key parameters such as hostname, static IP address, disk space free, training data size etc. Bonus points for a mode button to cycle through screens. See Dan McCreary for the hardware.
"},{"location":"hackathon/#led-strips-for-training-server-status","title":"LED Strips for Training Server Status","text":"Add an low-cost WS-2811-B LED strip to the GPU server. Make the strip blue when idle, red when you start training an new model, and have it fade to green as the model converges. See Dan McCreary for the hardware.
"},{"location":"hackathon/#training-graph","title":"Training Graph","text":"As students walk in, give them a tablet to register. It will also ask them basic questions. It will then ask them how long they will be there. It will then suggest a set of activities and some concepts to master. The graph is a dependacy graph of all the concepts we teach at the event. Also suggest a probability they will have fun at the event.
"},{"location":"hackathon/#single-source-publishing-for-concept-cards","title":"Single Source Publishing for Concept Cards","text":"Our cards need to be authored in MarkDown but we want to disply on the web, in PPT and with PDF. To do this we want to adopt a single-source publishing pipeline.
"},{"location":"learning-strategy/","title":"AI Racing League Educational Philosophy","text":"The AI Racing League educational philosophy is founded on the following values:
Our Curriculum is based around building a series of concept cards that adhere to the \"one concept per card\" rule. Each card is a 5.5in X 8in laminated card with questions or challenges on the front and answers on the back. Concept cards have three difficulty levels with different colored borders.
Our goal is to keep the concepts as \"flat\" as possible without a deep level of dependency. We try to keep at least half of our cards mostly green beginner cards.
Students will walk into the AI Racing League and see a stack of cards. They will pick up one card or a set of cards and work on these. When they are done they return the cards and select another set of cards.
Because of our
Concept Cards in Google Docs
"},{"location":"learning-strategy/#engineering-challenges","title":"Engineering Challenges","text":"To develop a world class curriculum, we need to partner with senior engineers and curriculum developers. Here are some of the challenges we need to address.
"},{"location":"learning-strategy/#challenge-1-make-it-easy-for-short-term-learning","title":"Challenge #1: Make it easy for short term learning","text":"Engineers with experience in both hardware and software can build their own DonkeyCar from parts in a few weeks, our goal is to allow students from a wide variety of backgrounds to be able to participate in events in a flexible way. A typical CoderDojo event typically only lasts two hours and students may not have the appropriate background in hardware, Python programming or UNIX.
"},{"location":"learning-strategy/#challenge-2-on-site-traning-hardware","title":"Challenge #2: On site traning hardware","text":"Many people that are building DonkeyCars use a standard Mac or PC laptop. These systems take up to two hours to train a typical model - too long for many events. One solution would be to leverage clound-based GPUs to accelerate learning. This option typically requires transferring around 1/2 GB of images up to the clound for training the models. Models, which can typically be 10MB, then need to be transferred back from the clound to the local car. Our challenge here is that many locations may not have high-bandwith uploading and downloading services that could handle this traffic.
One solution is to acquire some robust GPUs that students can use to quickly train complex models - typically in 15 to 20 minutes. This hardware needs to be easy to use - for example, folder-based drag-and-drop and a single button press to begin training.
"},{"location":"media/","title":"Media","text":"Ready, Set, Algorithms! Teams Learn AI by Racing Cars Morningstar, Liberty Mutual workers are coming up with business ideas after exploring machine learning via mini self-driving vehicles
"},{"location":"presentations/","title":"AI Racing League Presentations","text":"These presentations are all licensed under our creative commons share alike non-commercial with attribution licenses.
Here are some sites that are of interest:
Raspberry Pi 3, 4, the Nvidia Nano, the Nvidia DX2, and the Intel Movidius Neural Compute Stick. The base DonkeyCar today uses the Raspberry Pi 3+, which has a list price of $35. This hardware is just barely able to process images in real time. Small changes in lighting will throw the car off the track. The new Raspberry Pi 4 with 4GB RAM is a new option.
The Nvidia Nano on the other hand has 128 CUDA core processors and has more than enough power to drive around a track in real time with varied lighting conditions. This is the hardware we have used for our first generation cars in the AI Racing League.
There are also college-level autonomous driving teams that use the more expensive Nvidia DX2 hardware.
"},{"location":"resources/#nvidia-nano","title":"Nvidia Nano","text":"Jetson Nano References
This is a sample suggested curriculum for a six-week AI Racing League summer school project. The students would all meet together for two hours, once a week. There are then homework assignments. The students don't need any prior experience.
"},{"location":"six-week-curriculum/#week-1-overview-and-unboxing","title":"Week 1: Overview and Unboxing","text":"See the suggested parts list for week 1
"},{"location":"six-week-curriculum/#week-2-booting-a-raspberry-pi-unix-calibration-intro-to-python-and-raspberry-pi","title":"Week 2: Booting a Raspberry Pi, UNIX, Calibration, Intro to Python and Raspberry Pi","text":"This section show how leage administrators can create their own SD image files.
"},{"location":"admin/02-sd-image/","title":"Creating a League SD Image","text":"Many times teams will not have the time to build their own image during the time allocated for an event. It typically takes 2-4 hours to create a DonkeyCar image that is ready to drive. To get around this problem, leagues frequently create their own \"reference image\" that are given to teams.
"},{"location":"admin/02-sd-image/#checklist-for-the-league-image","title":"Checklist for the League Image","text":"python --version
## Things to Remove from your Image
Note
The AI Racing League ONLY uses this for training our models. We don't need elaborate CPU overclocking or a water-cooled CPU. We don't need a powerful CPU and lots of RAM. We just need to be able to train a 20K-image model within around 5-10 minutes. Most GPUs can do this.
"},{"location":"admin/03-purchasing-guide/#portable-case","title":"Portable Case","text":"We wanted a small lightweight case with a handle and tempered glass sides so our teams can see what is inside. The price is around $110.00.
Lian Li TU150 Mini ITX Desktop Case
"},{"location":"admin/03-purchasing-guide/#motherboard","title":"Motherboard","text":""},{"location":"admin/03-purchasing-guide/#ram","title":"RAM","text":""},{"location":"admin/03-purchasing-guide/#gpu","title":"GPU","text":""},{"location":"admin/03-purchasing-guide/#solid-state-drive","title":"Solid State Drive","text":""},{"location":"admin/04-tools/","title":"AI Racing League Tools","text":"Here are some handy tools to help you with your cars.
"},{"location":"admin/04-tools/#usb-voltage-current-meter","title":"USB Voltage Current Meter","text":"Many USB battery packs don't maintain a voltage of 5.1v when under load. To monitor this, you need to watch the voltage level that arrives at the single-board computer input voltage. This needs to include any voltage drop over the USB power cable.
Sample item on e-Bay
This item uses four semi-double-loop integrating converter ICs with a precision bandgap reference. The vendor claims the device has:
Specifications:
Note the voltage drop when current is passing through the device. In practice, the exact voltage drop varies with the current. For a typical project drawing 300 milliamps, the voltage drop is around 100 mV.
"},{"location":"admin/car-box-checklist/","title":"AI Racing League Car Box Checklist","text":""},{"location":"admin/car-box-checklist/#donkey-car-nvidia-kit","title":"Donkey Car NVIDIA Kit","text":"Car Name: ____ Mac Address: ____ Static IP Address: ___
[ ] RC Car Battery Charger (7.2v NiMh)
[ ] Pi Camera Module V2 with 3D printed chassis
[ ] Anker 5V 6800mAh battery with charging cable - note that it draws 900mA when charging, so use a 1ft high-current USB cable.
[ ] 2.5-amp 5V barrel connector for desktop use of the Nvidia Nano
Optional Accessories (not in the box) 1. [ ] Keyboard 1. [ ] External Monitor
Nvidia Nano Serial Number: ________ Nvidia Nano Purchase Date: December 12, 2019
"},{"location":"admin/car-box-checklist/#raspberry-pi-donkeycar-kit","title":"Raspberry Pi DonkeyCar Kit","text":"We have looked at many variations of parts and decided to go with the Nvidia Nano, a TP-Link WiFi dongle and the Logitech F710 Joystick. Here are our recomended parts.
We are also looking into getting the wide-angle (160 degree) cameras but we have not tested these enough.
Part Name | Description | Price | Link | Note
--- | --- | --- | --- | ---
128GB microSD card | Samsung 128GB 100MB/s (U3) MicroSDXC Evo Select Memory Card with Adapter (MB-ME128GA/AM) | $20 | https://www.amazon.com/Samsung-MicroSD-Adapter-MB-ME128GA-AM/dp/B06XWZWYVP | MicroCenter in St. Louis Park has these for about 1/2 the price
Camera | Raspberry Pi Camera Module V2 - 8 Megapixel, 1080p | $30 | https://www.amazon.com/Raspberry-Pi-Camera-Module-Megapixel/dp/B01ER2SKFS | MUST be Module V2. The V1 will NOT work with the Nano.
Dupont Connectors (F-F) | EDGELEC 120pcs 20cm Dupont Wire Female to Female Breadboard Jumper Wires 3.9 inch 1pin-1pin 2.54mm Connector Multicolored Ribbon Cables DIY Arduino Wires 10 15 20 30 40 50 100cm (optional) | $8 for 120 | https://www.amazon.com/EDGELEC-Breadboard-1pin-1pin-Connector-Multicolored/dp/B07GCY6CH7 | Only need one of these
Nvidia Nano Single Board Computer | NVIDIA Jetson Nano Developer Kit | $99 | https://www.amazon.com/NVIDIA-Jetson-Nano-Developer-Kit/dp/B07PZHBDKT | Ships in two days
Power for Pi - 6700mAh | Anker [Upgraded to 6700mAh] Astro E1 Candy-Bar Sized Ultra Compact Portable Charger, External Battery Power Bank, with High-Speed Charging PowerIQ Technology | $24 | https://www.amazon.com/Anker-Upgraded-Candy-Bar-High-Speed-Technology/dp/B06XS9RMWS | I like this one but there are other variations. Some are rated at 10,000 mAh
Power Supply for Nano | SMAKN DC 5V/4A 20W Switching Power Supply Adapter 100-240 AC (US) | $10 | https://www.amazon.com/SMAKN-Switching-Supply-Adapter-100-240/dp/B01N4HYWAM | Note that this is a 4A 5V power supply.
RC Car | 1/16 2.4Ghz Exceed RC Magnet Electric Powered RTR Off Road Truck Stripe Blue NEW | $119 | https://www.ebay.com/itm/1-16-2-4Ghz-Exceed-RC-Magnet-Electric-Powered-RTR-Off-Road-Truck-Stripe-Blue-NEW/223337258165 | eBay
Wifi USB Dongle | N150 USB wireless WiFi network Adapter for PC with SoftAP Mode - Nano Size, Compatible with Linux Kernel 2.6.18~4.4.3 (TL-WN725N) | $7 | https://www.amazon.com/TP-Link-TL-WN725N-wireless-network-Adapter/dp/B008IFXQFU/ | I purchased one at Microcenter and it worked out-of-the-box on the Nano. The Ubuntu drivers are pre-loaded!
Servo Module | HiLetgo 2pcs PCA9685 16 Channel 12-Bit PWM Servo Motor Driver IIC Module for Arduino Robot | $10 for 2 | https://www.amazon.com/gp/product/B07BRS249H/ref=ppx_yo_dt_b_asin_title_o00_s00?ie=UTF8&psc=1 | Note the quantity is 2
USB Power Cable | Anker [4-Pack] Powerline Micro USB (1ft) - Charging Cable | $10 for 4 | https://www.amazon.com/gp/product/B015XR60MQ/ref=ppx_yo_dt_b_asin_title_o02_s00 | Note the quantity is 4 but you only need one
Replacement Battery | 7.2V 1100mAh 6x 2/3A Rechargeable Ni-MH RC Battery Pack w/Small Tamiya Connector 10cmX3cmX1.5cm | $9.88 + $2.39 Shipping | https://www.ebay.com/i/183877810537 | Takes several weeks to ship from China. We are looking for a local supplier. Some replacements (Airsoft guns) have reverse polarity.
"},{"location":"admin/gpu-parts/","title":"AI Racing League GPU Components","text":""},{"location":"admin/gpu-parts/#design-goals","title":"Design Goals","text":"We wanted to create a local training system that had fast training times but was portable so that we can easily carry it in a car and ship it to remote events. We can't assume any connectivity to the Internet for our events since some of them might be held in parking lots with no network access. Here are our design objectives.
We also drive to remote events and the equipment needs to be outside overnight in freezing weather. This rules out using any water-cooled hardware which gets easily damaged in freezing weather.
"},{"location":"admin/gpu-parts/#fast-training-times","title":"Fast Training Times","text":"We want students to be able to drive around a track 20 times (10 times clockwise and 10 times counterclockwise) and generate a reasonable sized data set of 20 frames per second and 224X224 images. This ends up being about 10,000 images. The sizes are a bit larger for larger tracks and slower drivers.
"},{"location":"admin/gpu-parts/#why-we-like-the-nvidia-rtx-2070","title":"Why We Like the NVIDIA RTX 2070","text":"We want to train with this data set in under five minutes. This means that we want to use a GPU card that has about 2000 CUDA cores. An example of this is the Nvidia GeForce GTX graphic cards. The RTX 2070 which currently has a list price of around $500. There are many people that are upgrading their video game systems and are selling these GPUs used on eBay and Craigslist.com for a few hundred dollars.
A higher-cost option is the NVIDIA RTX 2080 Ti, which has a retail list price of around $1,200 USD. The benchmarks for image training on these two boards were done by Dr. Donald Kinghorn in March of 2019. [His analysis](https://www.pugetsystems.com/labs/hpc/TensorFlow-Performance-with-1-4-GPUs----RTX-Titan-2080Ti-2080-2070-GTX-1660Ti-1070-1080Ti-and-Titan-V-1386/) shows that a single RTX 2080 Ti can process about 293 images per second, while the RTX 2070 only does about 191 images per second. But at about 1/3 of the price, the 2070 is still a good value.
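A rough back-of-the-envelope check of the five-minute target, using the images-per-second numbers above and an assumed 10 training epochs (the epoch count is an assumption, not a benchmark):

```python
# Rough training-time estimate (illustrative arithmetic only).
images = 10_000          # typical dataset from ~20 laps
epochs = 10              # assumed number of passes over the data

for card, images_per_sec in [("RTX 2080 Ti", 293), ("RTX 2070", 191)]:
    minutes = images * epochs / images_per_sec / 60
    print(f"{card}: about {minutes:.1f} minutes")   # roughly 5.7 and 8.7 minutes
```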
"},{"location":"admin/gpu-parts/#small-and-lightweight","title":"Small and Lightweight","text":"We originally were \"gifted\" a somewhat old GPU server used in a data center for training deep learning models. Although the sever was \"free\", it was over 70 pounds and had far more capability for RAM and power then we needed at events. Based in this experience we opted to build a much smaller system using a mini enclosure with a handle. We selected the Mini ITX Desktop Case and determined that we could still fit the GPU in this case.
"},{"location":"admin/gpu-parts/#rugged","title":"Rugged","text":"Must be able to take the bumps of shipping and be able to be left out in a car overnight in freezing temperatures. This was a requirement for remote events in rural Minnesota communities. We opted for a full SSD drive to keep the moving parts to a minimum.
"},{"location":"admin/gpu-parts/#easy-to-ship-to-remote-sites","title":"Easy to ship to remote sites","text":"We had to be able to put the unit is a remote shipping case. We are still looking for low-cost cases that are lightweight but protective.
"},{"location":"admin/gpu-parts/#visibility","title":"Visibility","text":"We wanted students to be able to look into the case and see the parts. There is a trend to also purchase RGB LED versions of components which we thought we could program to change from RED to Green during the training process as the model converges. We have not found a good API for the parts so a simple $5 LED strip on a Arduino Nano might be a better idea. See the Moving Rainbow project for sample designs. We create these at the IoT hackthons each year.
"},{"location":"admin/gpu-parts/#sample-parts-list","title":"Sample Parts List","text":""},{"location":"admin/gpu-parts/#2023-update","title":"2023 Update","text":"PCPartPicker Part List $769 with Monitor by Neal Kelly
Part Name | Description | Price | Link
--- | --- | --- | ---
CPU | AMD Ryzen 5 3600 3.6 GHz 6-Core Processor | $95.00 | Amazon
Motherboard | MSI A520M-A PRO Micro ATX AM4 Motherboard | $101.11 | Amazon
Memory | Silicon Power SP016GBLFU320X02 16 GB (1 x 16 GB) DDR4-3200 CL22 Memory | $23.99 | Amazon
Storage | TEAMGROUP MP33 512 GB M.2-2280 PCIe 3.0 X4 NVME Solid State Drive | $22.49 | Amazon
Video Card | Asus Dual GeForce RTX 3060 V2 OC Edition GeForce RTX 3060 12GB 12 GB Video Card | $299.99 | Amazon
Case | Thermaltake Versa H18 MicroATX Mini Tower Case | $49.99 | Amazon
Power Supply | be quiet! Pure Power 11 400 W 80+ Gold Certified ATX Power Supply | $89.69 | Amazon
Monitor | Acer V227Q Abmix 21.5" 1920 x 1080 75 Hz Monitor | $87.29 | Amazon
Total | | $769.55 |

Part Name | Description | Price | Link | Note
--- | --- | --- | --- | ---
CPU | AMD Ryzen 5 3600 3.6 GHz 6-Core Processor | $189.99 | |
Motherboard | Gigabyte X570 I AORUS PRO WIFI Mini ITX AM4 | $219.99 | |
RAM | Corsair Vengeance RGB Pro 32 GB (2 x 16 GB) DDR4-3200 Memory | $162.99 | Link | Notes
Storage | Gigabyte AORUS NVMe Gen4 1 TB M.2-2280 NVME Solid State Drive | $209.99 | Link | Notes
Cooling | Be quiet! Dark Rock Pro 4, BK022, 250W TDP | $89.90 | https://www.amazon.com/dp/B07BY6F8D9/ref=cm_sw_r_cp_api_i_PYp-DbFCY51CH | Avoid liquid cooler
GPU Card | NVIDIA GeForce RTX 2070 Ti 8 GB | $499.99 | https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2070-super/ | $500 price is a lower-cost alternative
Case | Lian Li TU150 Mini ITX Desktop Case | $109.99 | Link | We love the handle on this small case and the glass side panel.
Power Supply | Corsair SF 600W 80+ Gold SFX Power Supply | $114.99 | Link | 600W is overkill

Note that this motherboard does come with built-in WiFi. The external antenna must be connected, but it is easy to lose in transport. You might want to get a few additional WiFi antennas like these RP-SMA Male Antenna. We also think we could get by with a smaller and lighter power supply, but the 600W model gives the system the opportunity to add external devices that might draw more power.
"},{"location":"admin/gpu-parts/#assembly","title":"Assembly","text":"There are several good videos on YouTube that show how to assemble custom systems. You can also use a search engine to find videos for each of the parts. The Liquid coolers can be tricky to install correctly if you don't have experience. We also recommend reading the user manauals for each of the parts. They are usually on line.
Jon Herke's Tiny Monster
"},{"location":"admin/gpu-parts/#installing-nvidia-drivers-on-ubuntu","title":"Installing NVIDIA Drivers on Ubuntu","text":"Installing NVIDIA drivers on Ubuntu is notoriously painful and difficult. One mis-step and you can't get to the monitor and have to ssh in to fix things. Make sure to setup ssh before you install the NVIDIA drivers.
We used the UNIX command line to install the NVIDIA drivers. The GUI tool on Ubuntu did not work for us in some settings. See NVIDIA Driver Install.
A guide to do this is here: Installation of Nvidia Drivers on Ubuntu 18
"},{"location":"admin/gpu-shell/","title":"Shell Commands for the GPU Server","text":"The following is a list of shell commands for the AI Racing League GPU Server. We have moved all the commands for setting up the NVIDIA GPU to the file NVIDIA Driver Install.
The samples below can be run if you are at the GPU server running a Terminal shell, or if you have logged in using the secure shell program.
"},{"location":"admin/gpu-shell/#secure-shell-login","title":"Secure Shell Login","text":"$ ssh arl@arl1.local\n
"},{"location":"admin/gpu-shell/#check-the-version-of-ubuntu","title":"Check the Version of Ubuntu","text":"$ lsb_release -a\n
Response:
No LSB modules are available.\nDistributor ID: Ubuntu\nDescription: Ubuntu 18.04.3 LTS\nRelease: 18.04\nCodename: bionic\n
"},{"location":"admin/gpu-shell/#list-the-cpu-information","title":"List the CPU Information","text":"lscpu\n
Response:
Architecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nByte Order: Little Endian\nCPU(s): 12\nOn-line CPU(s) list: 0-11\nThread(s) per core: 2\nCore(s) per socket: 6\nSocket(s): 1\nNUMA node(s): 1\nVendor ID: AuthenticAMD\nCPU family: 23\nModel: 113\nModel name: AMD Ryzen 5 3600 6-Core Processor\nStepping: 0\nCPU MHz: 2195.902\nCPU max MHz: 3600.0000\nCPU min MHz: 2200.0000\nBogoMIPS: 7187.07\nVirtualization: AMD-V\nL1d cache: 32K\nL1i cache: 32K\nL2 cache: 512K\nL3 cache: 16384K\nNUMA node0 CPU(s): 0-11\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd mba sev ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca\n
The key is that we have 6 cores and each core has two threads, giving us 12 logical CPUs that can run concurrent operations on this server. This is plenty of capacity for our GPU server.
"},{"location":"admin/gpu-shell/#ram","title":"RAM","text":"free -m\n
Response:
       total   used   free    shared  buff/cache  available
Mem:   32124   1627   28879   75      1618        30019
Swap:  2047    0      2047
This indicates we have 32 GB RAM. The GPU server should have a minimum of 8 GB of RAM.
"},{"location":"admin/gpu-shell/#disk-space","title":"Disk Space","text":"df -h /\n
Response:
Filesystem      Size  Used  Avail  Use%  Mounted on
/dev/nvme0n1p3  229G  178G  40G    82%   /
This shows we have a total of 229 gigabytes of disk space and 40 gigabytes available. We will need about 4 GB for each training set we store.
"},{"location":"admin/gpu-shell/#per-user-disk-usage","title":"Per User Disk Usage","text":"du -hs /home/* 2>/dev/null\n
Response:
8.5G    /home/arl
1.4G    /home/dan
16K     /home/dan2
155G    /home/donkey
"},{"location":"admin/gpu-shell/#add-a-new-gpu-server-user","title":"Add A New GPU Server User","text":"adduser donkey\n
You can also allow the user to have \"sudo\" rights by using the following command: sudo usermod -aG sudo donkey\n
"},{"location":"admin/gpu-shell/#change-the-hostname","title":"Change the Hostname","text":"sudo vi /etc/hostname\n
Change the name to \"gpu-server2\" or a similar name.
"},{"location":"admin/gpu-shell/#gpu-commands","title":"GPU Commands","text":""},{"location":"admin/gpu-shell/#display-type","title":"Display Type","text":"sudo lshw -class display\n
Response:
description: VGA compatible controller\n product: TU102 [GeForce RTX 2080 Ti Rev. A]\n vendor: NVIDIA Corporation\n physical id: 0\n bus info: pci@0000:09:00.0\n version: a1\n width: 64 bits\n clock: 33MHz\n capabilities: pm msi pciexpress vga_controller bus_master cap_list rom\n configuration: driver=nvidia latency=0\n resources: irq:101 memory:f6000000-f6ffffff memory:e0000000-efffffff memory:f0000000-f1ffffff ioport:e000(size=128) memory:c0000-dffff\n
Note that line 2 lists the product as TU102 [GeForce RTX 2080 Ti Rev. A], but it does not indicate how much RAM we have on the device.
"},{"location":"admin/gpu-shell/#nvidia-gpu-configuration","title":"NVIDIA GPU Configuration","text":"nvidia-smi\n
Result:
Mon Sep 25 14:17:46 2023 \n+-----------------------------------------------------------------------------+\n| NVIDIA-SMI 460.91.03 Driver Version: 460.91.03 CUDA Version: 11.2 |\n|-------------------------------+----------------------+----------------------+\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n| | | MIG M. |\n|===============================+======================+======================|\n| 0 GeForce RTX 208... Off | 00000000:09:00.0 Off | N/A |\n| 40% 32C P8 18W / 260W | 184MiB / 11018MiB | 0% Default |\n| | | N/A |\n+-------------------------------+----------------------+----------------------+\n\n+-----------------------------------------------------------------------------+\n| Processes: |\n| GPU GI CI PID Type Process name GPU Memory |\n| ID ID Usage |\n|=============================================================================|\n| 0 N/A N/A 1026 G /usr/lib/xorg/Xorg 35MiB |\n| 0 N/A N/A 2315 G /usr/lib/xorg/Xorg 90MiB |\n| 0 N/A N/A 2509 G /usr/bin/gnome-shell 10MiB |\n| 0 N/A N/A 4118 G ...eLanguageDetectionEnabled 34MiB |\n+-----------------------------------------------------------------------------+\n
This display is dense, but the key item is in the second column under the Memory-Usage heading:
184MiB / 11018MiB
This says that we are currently using 184 MiB and have roughly 11 GiB (11,018 MiB) of GPU memory available to run our machine learning tasks.
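If you only want the memory numbers, nvidia-smi also has a query mode that prints just the fields you ask for. Here is a minimal Python sketch that wraps it; it assumes nvidia-smi is on the PATH and that there is a single GPU, so the query returns one line.
import subprocess\n\n# Ask nvidia-smi for just the memory fields as plain CSV with no header or units\ncmd = ['nvidia-smi', '--query-gpu=memory.used,memory.total', '--format=csv,noheader,nounits']\noutput = subprocess.check_output(cmd, text=True).strip()\nused_mib, total_mib = [int(x) for x in output.split(',')]\nprint('GPU memory used: %d MiB of %d MiB' % (used_mib, total_mib))\n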
"},{"location":"admin/gpu-shell/#nvidia-gpu-monitor","title":"NVIDIA GPU Monitor","text":"The runs similar to the UNIX top command, but for the GPU.
watch -d -n 0.5 nvidia-smi\n
"},{"location":"admin/gpu-shell/#nvidia-gpu-utilization","title":"NVIDIA GPU Utilization","text":"This shows the GPU running at 42% utilization during the training process.
nvidia-smi\nMon Jul 26 20:24:16 2021 \n+-----------------------------------------------------------------------------+\n| NVIDIA-SMI 470.57.02 Driver Version: 470.57.02 CUDA Version: 11.4 |\n|-------------------------------+----------------------+----------------------+\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n| | | MIG M. |\n|===============================+======================+======================|\n| 0 NVIDIA GeForce ... Off | 00000000:09:00.0 On | N/A |\n| 41% 49C P2 136W / 260W | 10892MiB / 11016MiB | 42% Default |\n| | | N/A |\n+-------------------------------+----------------------+----------------------+\n\n+-----------------------------------------------------------------------------+\n| Processes: |\n| GPU GI CI PID Type Process name GPU Memory |\n| ID ID Usage |\n|=============================================================================|\n| 0 N/A N/A 1327 G /usr/lib/xorg/Xorg 18MiB |\n| 0 N/A N/A 1398 G /usr/bin/gnome-shell 71MiB |\n| 0 N/A N/A 1574 G /usr/lib/xorg/Xorg 155MiB |\n| 0 N/A N/A 1705 G /usr/bin/gnome-shell 32MiB |\n| 0 N/A N/A 23722 G ...AAAAAAAAA= --shared-files 25MiB |\n| 0 N/A N/A 27071 G ...AAAAAAAAA= --shared-files 9MiB |\n| 0 N/A N/A 32486 C ...a3/envs/donkey/bin/python 10571MiB |\n+-----------------------------------------------------------------------------+\n
"},{"location":"admin/joystick/","title":"Joystick","text":"Logitec F710 Game Controller for DonkeyCar
https://docs.donkeycar.com/parts/controllers/
Testing to see if the Nano Recognizes the F710 USB Dongle You can use the \"lsusb\" UNIX shell command to list all the USB devices:
$ lsusb Bus 002 Device 002: ID 0bda:0411 Realtek Semiconductor Corp. Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 001 Device 004: ID 0bda:8179 Realtek Semiconductor Corp. RTL8188EUS 802.11n Wireless Network Adapter Bus 001 Device 005: ID 046d:c21f Logitech, Inc. F710 Wireless Gamepad [XInput Mode] Bus 001 Device 002: ID 0bda:5411 Realtek Semiconductor Corp. Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Note that the USB device with an ID of 046d:c21f has been found in the 4th line above. The first ID before the colon is the vendor ID of the manufacturer (Logitech) and the second is the ID of their device (c21f). Linux looks this pair up in its device database and then loads the driver for this type of device.
The driver will create a device file in the /dev/input directory called js0
$ ls -l /dev/input/js0 crw-rw-r--+ 1 root input 13, 0 Aug 16 19:30 /dev/input/js0
The \"c\" in the first letter says that this is a character I/O device.
$ sudo apt-get install evtest [sudo] password for dan: Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: apt-clone archdetect-deb bogl-bterm busybox-static cryptsetup-bin dpkg-repack gir1.2-timezonemap-1.0 gir1.2-xkl-1.0 grub-common kde-window-manager kinit kio kpackagetool5 kwayland-data kwin-common kwin-data kwin-x11 libdebian-installer4 libkdecorations2-5v5 libkdecorations2private5v5 libkf5activities5 libkf5attica5 libkf5completion-data libkf5completion5 libkf5declarative-data libkf5declarative5 libkf5doctools5 libkf5globalaccel-data libkf5globalaccel5 libkf5globalaccelprivate5 libkf5idletime5 libkf5jobwidgets-data libkf5jobwidgets5 libkf5kcmutils-data libkf5kcmutils5 libkf5kiocore5 libkf5kiontlm5 libkf5kiowidgets5 libkf5newstuff-data libkf5newstuff5 libkf5newstuffcore5 libkf5package-data libkf5package5 libkf5plasma5 libkf5quickaddons5 libkf5solid5 libkf5solid5-data libkf5sonnet5-data libkf5sonnetcore5 libkf5sonnetui5 libkf5textwidgets-data libkf5textwidgets5 libkf5waylandclient5 libkf5waylandserver5 libkf5xmlgui-bin libkf5xmlgui-data libkf5xmlgui5 libkscreenlocker5 libkwin4-effect-builtins1 libkwineffects11 libkwinglutils11 libkwinxrenderutils11 libqgsttools-p1 libqt5designer5 libqt5help5 libqt5multimedia5 libqt5multimedia5-plugins libqt5multimediaquick-p5 libqt5multimediawidgets5 libqt5opengl5 libqt5positioning5 libqt5printsupport5 libqt5qml5 libqt5quick5 libqt5quickwidgets5 libqt5sensors5 libqt5sql5 libqt5test5 libqt5webchannel5 libqt5webkit5 libxcb-composite0 libxcb-cursor0 libxcb-damage0 os-prober python3-dbus.mainloop.pyqt5 python3-icu python3-pam python3-pyqt5 python3-pyqt5.qtsvg python3-pyqt5.qtwebkit python3-sip qml-module-org-kde-kquickcontrolsaddons qml-module-qtmultimedia qml-module-qtquick2 rdate tasksel tasksel-data Use 'sudo apt autoremove' to remove them. The following additional packages will be installed: evemu-tools libevemu3 The following NEW packages will be installed: evemu-tools evtest libevemu3 0 upgraded, 3 newly installed, 0 to remove and 7 not upgraded. Need to get 38.2 kB of archives. After this operation, 191 kB of additional disk space will be used. Do you want to continue? [Y/n] y Get:1 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 libevemu3 arm64 2.6.0-0.1 [11.0 kB] Get:2 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 evemu-tools arm64 2.6.0-0.1 [12.3 kB] Get:3 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 evtest arm64 1:1.33-1build1 [14.9 kB] Fetched 38.2 kB in 1s (56.1 kB/s) debconf: delaying package configuration, since apt-utils is not installed Selecting previously unselected package libevemu3:arm64. (Reading database ... 140149 files and directories currently installed.) Preparing to unpack .../libevemu3_2.6.0-0.1_arm64.deb ... Unpacking libevemu3:arm64 (2.6.0-0.1) ... Selecting previously unselected package evemu-tools. Preparing to unpack .../evemu-tools_2.6.0-0.1_arm64.deb ... Unpacking evemu-tools (2.6.0-0.1) ... Selecting previously unselected package evtest. Preparing to unpack .../evtest_1%3a1.33-1build1_arm64.deb ... Unpacking evtest (1:1.33-1build1) ... Setting up evtest (1:1.33-1build1) ... Processing triggers for libc-bin (2.27-3ubuntu1) ... Processing triggers for man-db (2.8.3-2ubuntu0.1) ... Setting up libevemu3:arm64 (2.6.0-0.1) ... Setting up evemu-tools (2.6.0-0.1) ... Processing triggers for libc-bin (2.27-3ubuntu1) ... dan@danm-nano:~$
Now run it:
$ evtest
No device specified, trying to scan all of /dev/input/event* Not running as root, no devices may be available. Available devices: /dev/input/event2: Logitech Gamepad F710
Select the device event number [0-2]: 2
Logitech Gamepad F710 Input driver version is 1.0.1 Input device ID: bus 0x3 vendor 0x46d product 0xc21f version 0x305 Input device name: \"Logitech Gamepad F710\" Supported events: Event type 0 (EV_SYN) Event type 1 (EV_KEY) Event code 304 (BTN_SOUTH) Event code 305 (BTN_EAST) Event code 307 (BTN_NORTH) Event code 308 (BTN_WEST) Event code 310 (BTN_TL) Event code 311 (BTN_TR) Event code 314 (BTN_SELECT) Event code 315 (BTN_START) Event code 316 (BTN_MODE) Event code 317 (BTN_THUMBL) Event code 318 (BTN_THUMBR) Event type 3 (EV_ABS) Event code 0 (ABS_X) Value 128 Min -32768 Max 32767 Fuzz 16 Flat 128 Event code 1 (ABS_Y) Value -129 Min -32768 Max 32767 Fuzz 16 Flat 128 Event code 2 (ABS_Z) Value 0 Min 0 Max 255 Event code 3 (ABS_RX) Value 128 Min -32768 Max 32767 Fuzz 16 Flat 128 Event code 4 (ABS_RY) Value -129 Min -32768 Max 32767 Fuzz 16 Flat 128 Event code 5 (ABS_RZ) Value 0 Min 0 Max 255 Event code 16 (ABS_HAT0X) Value 0 Min -1 Max 1 Event code 17 (ABS_HAT0Y) Value 0 Min -1 Max 1 Properties: Testing ... (interrupt to exit)
Now as you press any button or move either joystick you will see the events.
When I press the yellow Y we see: Event: time 1566006064.962158, type 1 (EV_KEY), code 308 (BTN_WEST), value 1 Event: time 1566006064.962158, -------------- SYN_REPORT ------------ Event: time 1566006065.129981, type 1 (EV_KEY), code 308 (BTN_WEST), value 0 Event: time 1566006065.129981, -------------- SYN_REPORT ------------
Blue X Event: time 1566006110.047015, type 1 (EV_KEY), code 307 (BTN_NORTH), value 1 Event: time 1566006110.047015, -------------- SYN_REPORT ------------ Event: time 1566006110.182606, type 1 (EV_KEY), code 307 (BTN_NORTH), value 0 Event: time 1566006110.182606, -------------- SYN_REPORT ------------
Red B Event: time 1566006143.423217, type 1 (EV_KEY), code 305 (BTN_EAST), value 1 Event: time 1566006143.423217, -------------- SYN_REPORT ------------ Event: time 1566006143.499642, type 1 (EV_KEY), code 305 (BTN_EAST), value 0 Event: time 1566006143.499642, -------------- SYN_REPORT ------------
Green A Event: time 1566006184.060282, type 1 (EV_KEY), code 304 (BTN_SOUTH), value 1 Event: time 1566006184.060282, -------------- SYN_REPORT ------------ Event: time 1566006184.128408, type 1 (EV_KEY), code 304 (BTN_SOUTH), value 0 Event: time 1566006184.128408, -------------- SYN_REPORT ------------
Moving the joystick generates: Event: time 1566006255.549652, -------------- SYN_REPORT ------------ Event: time 1566006255.553650, type 3 (EV_ABS), code 1 (ABS_Y), value -10923 Event: time 1566006255.553650, -------------- SYN_REPORT ------------ Event: time 1566006255.557650, type 3 (EV_ABS), code 1 (ABS_Y), value -14264 Event: time 1566006255.557650, -------------- SYN_REPORT ------------ Event: time 1566006255.561652, type 3 (EV_ABS), code 1 (ABS_Y), value -18633
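The raw axis values range from -32768 to 32767. Before they are useful as steering or throttle commands they are usually scaled into a small symmetric range; the function below is a minimal sketch of that normalization.
def normalize_axis(raw_value, max_raw=32767):\n    # Map the raw joystick range (-32768..32767) onto -1.0 .. 1.0\n    clamped = max(-max_raw, min(max_raw, raw_value))\n    return clamped / max_raw\n\n# The ABS_Y sample above (-14264) becomes roughly -0.44\nprint(normalize_axis(-14264))\nprint(normalize_axis(32767), normalize_axis(-32768))\n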
"},{"location":"admin/nvidia-driver-install/","title":"Install the NVIDIA Driver","text":"Ideally you should be able to use the Ubuntu \"Software and Updates\" tool to install the NIVIDA driver. This usually works, but if you get errors, you may need to use the unix shell.
"},{"location":"admin/nvidia-driver-install/#nvidia-card-verification","title":"NVIDIA Card Verification","text":"You can first verify that the GPU card has been installed and powered up. We can use the \"list hardware\" command with the display option:
$ sudo lshw -C display\n
*-display UNCLAIMED \n description: VGA compatible controller\n product: GV102\n vendor: NVIDIA Corporation\n physical id: 0\n bus info: pci@0000:09:00.0\n version: a1\n width: 64 bits\n clock: 33MHz\n capabilities: pm msi pciexpress vga_controller bus_master cap_list\n configuration: latency=0\n resources: memory:f6000000-f6ffffff memory:e0000000-efffffff memory:f0000000-f1ffffff ioport:e000(size=128) memory:c0000-dffff\n
This shows that there is a GPU card installed and powered up, but it is still UNCLAIMED because no display driver has been loaded for it.
"},{"location":"admin/nvidia-driver-install/#nvidia-devices","title":"NVIDIA Devices","text":"You can then use the ubuntu-drivers command to see the devices.
$ ubuntu-drivers devices\n== /sys/devices/pci0000:00/0000:00:03.1/0000:09:00.0 ==\nmodalias : pci:v000010DEd00001E07sv000010DEsd000012A4bc03sc00i00\nvendor : NVIDIA Corporation\ndriver : nvidia-driver-470 - distro non-free recommended\ndriver : nvidia-driver-460-server - distro non-free\ndriver : nvidia-driver-418-server - distro non-free\ndriver : nvidia-driver-460 - distro non-free\ndriver : nvidia-driver-450-server - distro non-free\ndriver : xserver-xorg-video-nouveau - distro free builtin\n\n== /sys/devices/pci0000:00/0000:00:01.2/0000:02:00.0/0000:03:04.0/0000:05:00.0 ==\nmodalias : pci:v00008086d00002723sv00008086sd00000084bc02sc80i00\nvendor : Intel Corporation\nmanual_install: True\ndriver : backport-iwlwifi-dkms - distro free\n
"},{"location":"admin/nvidia-driver-install/#ubuntu-drivers-autoinstall","title":"Ubuntu Drivers Autoinstall","text":"sudo ubuntu-drivers autoinstall\n
This command installs the recommended driver automatically. If you would rather install a specific driver version yourself, you can use apt:
sudo apt-get install nvidia-driver-470\n
This will often generate errors but it will indicate what other libraries need to be installed for the 470 driver to work.
"},{"location":"admin/nvidia-driver-install/#final-test","title":"Final Test","text":"Now we are ready to probe the full GPU and get all the statistics of what is in the GPU.
nvidia-smi\n
Thu Jul 22 22:59:36 2021 \n+-----------------------------------------------------------------------------+\n| NVIDIA-SMI 470.57.02 Driver Version: 470.57.02 CUDA Version: 11.4 |\n|-------------------------------+----------------------+----------------------+\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n| | | MIG M. |\n|===============================+======================+======================|\n| 0 NVIDIA GeForce ... Off | 00000000:09:00.0 Off | N/A |\n| 41% 36C P8 2W / 260W | 283MiB / 11016MiB | 0% Default |\n| | | N/A |\n+-------------------------------+----------------------+----------------------+\n\n+-----------------------------------------------------------------------------+\n| Processes: |\n| GPU GI CI PID Type Process name GPU Memory |\n| ID ID Usage |\n|=============================================================================|\n| 0 N/A N/A 1327 G /usr/lib/xorg/Xorg 18MiB |\n| 0 N/A N/A 1398 G /usr/bin/gnome-shell 71MiB |\n| 0 N/A N/A 1574 G /usr/lib/xorg/Xorg 98MiB |\n| 0 N/A N/A 1705 G /usr/bin/gnome-shell 91MiB |\n+-----------------------------------------------------------------------------+\n
If you don't get this or a similar display, you will need to keep troubleshooting the driver installation.
After you get this screen you can reboot.
"},{"location":"admin/nvidia-driver-install/#cuda-version","title":"CUDA Version","text":"nvcc --version\n
Results:
nvcc: NVIDIA (R) Cuda compiler driver\nCopyright (c) 2005-2017 NVIDIA Corporation\nBuilt on Fri_Nov__3_21:07:56_CDT_2017\nCuda compilation tools, release 9.1, V9.1.85\n
"},{"location":"admin/nvidia-driver-install/#cuda-tookkit-install-for-pytorch","title":"CUDA Tookkit Install for PyTorch","text":"conda install cudatoolkit=<CUDA Version> -c pytorch\n
conda install cudatoolkit=11 -c pytorch\n
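After the toolkit is installed, it is worth confirming that PyTorch can actually see the GPU before starting a training run. This minimal check assumes PyTorch is installed in the active conda environment.
import torch\n\n# True only when the NVIDIA driver, the CUDA runtime, and the PyTorch build all match\nprint('CUDA available:', torch.cuda.is_available())\nif torch.cuda.is_available():\n    print('Device name:', torch.cuda.get_device_name(0))\n    print('CUDA version used by PyTorch:', torch.version.cuda)\n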
"},{"location":"admin/tensorflow-gpu-install/","title":"Tensorflow GPU Software","text":"$ conda install tensorflow-gpu==2.2.0 Collecting package metadata (current_repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Collecting package metadata (repodata.json): done Solving environment: done
"},{"location":"admin/tensorflow-gpu-install/#package-plan","title":"Package Plan","text":"environment location: /home/arl/miniconda3/envs/donkey
added / updated specs: - tensorflow-gpu==2.2.0
The following packages will be downloaded:
package | build\n---------------------------|-----------------\ncudatoolkit-10.1.243 | h6bb024c_0 347.4 MB\ncudnn-7.6.5 | cuda10.1_0 179.9 MB\ncupti-10.1.168 | 0 1.4 MB\ntensorflow-2.2.0 |gpu_py37h1a511ff_0 4 KB\ntensorflow-base-2.2.0 |gpu_py37h8a81be8_0 181.7 MB\ntensorflow-gpu-2.2.0 | h0d30ee6_0 3 KB\n------------------------------------------------------------\n Total: 710.4 MB\n
The following NEW packages will be INSTALLED:
cudatoolkit pkgs/main/linux-64::cudatoolkit-10.1.243-h6bb024c_0 cudnn pkgs/main/linux-64::cudnn-7.6.5-cuda10.1_0 cupti pkgs/main/linux-64::cupti-10.1.168-0 tensorflow-gpu pkgs/main/linux-64::tensorflow-gpu-2.2.0-h0d30ee6_0
The following packages will be DOWNGRADED:
_tflow_select 2.3.0-mkl --> 2.1.0-gpu tensorflow 2.2.0-mkl_py37h6e9ce2d_0 --> 2.2.0-gpu_py37h1a511ff_0 tensorflow-base 2.2.0-mkl_py37hd506778_0 --> 2.2.0-gpu_py37h8a81be8_0
Proceed ([y]/n)? Y
Downloading and Extracting Packages tensorflow-base-2.2. | 181.7 MB | ################################################################################################################################################################ | 100% cudnn-7.6.5 | 179.9 MB | ################################################################################################################################################################ | 100% cupti-10.1.168 | 1.4 MB | ################################################################################################################################################################ | 100% tensorflow-2.2.0 | 4 KB | ################################################################################################################################################################ | 100% tensorflow-gpu-2.2.0 | 3 KB | ################################################################################################################################################################ | 100% cudatoolkit-10.1.243 | 347.4 MB | ################################################################################################################################################################ | 100% Preparing transaction: done Verifying transaction: done Executing transaction: done
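Once the conda transaction finishes, confirm that TensorFlow can see the GPU before you launch a long Donkey Car training job. This is a minimal check for TensorFlow 2.x installed in the donkey environment.
import tensorflow as tf\n\n# Lists the physical GPUs visible to TensorFlow; an empty list means CPU-only training\ngpus = tf.config.list_physical_devices('GPU')\nprint('TensorFlow version:', tf.__version__)\nprint('GPUs visible to TensorFlow:', gpus)\n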
"},{"location":"events/","title":"Introduction to AI Racing League Events","text":""},{"location":"events/#selecting-a-venue","title":"Selecting a Venue","text":"The area should be at least a 24'x'36 room. A school gymnasium is ideal. Only about 1/2 of a full-size basketball court works well.
"},{"location":"events/#getting-a-track","title":"Getting a Track","text":"You should have a track that can support multiple teams driving at the same time.
See our Setup Track Options for details on how to create or purchase a track.
"},{"location":"events/#learning-stations","title":"Learning Stations","text":"Our events have 4-6 \"learning stations\" where participants can learn about the AI Racing League.
Typical tables include:
This table shows participants what hardware is used in the events.
Question: Have you ever used Jupyter Notebooks?
They are used by data scientists to manage data analysis projects.
Drive data is a collection of images, each with the speed and steering direction that were recorded when the image was captured.
How would you use them to analyze your drive data?
"},{"location":"events/python/","title":"Python Table","text":"At this table, we usually have a single computer running a Python IDE. The users come up to the table and are invited to change a few lines of code and see what changes the cause.
A good example is to have a Raspberry Pi running Thonny that changes colors on an LED strip. We use a Raspberry Pi Pico to demonstrate this.
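Even a tiny MicroPython program gives participants something safe to tweak in Thonny. The sketch below is a minimal example; the GPIO number and delay are assumptions, so adjust them for the board and LEDs you actually bring.
from machine import Pin\nfrom utime import sleep\n\n# On the original Raspberry Pi Pico the onboard LED is GPIO 25 (adjust for your board)\nled = Pin(25, Pin.OUT)\n\n# Invite students to change the delay and watch the blink rate change\nwhile True:\n    led.toggle()\n    sleep(0.5)\n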
We also have some information cards that talk about why Python has become the preferred language of AI and Machine Learning.
"},{"location":"events/unix/","title":"UNIX Table","text":"Here is a set of suggested lesson plans to teach a 14-week college-level course built around the AI Racing League.
This course is designed as a multi-disciplinary course that will draw on subjects in computer science, electrical engineering and social science.
"},{"location":"lesson-plans/#background-and-motivation","title":"Background and Motivation","text":"The AI Racing League project aims to provide students with hands-on experience in the fields of machine learning, computer vision, and robotic control. The project will involve developing autonomous vehicles (Donkey Cars) capable of competing in a racing environment. This initiative is motivated by the increasing relevance of AI in various aspects of technology and the need for practical, real-world application skills in our graduates.
"},{"location":"lesson-plans/#learning-objectives","title":"Learning Objectives","text":"After taking this course, students will be able to design, build, and test autonomous racing vehicles using Donkey Cars. Students will learn to apply machine learning algorithms, computer vision techniques, and hardware control to enable these vehicles to navigate a racetrack autonomously. The objective is to prepare students for careers in AI and robotics by providing a comprehensive learning experience that includes both theory and practice.
"},{"location":"lesson-plans/#student-deliverables","title":"Student Deliverables","text":"Students will work in small teams of 3 to 6 students. Ideally, each team will have their own car. Students will be graded on their teamwork.
Each team will produce a GitHub repository of their team's work. The GitHub repositories will include their configuration parameters, Jupyter notebooks and documentation on any modifications they have made to the cars.
"},{"location":"lesson-plans/#equipment-required","title":"Equipment Required","text":"The first module is a baseline to get all students up to speed using our development tools.
Hands-On Lab: Get familiar with car hardware. Assemble a new car if needed.
"},{"location":"lesson-plans/#module-2-calibration","title":"Module 2: Calibration","text":"Calibration
"},{"location":"lesson-plans/#module-3-gathering-data","title":"Module 3: Gathering Data","text":"Gathering Data
"},{"location":"lesson-plans/#module-4-building-a-model","title":"Module 4: Building a Model","text":"Building a Model
"},{"location":"lesson-plans/#module-5-using-a-model-for-automatic-drive","title":"Module 5: Using a Model for Automatic Drive","text":"Driving with a Model
"},{"location":"lesson-plans/#module-6-analyzing-data","title":"Module 6: Analyzing Data","text":"Analyzing Data
"},{"location":"lesson-plans/#module-7-tuning-performance","title":"Module 7: Tuning Performance","text":"Tuning Performance
"},{"location":"lesson-plans/#module-8-managing-the-gpu","title":"Module 8: Managing the GPU","text":"GPU Configuration
"},{"location":"lesson-plans/#module-9-purchasing-a-gpu","title":"Module 9: Purchasing a GPU","text":"Using on-line tools to configure a low-cost GPU PC tuned for machine learning training workloads.
"},{"location":"lesson-plans/#module-10-advanced-topics","title":"Module 10: Advanced Topics","text":""},{"location":"lesson-plans/#3d-printing-a-chassis","title":"3D Printing a chassis","text":""},{"location":"lesson-plans/#adding-a-lidar","title":"Adding a LIDAR","text":""},{"location":"lesson-plans/#lowering-the-cost-of-the-donkey-car","title":"Lowering the Cost of the Donkey Car","text":"Differential drives
"},{"location":"lesson-plans/building-a-model/","title":"Building A Model","text":""},{"location":"lesson-plans/calibration/","title":"Donkey Car Calibration","text":"We will be following the same steps outlined in the Donkey Car Docs
To run calibration, connect an HDMI monitor to the Pi and then boot. You will also need a keyboard and mouse and a charged 7.2-volt battery for calibration.
We would like students to work in pairs to calibrate each car and record the configuration parameters in a Python configuration file. Some of the cars are in a plastic bin and these bins sometimes have a card with the right configuration parameters on them.
Our cars all use the PCA9685 I2C to PWM chip. We don't have the Donkey Car Hats.
Make sure the power LED on the PCA9685 is on. If not, check the power and ground connections to the Raspberry Pi SBC 40-pin connector.
Note
Some of the sample configuration files use an older format. Make sure you ONLY modify the configuration files that are generated with the command that generates the mycar
files.
There is an on/off switch on the Electronic Speed Control. Make sure it is in the ON position and that the motor battery is charged and connected.
When you turn the switch on you MUST hear the confirmation beep. If you don't hear it you need to check the battery and connections.
donkey calibrate --channel <your_steering_channel> --bus=<your_i2c_bus>\n
By default, our cars use channel 0 for the throttle and channel 1 for the steering. If the team that assembled the car switched the connections on the PWM card these may be reversed.
The key parameters to record are:
Throttle:
Steering
These should be placed online in a file that is associated with each car name or number.
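A hedged example of what the recorded calibration section of myconfig.py might look like is shown below. The parameter names follow the standard Donkey Car configuration, but every PWM number is a placeholder; replace them with the values you measured for your own car.
# Example calibration values only - measure and record your own car's numbers\nPCA9685_I2C_BUSNUM = 1        # I2C bus used by the PCA9685 board\nTHROTTLE_CHANNEL = 0          # our cars use channel 0 for the throttle\nSTEERING_CHANNEL = 1          # and channel 1 for the steering servo\n\nTHROTTLE_FORWARD_PWM = 420    # full forward\nTHROTTLE_STOPPED_PWM = 370    # ESC neutral (the car should not move)\nTHROTTLE_REVERSE_PWM = 320    # full reverse\n\nSTEERING_LEFT_PWM = 460       # full left\nSTEERING_RIGHT_PWM = 290      # full right\n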
"},{"location":"lesson-plans/calibration/#challenges-setting-reverse","title":"Challenges Setting Reverse","text":"Reverse on RC cars is a little tricky because the ESC must receive the following sequence:
To calibrate a reverse PWM setting...
Use the same technique as above
Here is the basic command to edit the config file:
nano ~/mycar/myconfig.py\n
The Raspberry Pis also have Python IDEs installed, so you can also use those editors. This lesson puts a focus on getting students familiar with the parts of the Donkey Car including:
Generating architecture diagrams can be a slow and painful process, but ChatGPT can help. Here is a process for generating a block diagram using ChatGPT. We can use the Mermaid Live program to view the diagram. ChatGPT also has a diagram
plugin that will render the image within ChatGPT.
Create a mermaid block diagram for the structure of a Donkey Car. Make sure to label all the components and their connection types. Return the results in raw mermaid format. Include the following structures:\n\n1. The main computer (Raspberry Pi or NVIDIA Nano)\n2. The main computer has an I2C connection to the Servo Controller board\n3. The Servo Controller connects to the Steering Servo and the Electronic Speed Controller (ESC) via PWM connections\n4. The ESC connects to the RC Car Battery\n5. THE ESC connects to the Car Motor\n6. The 5v battery powers the main computer\n7. There are two batteries: a 5V battery and a 7V RC Car Battery.\n8. The Servo controller is a PCA9685 board.\n
"},{"location":"lesson-plans/car-structure/#chatgpt-response","title":"ChatGPT Response","text":"This code will create a block diagram where:\n\nThe \"Main Computer\" (A) connects to the \"Servo Controller\" (B) via an I2C connection.\nThe \"Servo Controller\" (B) connects to both the \"Steering Servo\" (C) and \"Electronic Speed Controller\" (D) via PWM connections.\nThe \"Electronic Speed Controller\" (D) connects to both the \"RC Car Battery\" (E) and the \"Car Motor\" (F).\nThe \"5V Battery\" (G) connects to the \"Main Computer\" (A) as well as to the \"Positive Connection\" (H) and \"Negative Connection\" (I).\nThe \"RC Car Battery\" (E) also connects to both the \"Positive Connection\" (H) and \"Negative Connection\" (I).\nPlease note that this is a simplified diagram and does not necessarily represent the physical arrangement of the components in an actual Donkey Car setup. This diagram should be taken as an abstract representation of how the components interact.\n
"},{"location":"lesson-plans/car-structure/#mermaid-diagram","title":"Mermaid Diagram","text":"graph TB\n A[\"Main Computer<br>Raspberry Pi or NVIDIA Nano\"]\n B[\"Servo Controller<br>PCA9685 board\"]\n C[\"Steering Servo\"]\n D[\"Electronic Speed Controller<br>(ESC)\"]\n E[\"RC Car Battery (7V)\"]\n F[\"Car Motor\"]\n G[\"5V Battery\"]\n\n A --I2C--> B\n B --PWM--> C\n B --PWM--> D\n D --- E\n D --- F\n G --- A\n
"},{"location":"lesson-plans/car-structure/joystick/","title":"The F710 Joystick","text":""},{"location":"lesson-plans/car-structure/joystick/#joystick-options","title":"Joystick Options","text":""},{"location":"lesson-plans/car-structure/joystick/#the-logitech-f710","title":"The Logitech F710","text":""},{"location":"lesson-plans/car-structure/joystick/#diagnostics","title":"Diagnostics","text":""},{"location":"lesson-plans/car-structure/joystick/#debugging","title":"Debugging","text":"Adding a Joystick to Your Donkey Car
"},{"location":"lesson-plans/computer-vision/","title":"AI Racing League Computer Vision Table","text":"Raspberry Pi and the NVIDIA Nano are popular systems for demonstrating various computer vision applications due to their affordability and flexibility.
"},{"location":"lesson-plans/computer-vision/#requirements","title":"Requirements","text":"For these lessons, you just need a Raspberry Pi (or Nano) and the attached Camera that we use for all our cars.
"},{"location":"lesson-plans/computer-vision/#sample-labs","title":"Sample Labs","text":"Here are several demos we show to high school students using OpenCV and Raspberry Pi:
"},{"location":"lesson-plans/computer-vision/#face-detection-and-recognition","title":"Face Detection and Recognition","text":"We can use the built-in Haar cascades in OpenCV for face and eyes detection. For the face recognition part, you can use either OpenCV's built-in algorithms or deep learning-based models such as FaceNet.
"},{"location":"lesson-plans/computer-vision/#object-detection","title":"Object Detection","text":"Use pre-trained models from OpenCV's DNN module or TensorFlow's model zoo to recognize multiple objects in real-time.
"},{"location":"lesson-plans/computer-vision/#optical-character-recognition-ocr","title":"Optical Character Recognition (OCR):","text":"Combine OpenCV for image processing and Tesseract for character recognition to demonstrate how a device can read text from images or real-time video feed.
"},{"location":"lesson-plans/computer-vision/#color-detection","title":"Color Detection","text":"Write a simple program that detects specific colors in real-time. This can be used as a stepping stone to more advanced object-tracking projects.
We can also combine this lab with our Raspberry Pi Pico color detection sensors.
"},{"location":"lesson-plans/computer-vision/#motion-detection-and-tracking","title":"Motion Detection and Tracking","text":"Implement a simple surveillance system that detects motion and tracks moving objects. This can be a good introduction to video analysis.
"},{"location":"lesson-plans/computer-vision/#augmented-reality","title":"Augmented Reality","text":"Show how to overlay graphics on a real-time video feed based on detected features. For example, you can use OpenCV's capabilities for feature detection (like SIFT, SURF, ORB) and perspective transformation to overlay 3D objects on a marker.
"},{"location":"lesson-plans/computer-vision/#hand-gesture-recognition","title":"Hand Gesture Recognition","text":"Create a program that recognizes hand gestures and associates them with commands. You could use this to control a game or navigate a user interface.
"},{"location":"lesson-plans/computer-vision/#license-plate-recognition","title":"License Plate Recognition","text":"You can implement a simple Automatic Number Plate Recognition (ANPR) system using image processing techniques in OpenCV and OCR.
"},{"location":"lesson-plans/computer-vision/#qr-code-and-barcode-scanner","title":"QR Code and Barcode Scanner","text":"Use OpenCV for real-time detection and decoding of QR codes and bar codes.
Most of these demonstrations will require additional Python libraries beyond just OpenCV, like NumPy, Pillow, or TensorFlow.
For hardware, you will need the Raspberry Pi 3 with 4GB RAM, a camera module, and potentially additional items like a monitor, mouse, and keyboard for a fully interactive setup.
"},{"location":"lesson-plans/computer-vision/face-filters/","title":"OpenCV Lab: Fun Face Filters**","text":"Objective: Learn to detect faces using OpenCV and apply fun filters like cat whiskers and ears.
"},{"location":"lesson-plans/computer-vision/face-filters/#tools-libraries","title":"Tools & Libraries","text":"Haar Cascades are a type of machine learning object detection method used to identify objects in images or video. In OpenCV, Haar Cascades are mainly used to detect faces, but they can also identify other objects like eyes, smiles, and more. There were named after Hungarian mathematician Alfr\u00e9d Haar who make key contributions to the mathematics of transformation of matrix data in the 1920s.
Imagine you have a magic magnifying glass that you move across a photo. Whenever this magnifying glass sees a face, it lights up! That's kind of what Haar Cascades does in OpenCV.
"},{"location":"lesson-plans/computer-vision/face-filters/#how-do-harr-cascades-work","title":"How Do Harr Cascades Work?","text":"Creating a Snapchat-like face filter using Haar Cascades involves two main steps:
Example Python Code:
```python import cv2
"},{"location":"lesson-plans/computer-vision/face-filters/#load-the-haar-cascade-for-face-detection","title":"Load the Haar cascade for face detection","text":"face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
"},{"location":"lesson-plans/computer-vision/face-filters/#read-the-image","title":"Read the image","text":"img = cv2.imread('your_photo.jpg') gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
"},{"location":"lesson-plans/computer-vision/face-filters/#detect-faces","title":"Detect faces","text":"faces = face_cascade.detectMultiScale(gray, 1.1, 4)
for (x, y, w, h) in faces: # Here you'd typically overlay your filter, e.g., cat ears at coordinates (x, y) cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2) # Drawing a rectangle around the detected face for now
cv2.imshow('Face Detection', img) cv2.waitKey()
Resources: - Haarcascades for face and eye detection (provided by OpenCV).
Steps:
Setup & Installation
Make sure you have Python and OpenCV installed.
bash pip install opencv-python
Face Detection
Before adding filters, students should understand face detection.
```python\nimport cv2\n\nface_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')\n\nimg = cv2.imread('path_to_image.jpg')\ngray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n\nfaces = face_cascade.detectMultiScale(gray, 1.1, 4)\nfor (x, y, w, h) in faces:\n cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)\n\ncv2.imshow('Detected Faces', img)\ncv2.waitKey()\n```\n
Designing the Filters
Use any graphic software (like GIMP or Photoshop) to design transparent PNG images of cat whiskers and ears.
Ensure the filter graphics are adjustable in size or make several versions to fit different face sizes.
Applying the Cat Ears Filter
Given the coordinates (x, y, w, h)
of the detected face:
```python ear_image = cv2.imread('path_to_ear_image.png', -1) # The -1 reads the alpha channel
Applying the Whiskers Filter
Similar to the cat ears, you can position whiskers on the cheeks using the (x, y, w, h)
of the face.
Real-time Filters Application
Capture video from the webcam and apply the filters in real-time.
```python cap = cv2.VideoCapture(0)
while True: ret, frame = cap.read() gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.1, 4)\nfor (x, y, w, h) in faces:\n # Apply cat ears and whiskers filter here\n\ncv2.imshow('Filters in Action', frame)\nif cv2.waitKey(1) & 0xFF == ord('q'):\n break\n
cap.release() cv2.destroyAllWindows() ```
Challenge & Extension
Assessment:
Note: This is a basic version and doesn't incorporate sophisticated technologies like deep learning which Snapchat uses. It's aimed at high school students to give a foundational understanding.
"},{"location":"lesson-plans/computer-vision/face-filters/#resize-the-cat-ear-image-to-fit-the-width-of-the-face","title":"Resize the cat ear image to fit the width of the face","text":"ear_width = w aspect_ratio = ear_image.shape[1] / float(ear_image.shape[0]) ear_height = int(ear_width / aspect_ratio) ear_image = cv2.resize(ear_image, (ear_width, ear_height))
"},{"location":"lesson-plans/computer-vision/face-filters/#region-where-we-want-to-place-the-cat-ears-taking-care-not-to-exceed-image-dimensions","title":"Region where we want to place the cat ears (taking care not to exceed image dimensions)","text":"for c in range(0, 3): img[y:y+ear_image.shape[0], x:x+ear_image.shape[1], c] = img[y:y+ear_image.shape[0], x:x+ear_image.shape[1], c] * (1 - ear_image[:, :, 3] / 255.0) + ear_image[:, :, c] * (ear_image[:, :, 3] / 255.0) ```
"},{"location":"lesson-plans/computer-vision/open-cv/","title":"OpenCV Labs","text":"Lesson Plan: Introduction to OpenCV in Python
"},{"location":"lesson-plans/computer-vision/open-cv/#setup","title":"Setup","text":"Students should have access to a computer with a webcam and Python installed. We use Rasberry Pi or NIVIDA Nanos with cameras.
Objective: By the end of the lesson, students will be able to explain the purpose and basic functions of OpenCV, and implement some basic image processing tasks.
"},{"location":"lesson-plans/computer-vision/open-cv/#1-introduction-10-minutes","title":"1. Introduction (10 minutes)","text":"Demo: How to install OpenCV using pip
.
pip install opencv-python
Quick navigation of the OpenCV documentation to encourage self-learning.
Objective: The goal of this lab is to have students use OpenCV to create basic filters for a live webcam feed.
Exercise:
In these lessons, we learn how to write some basic data analysis Python programs.
In the real world, you are often given some data and asked, \"Tell me what insights you can give me about this data.\" This forms the basis of a field of data science called \"EDA\", for \"Exploratory Data Analysis\".
For example, say you are on a project to get cars to drive using machine learning. What insights can you derive from the sample images and driving data?
"},{"location":"lesson-plans/data-analysis/#numpy-profiler","title":"Numpy Profiler","text":"TBD
"},{"location":"lesson-plans/data-analysis/01-intro/","title":"AI Racing League Data Analysis","text":""},{"location":"lesson-plans/data-analysis/01-intro/#why-analysis","title":"Why Analysis?","text":"Data analysis is a core part of building accurate models that create high quality predictions. Here are some sample analytics tasks:
import os\nfrom IPython.display import Image\n\nimage_dir = \"/home/arl/mycar/data/dans-msp/data/images\"\n\nfiles = os.listdir(image_dir)\n# last basement image is 1710\nn = 1710\nfile_n = files[n]\nfile_2 = files[n+1]\nprint(n, file_n)\nfile_path1 = image_dir + '/' + file_n\nfile_path2 = image_dir + '/' + file_2\ni1 = Image(file_path1)\ni2 = Image(file_path2)\nprint(n+1, file_2)\n\ndisplay(i1, i2)\n
"},{"location":"lesson-plans/data-analysis/02-listing-files/","title":"Working with Files","text":""},{"location":"lesson-plans/data-analysis/02-listing-files/#listing-files-with-the-os-library","title":"Listing Files with the OS library","text":"Python provides a powerful library for working with Operating System resources like file systems. We will start out with the listdir()
function that lists the files in a directory.
Here is a program that lists all the tub files in our mycar/data directory:
import os\ndata_dir = \"/home/arl/mycar/data/\"\n\nfiles = os.listdir(data_dir)\nfor file in files:\n print(file)\n
returns:
a-test-tub\nmy-test-tub\njunk-tub\nproduction-run\ntub-47\n
"},{"location":"lesson-plans/data-analysis/02-listing-files/#listing-files-in-a","title":"Listing Files in a","text":"import os\nfrom IPython.display import Image\n\nimage_dir = \"/home/arl/mycar/data/dans-msp/data/images\"\n\nfiles = os.listdir(image_dir)\n# last basement image is 1710\nn = 1710\nfile_n = files[n]\nfile_2 = files[n+1]\nprint(n, file_n)\nfile_path1 = image_dir + '/' + file_n\nfile_path2 = image_dir + '/' + file_2\ni1 = Image(file_path1)\ni2 = Image(file_path2)\nprint(n+1, file_2)\n\ndisplay(i1, i2)\n
"},{"location":"lesson-plans/data-analysis/02-listing-files/#list-random-files-in-images-directory","title":"List Random Files In Images Directory","text":"import os\nimport matplotlib.pyplot as plt\nfrom IPython.display import Image\n\nimage_dir = \"/home/arl/mycar/data/dans-msp/data/images\"\nimage_file_name_list = os.listdir(image_dir)\n\nfor index in range(0,10):\n file_name = image_file_name_list[index]\n print(file_name)\n
returns:
10263_cam_image_array_.jpg\n6257_cam_image_array_.jpg\n15248_cam_image_array_.jpg\n3916_cam_image_array_.jpg\n5223_cam_image_array_.jpg\n15765_cam_image_array_.jpg\n8437_cam_image_array_.jpg\n5871_cam_image_array_.jpg\n16681_cam_image_array_.jpg\n15281_cam_image_array_.jpg\n
Note that the files are not in any specific order.
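The loop above simply takes the first ten entries in whatever order listdir() returns them. If you want a genuinely random sample, as the next section's title suggests, the standard random module can pick one; this is a minimal sketch.
import os\nimport random\n\nimage_dir = '/home/arl/mycar/data/dans-msp/data/images'\nimage_file_name_list = os.listdir(image_dir)\n\n# Pick 10 distinct file names at random\nfor file_name in random.sample(image_file_name_list, 10):\n    print(file_name)\n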
"},{"location":"lesson-plans/data-analysis/02-listing-files/#show-images-for-10-random-files","title":"Show Images for 10 Random Files","text":"import glob\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\n%matplotlib inline\n\nimages = []\nfor img_path in glob.glob('/home/arl/mycar/data/dans-msp/data/images/*.jpg'):\n images.append(mpimg.imread(img_path))\n\nplt.figure(figsize=(20,10))\ncolumns = 5\nfor i, image in enumerate(images):\n plt.subplot(len(images) / columns + 1, columns, i + 1)\n plt.imshow(image)\n
"},{"location":"lesson-plans/data-analysis/02-listing-files/#sorting-images-by-file-name","title":"Sorting Images By File Name","text":"We can add an additional step if we want to sort the images by the file name:
image_file_name_list = os.listdir(image_dir)\nimage_file_name_list.sort()\n
"},{"location":"lesson-plans/data-analysis/02-listing-files/#return-images-based-on-creation-date","title":"Return Images Based On Creation Date","text":"import os\nimport matplotlib.pyplot as plt\nfrom IPython.display import Image\nfrom pathlib import Path\n\nimage_dir = \"/home/arl/mycar/data/dans-msp/data/images\"\npaths = sorted(Path(image_dir).iterdir(), key=os.path.getmtime)\n\nimages = []\n# just get the first 10 items in the list of images\nfor path in paths[:10]:\n print(path)\n images.append(mpimg.imread(path))\n\nplt.figure(figsize=(20,10))\ncolumns = 5\nfor i, image in enumerate(images):\n plt.subplot(len(images) / columns + 1, columns, i + 1)\n plt.imshow(image)\n
"},{"location":"lesson-plans/data-analysis/03-viewing-images/","title":"Viewing Images","text":""},{"location":"lesson-plans/data-analysis/03-viewing-images/#viewing-a-single-jpg-image","title":"Viewing a Single JPG Image","text":""},{"location":"lesson-plans/data-analysis/04-viewing-catalog-files/","title":"Viewing Catalog Files","text":"The data about each image, sometimes called the image \"metadata\", is stored in a file that ends with the file extension .catalog. If you open these files, you will see a simple layout that looks like the following:
{'_index': 16000, '_session_id': '21-07-20_0', '_timestamp_ms': 1626797545360, 'cam/image_array': '16000_cam_image_array_.jpg', 'user/angle': 1.0, 'user/mode': 'user', 'user/throttle': 0.5}\n{'_index': 16001, '_session_id': '21-07-20_0', '_timestamp_ms': 1626797545411, 'cam/image_array': '16001_cam_image_array_.jpg', 'user/angle': 0.37, 'user/mode': 'user', 'user/throttle': 0.7}\n{'_index': 16002, '_session_id': '21-07-20_0', '_timestamp_ms': 1626797545460, 'cam/image_array': '16002_cam_image_array_.jpg', 'user/angle': -0.23, 'user/mode': 'user', 'user/throttle': 0.25}\n
This file consists of multiple lines, each line starts and ends with curly braces \"{\" and \"}\". Within these curly braces are a set of key-value pairs where the label is a string in single quotes followed by a colon, the value and a comma. This file uses newlines to separate records and a JSON object format within each single line. Note this is NOT a full JSON file format so you can't just use a standard JSON library to read the catalog file.
Here is that format with each key and value on a separate line to make the record easier to read.
{\n '_index': 16000,\n '_session_id': '21-07-20_0',\n '_timestamp_ms': 1626797545360,\n 'cam/image_array':\n '16000_cam_image_array_.jpg',\n 'user/angle': 1.0,\n 'user/mode': 'user',\n 'user/throttle': 0.5\n}\n
This format is very similar to a JSON file with the following exceptions:
Here is what a properly formatted JSON file would look like:
{\n \"driveData\": [\n {\n \"_index\": 16000,\n \"_session_id\": \"21-07-20_0\",\n \"_timestamp_ms\": 1626797545360,\n \"cam/image_array\": \"16000_cam_image_array_.jpg\",\n \"user/angle\": 1.0,\n \"user/mode\": \"user\",\n \"user/throttle\": 0.5\n },\n {\n \"_index\": 16001,\n \"_session_id\": \"21-07-20_0\",\n \"_timestamp_ms\": 1626797545411,\n \"cam/image_array\": \"16001_cam_image_array_.jpg\",\n \"user/angle\": 0.37,\n \"user/mode\": \"user\",\n \"user/throttle\": 0.70\n },\n {\n \"_index\": 16002,\n \"_session_id\": \"21-07-20_0\",\n \"_timestamp_ms\": 1626797545460,\n \"cam/image_array\": \"16002_cam_image_array_.jpg\",\n \"user/angle\": -0.23,\n \"user/mode\": \"user\",\n \"user/throttle\": 0.25\n }\n ]\n}\n
Here is a sample JSON file reader that would read this file:
# program to read a DonkeyCar Catalog File\nimport os,json\n\n# this program assumes that test.json is in the same directory as this script\n# get the direcotry that this script is running\nscript_dir = os.path.dirname(__file__)\n# get a relative path to the script dir\npath_to_json_file = script_dir + '/test.json'\n\n# Open the JSON test file for read only\nf = open(path_to_json_file, 'r')\n\n# returns JSON object as a dictionary\ndata = json.load(f)\n\n# Iterating through the json file for the items in the drive data dictionary\nfor i in data['driveData']:\n print(i)\n\n# Close the JSON file\nf.close()\n
Note that the open() function reads the file with the \"r\" option which indicates read-only mode.
Although this format would make reading the file simple, there are some disadvantages. The key is that individual lines in the new catalog format are atomic units of storage and the files can be easily split and joined using line-by-line tools.
"},{"location":"lesson-plans/data-analysis/04-viewing-catalog-files/#reading-catalog-lines-to-json-objects","title":"Reading Catalog Lines to JSON Objects","text":"To read in the values of the catalog file we will open using a line-oriented data structure assuming that there is a newline at the end of each record. We can then just the json library's loads()
function which will convert each line to a JSON object.
Sample Objects.json file:
{\"name\":\"Ann\",\"age\":15}\n{\"name\":\"Peggy\",\"age\":16}\n{\"name\":\"Rima\",\"age\":13}\n{\"name\":\"Sue\",\"age\":14}\n
import os, json\njson_file = \"objects.json\"\nscript_dir = os.path.dirname(__file__)\n\n# get a relative path to the script dir\npath_to_catalog_file = script_dir + '/' + json_file\n\nf = open(path_to_catalog_file)\nlines = f.readlines()\n\ncount = 0\n# Convert each line to a JSON object\nfor line in lines:\n    line_in_json = json.loads(line)\n    count += 1\n    print(count, ' ', end='')\n    # the result is a Python dictionary\n    print(line_in_json)\n    print(\"Name:\", line_in_json[\"name\"])\n    print(\"Age:\", line_in_json[\"age\"])\n
Returns
1 {'name': 'Ann', 'age': 15}\nName: Ann\nAge: 15\n2 {'name': 'Peggy', 'age': 16}\nName: Peggy\nAge: 16\n3 {'name': 'Rima', 'age': 13}\nName: Rima\nAge: 13\n4 {'name': 'Sue', 'age': 14}\nName: Sue\nAge: 14\n
"},{"location":"lesson-plans/data-analysis/04-viewing-catalog-files/#sample-catalog-reader-program","title":"Sample CataLog Reader Program","text":"# program to read a DonkeyCar Catalog File\nimport os,json\n\n# this program assumes that test.catalog is in the same directory as this script\n# get the direcotry that this script is running\nscript_dir = os.path.dirname(__file__)\n\n# get a relative path to the script dir\npath_to_catalog_file = script_dir + '/test.catalog'\n\nf = open(path_to_catalog_file)\nlines = f.readlines()\n\ncount = 0\n# Convert each line to a JSON object\nfor line in lines:\n # each line as a JSON dictionary object\n j = json.loads(line)\n count += 1\n print('\\n\\nline:', count)\n # print(j)\n print(\"Index:\", j[\"_index\"] )\n print(\"Session:\", j[\"_session_id\"] )\n print(\"Timestamp:\", j[\"_timestamp_ms\"] )\n print(\"cam/image_array:\", j[\"cam/image_array\"] )\n print(\"user/angle:\", j[\"user/angle\"] )\n print(\"user/mode:\", j[\"user/mode\"] )\n print(\"user/throttle:\", j[\"user/throttle\"] )\n
returns:
line: 1\nIndex: 16000\nSession: 21-07-20_0\nTimestamp: 1626797545360\ncam/image_array: 16000_cam_image_array_.jpg\nuser/angle: 1.0\nuser/mode: user\nuser/throttle: 0.31\n\n\nline: 2\nIndex: 16001\nSession: 21-07-20_0\nTimestamp: 1626797545411\ncam/image_array: 16001_cam_image_array_.jpg\nuser/angle: 0.3715323343607898\nuser/mode: user\nuser/throttle: 0.31\n\n\nline: 3\nIndex: 16002\nSession: 21-07-20_0\nTimestamp: 1626797545460\ncam/image_array: 16002_cam_image_array_.jpg\nuser/angle: 0.2371288186284982\nuser/mode: user\nuser/throttle: 0.31\n
"},{"location":"lesson-plans/data-analysis/04-viewing-catalog-files/#reference","title":"Reference","text":"The Python class that creates version 2 of the catalog files is here
"},{"location":"lesson-plans/data-analysis/05-catalog-statistics/","title":"Catalog Statistics","text":"Now that we know how to reach each item in the tub catalog, we can now do some simple statistics on this data. For example we can calculate the average throttle and steering angle and create some plots of the distribution of these values.
"},{"location":"lesson-plans/data-analysis/05-catalog-statistics/#calculating-average-throttle-and-angle","title":"Calculating Average Throttle and Angle","text":"When we drive around the track each image records both the throttle and steering values at the instant the image was taken by the camera. Although the values sent to the Electronic Speed Controller (ESC) and the servo are unique to every car, instead we store values that have been converted to a range between 0 and 1. Both these values are Normalized to values of between 0 and 1.
# program to read a DonkeyCar Catalog File\nimport os,json\n\n# this program assumes that test.catalog is in the same directory as this script\n# get the direcotry that this script is running\nscript_dir = os.path.dirname(__file__)\n\n# get a relative path to the script dir\npath_to_catalog_file = script_dir + '/test.catalog'\n\nf = open(path_to_catalog_file)\nlines = f.readlines()\n# create a dictionary object\ndict = {}\ncount = 0\ntotal_throttle = 0\ntotal_angle = 0\n# Add each line to our dictionary\nfor line in lines:\n # each line as a JSON dictionary object\n j = json.loads(line)\n count += 1\n dict.update(json.loads(line))\n total_throttle += j[\"user/throttle\"]\n total_angle += j[\"user/angle\"]\nprint(count, \"items in dictionary\")\nprint(\"Average throttle: \", round(total_throttle/count, 3))\nprint(\"Average angle:\", round(total_angle/count, 3))\n
returns:
100 items in dictionary\nAverage throttle: 0.31\nAverage angle: 0.53\n
These values look reasonable. Our forward throttle should be between 0 and 1, and if we steered equally left and right the average angle would be near 0. Because we drive around the track mostly in a single direction, the average angle is offset from center, which is why it comes out near 0.5 here.
"},{"location":"lesson-plans/data-analysis/05-catalog-statistics/#viewing-min-and-max-values","title":"Viewing Min and Max values","text":"# program to read a DonkeyCar Catalog File\nimport os,json\n\n# this program assumes that test.catalog is in the same directory as this script\n# get the direcotry that this script is running\nscript_dir = os.path.dirname(__file__)\n\n# get a relative path to the script dir\npath_to_catalog_file = script_dir + '/test.catalog'\n\nf = open(path_to_catalog_file)\nlines = f.readlines()\n# create a dictionary object\ndict = {}\ncount = 0\n\ntotal_throttle = 0\nmin_throttle = 1\nmax_throttle = 0\n\ntotal_angle = 0\nmin_angle = 1\nmax_angle = 0\n\n# Add each line to our dictionary\nfor line in lines:\n # each line as a JSON dictionary object\n j = json.loads(line)\n count += 1\n dict.update(json.loads(line))\n total_throttle += j[\"user/throttle\"]\n total_angle += j[\"user/angle\"]\n\n # check for min and max throttle\n if j[\"user/throttle\"] < min_throttle:\n min_throttle = j[\"user/throttle\"]\n if j[\"user/throttle\"] > max_throttle:\n max_throttle = j[\"user/throttle\"]\n\n if j[\"user/angle\"] < min_angle:\n min_angle = j[\"user/angle\"]\n if j[\"user/angle\"] > max_angle:\n max_angle = j[\"user/angle\"]\n\nprint('\\n')\nprint(count, \"items in catalog\")\n\nprint(\"Min throttle:\", round(min_throttle, 3))\nprint(\"Average throttle: \", round(total_throttle/count, 3))\nprint(\"Max throttle:\", round(max_throttle, 3))\n\nprint(\"Min angle:\", round(min_throttle, 3))\nprint(\"Average angle:\", round(total_angle/count, 3))\nprint(\"Max angle:\", round(max_angle, 3))\nprint('\\n')\n
returns:
100 items in catalog\nMin throttle: -0.31\nAverage throttle: 0.308\nMax throttle: 0.5\nMin angle: -0.31\nAverage angle: 0.534\nMax angle: 1.0\n
"},{"location":"lesson-plans/data-analysis/05-catalog-statistics/#converting-the-dictionary-to-a-dataframe","title":"Converting the Dictionary to a DataFrame","text":"df = pd.DataFrame(list(dict.items()))\nprint(df)\n
returns
0 1\n0 _index 16099\n1 _session_id 21-07-20_1\n2 _timestamp_ms 1626797880229\n3 cam/image_array 16099_cam_image_array_.jpg\n4 user/angle 0.56914\n5 user/mode user\n6 user/throttle 0.0632649\n
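To see how the steering values are spread out, the angles read from the catalog can be plotted as a histogram. The sketch below is a minimal example that re-reads test.catalog and assumes matplotlib is installed.
import json\nimport matplotlib.pyplot as plt\n\nangles = []\nwith open('test.catalog') as f:\n    for line in f:\n        record = json.loads(line)\n        angles.append(record['user/angle'])\n\n# 20 bins across the steering range gives a quick view of any left/right bias\nplt.hist(angles, bins=20)\nplt.xlabel('user/angle')\nplt.ylabel('count')\nplt.title('Steering angle distribution')\nplt.show()\n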
"},{"location":"lesson-plans/data-analysis/05-catalog-statistics/#plotting-steering-distributions","title":"Plotting Steering Distributions","text":""},{"location":"lesson-plans/data-analysis/06-cleaning-datasets/","title":"Cleaning Datasets","text":"Up until now, we have only been viewing metrics and files. These are all read-only operations. Now we will write our first programs that change the tub datasets.
"},{"location":"lesson-plans/data-analysis/06-cleaning-datasets/#splitting-datasets","title":"Splitting Datasets","text":"In this lab we will assume that we want to break our data into two distinct subsets and place them in different \"tubs\", which are just directories that contain both the catalogs and images for a dataset.
You can begin by taking a single dataset in a tub and then duplicating that tub. You can then selectively remove the data from the two tubs to effectively split the tubs.
The UNIX shell command to copy an entire directory is the \"cp\" command with the \"-r\" option for recursive copy.
cp -r from-dir to-dir\n
You can also add the \"-v\" option to see what files are being copied.
"},{"location":"lesson-plans/driving-with-a-model/","title":"Driving with A Model","text":""},{"location":"lesson-plans/gathering-data/","title":"Gathering Data","text":""},{"location":"lesson-plans/gathering-data/#image-size","title":"Image Size","text":""},{"location":"lesson-plans/gathering-data/#transfer-rates-to-micro-sd-cards","title":"Transfer Rates to Micro-SD Cards","text":""},{"location":"lesson-plans/gathering-data/#tools-to-monitor-write-performance","title":"Tools to Monitor Write Performance","text":""},{"location":"lesson-plans/gathering-data/#tubs","title":"Tubs","text":""},{"location":"lesson-plans/gathering-data/#monitoring-disk-space","title":"Monitoring Disk Space","text":""},{"location":"lesson-plans/github/","title":"Learning GitHub","text":""},{"location":"lesson-plans/github/#lesson-objectives","title":"Lesson Objectives","text":"The goal of this lesson is to be able to:
In this lesson, students will understand how NVIDIA GPUs are set up and configured under Linux. We will review the process of adding a GPU to a PC chassis, power and cooling requirements, and the UNIX command-line tools to set up, configure and test the GPU within the PC chassis.
After the GPU is set up the students will be able to test the performance of the GPU in building a machine-learning model using the Donkey Car software.
"},{"location":"lesson-plans/gpu-config/#steps-in-gpu-config","title":"Steps in GPU Config","text":""},{"location":"lesson-plans/gpu-config/#purchasing-gpu-pc-hardware","title":"Purchasing GPU PC Hardware","text":"Using part configuraiton tools
https://pcpartpicker.com/list/mrFYPX
"},{"location":"lesson-plans/gpu-config/#motherboard","title":"Motherboard","text":""},{"location":"lesson-plans/gpu-config/#ram","title":"RAM","text":""},{"location":"lesson-plans/gpu-config/#disk-drive","title":"Disk Drive","text":""},{"location":"lesson-plans/gpu-config/#gpus","title":"GPUs","text":""},{"location":"lesson-plans/gpu-config/#powersupply","title":"PowerSupply","text":""},{"location":"lesson-plans/gpu-config/#options-wireless","title":"Options - wireless","text":""},{"location":"lesson-plans/gpu-config/#downloading-a-ubuntu-image","title":"Downloading a Ubuntu Image","text":""},{"location":"lesson-plans/gpu-config/#booting-linux","title":"Booting Linux","text":""},{"location":"lesson-plans/gpu-config/#configuring-linux","title":"Configuring Linux","text":""},{"location":"lesson-plans/gpu-config/#configuring-ssh","title":"Configuring SSH","text":""},{"location":"lesson-plans/gpu-config/#configuring-the-gpu","title":"Configuring the GPU","text":"Here are the key reference documents:
This is an optional section for students that will be assisting schools that are purchasing and configuring their own GPUs.
"},{"location":"lesson-plans/jupyter-notebooks/","title":"Introduction to Jupyter Notebooks and Basic Data Analysis","text":""},{"location":"lesson-plans/jupyter-notebooks/#learning-objectives","title":"Learning Objectives","text":"By the end of this lesson, students should be able to:
Lesson 1: Introduction to Jupyter Notebooks (2 hours)
1.1 Lecture: What is a Jupyter Notebook? (30 mins)
1.2 Hands-on Activity: Getting Started with Jupyter Notebook (90 mins)
Lesson 2: Data Loading and Inspection in Jupyter Notebooks (2 hours)
2.1 Lecture: Basics of pandas (30 mins)
2.2 Hands-on Activity: Loading and Inspecting Data (90 mins)
Lesson 3: Basic Data Analysis in Jupyter Notebooks (3 hours)
3.1 Lecture: Data Analysis with pandas (30 mins)
3.2 Hands-on Activity: Basic Data Analysis (150 mins)
Lesson 4: Data Analysis of Donkey Car Project Data (3 hours)
4.1 Recap: Overview of the Donkey Car project (30 mins)
4.2 Hands-on Activity: Donkey Car Data Analysis (150 mins)
Students' understanding will be evaluated through their participation in the hands-on activities and the insights they generate from the Donkey Car project's data analysis. An end-of-unit quiz will also be provided to assess their theoretical understanding and practical skills in Jupyter Notebooks and data analysis.
"},{"location":"lesson-plans/jupyter-notebooks/#extension-activities","title":"Extension Activities","text":"In this lesson we will learn about how to do object detection using a Raspberry Pi with a Camera.
Edje Electronics Github Repo
"},{"location":"lesson-plans/software-setup/","title":"Donkey Car Software Setup","text":""},{"location":"lesson-plans/tuning-performance/","title":"Tuning Performance","text":""},{"location":"lesson-plans/tuning-performance/#objectives","title":"Objectives","text":"At this point, your car may or may not drive around the track. Now we will learn how to adjust the configuration parameters to optimize drive performance.
"},{"location":"proposal-generator/","title":"Proposal Generator","text":"The following is a proposal generator using ChatGPT.
The prompt includes a template of the format and the content.
"},{"location":"proposal-generator/#prompt","title":"Prompt","text":"Create a document in raw markdown format.\n\nUse the following template:\n\n**Title:** Faculty Sponsored Design Project Proposal\n\n**Course Name:** EE 4951 \u2013 Senior Design\n\n**Semester:** Spring 2024\n\n**Project Name:** AI Racing League\n\n**Advisor name:** Dan McCreary\n\n**Email:** Dan.McCreary@gmail.com\n\n**Phone:** 612-625-7322\n\n## Project Motivation and Background\n\n[Single Paragraph]\n\n## Project Goal and Objective\n\n[Single Paragraph]\n\n## Project Specifications\n\n[Detailed Project Specification 1 to 2 Pages]\n\n## Expected Deliverables for Final Product\n\n[Single Paragraph and Numbered Bullet List]\n\n## Technical Skills Required\n\n[Numbered Bullet List of Technical Skills]\n\n## Expected Budget\n\n[Short Description of Costs - Bullet List]\n
"},{"location":"proposal-generator/proposal/","title":"Generative AI Project Description","text":"The following was mostly generated by ChatGPT using the content and template provided. See the dialog for details.
Title: Faculty-Sponsored Design Project Proposal
Course Name: EE 4951 \u2013 Senior Design
Semester: Spring 2024
Project Name: AI Racing League
Advisor name: Dan McCreary
Email: Dan.McCreary@gmail.com
Phone: 612-625-7322
"},{"location":"proposal-generator/proposal/#project-motivation-and-background","title":"Project Motivation and Background","text":"The AI Racing League project aims to provide students with hands-on experience in the fields of machine learning, computer vision, and robotic control. The project will involve developing autonomous vehicles (Donkey Cars) capable of competing in a racing environment. This initiative is motivated by the increasing relevance of AI in various aspects of technology and the need for practical, real-world application skills in our graduates.
"},{"location":"proposal-generator/proposal/#project-goal-and-objective","title":"Project Goal and Objective","text":"The primary goal of this project is to design, build, and test autonomous racing vehicles using Donkey Cars. Students will learn to apply machine learning algorithms, computer vision techniques, and hardware control to enable these vehicles to navigate a racetrack autonomously. The objective is to prepare students for careers in AI and robotics by providing a comprehensive learning experience that includes both theory and practice.
"},{"location":"proposal-generator/proposal/#project-specifications","title":"Project Specifications","text":"Equipment:
When you use a new GPU server at an AI Racing League event, there are many questions you need to answer about it.
Here is a handy quiz you can use to check your knowledge. The answers to the questions are listed below.
"},{"location":"quizzes/gpu-shell-commands/#questions","title":"Questions","text":"Question 1: How would you log into the GPU server using the secure shell program?
A) $ login arl@arl1.local
B) $ ssh arl@arl1.local
C) $ enter arl@arl1.local
D) $ connect arl@arl1.local
Question 2: How would you check the version of Ubuntu on the GPU server?
A) $ version -a
B) $ lsb_release -a
C) $ ubuntu_version -all
D) $ check_ubuntu -a
Question 3: What information does the lscpu
command provide?
A) It provides the CPU information.
B) It lists the amount of RAM on the server.
C) It checks the disk space.
D) It shows per-user disk usage.
Question 4: Which command is used to check the total RAM on the GPU server?
A) $ free -m
B) $ checkram -m
C) $ listram -m
D) $ raminfo -m
Question 5: What does the command df -h /
provide?
A) It lists per-user disk usage.
B) It adds a new GPU server user.
C) It checks the disk space.
D) It monitors the NVIDIA GPU.
Question 6: How can a new GPU server user be added?
A) $ adduser <username>
B) $ newuser <username>
C) $ createuser <username>
D) $ useradd <username>
Question 7: How can you give a user \"sudo\" rights?
A) $ sudo usermod -aG sudo <username>
B) $ sudo addrights -aG sudo <username>
C) $ sudo giverights -aG sudo <username>
D) $ sudo addrules -aG sudo <username>
Question 8: How can the hostname be changed?
A) $ sudo vi hostname
B) $ sudo edit hostname
C) $ sudo change hostname
D) $ sudo alter hostname
Question 9: What does the command watch -d -n 0.5 nvidia-smi
do?
A) It adds a new GPU server user.
B) It runs like the UNIX top command, but for the GPU.
C) It checks the version of Ubuntu.
D) It lists the CPU information.
Question 10: How would you check the NVIDIA GPU utilization?
A) $ checkgpu
B) $ nvidia-smi
C) $ gpu-utilization
D) $ utilization nvidia
Answers:
$ ssh arl@arl1.local
$ lsb_release -a
$ free -m
$ adduser <username>
$ sudo usermod -aG sudo <username>
$ sudo vi hostname
$ nvidia-smi
A school infobot is a battery-powered robot that wanders the halls of a school answering questions for students such as \"Are there any coding clubs in this school?\". They are built using the following components:
Note that the School Infobot is not designed to answer general questions, only questions about the school. The content is typically taken from the school's website, including the history of the school, the staff directory and the current calendar.
"},{"location":"school-infobot/#architecture","title":"Architecture","text":"Claud Recommended Architecture
"},{"location":"school-infobot/#streaming-tools","title":"Streaming Tools","text":"Streaming Tools
"},{"location":"school-infobot/claude-arch/","title":"Claude Architecture","text":"flowchart LR\n M[Microphone] -->|Audio Stream| W[Whisper.cpp]\n W -->|Text Stream| F[FastAPI Server]\n F -->|WebSocket| O[Ollama/Llama]\n O -->|Text Stream| F\n F -->|Text Stream| C[Coqui TTS]\n C -->|Audio Stream| S[Speaker]\n\n subgraph \"Raspberry Pi\"\n M\n W\n S\n end\n\n subgraph \"School GPU Server\"\n F\n O\n C\n end\n
"},{"location":"school-infobot/claude-arch/#tools-and-implementation","title":"Tools and Implementation","text":"Here are the recommended tools and implementation approach:
"},{"location":"school-infobot/claude-arch/#1-speech-to-text-on-raspberry-pi","title":"1. Speech-to-Text (On Raspberry Pi):","text":"whisper.cpp
for real-time speech recognitionwhisper-cpp-python
- Use `FastAPI` with WebSocket support for bidirectional streaming\n- `websockets` library for Python WebSocket client on the Raspberry Pi\n- This enables real-time streaming of both text and audio\n
"},{"location":"school-infobot/claude-arch/#3-llm-interface-gpu-server","title":"3. LLM Interface (GPU Server):","text":"- Use Ollama's Python SDK for streaming interface\n- Connect via WebSocket to enable streaming responses\n
"},{"location":"school-infobot/claude-arch/#4-text-to-speech-gpu-server","title":"4. Text-to-Speech (GPU Server):","text":"- Use Coqui TTS (formerly Mozilla TTS) for high-quality speech synthesis\n- It can run efficiently on the GPU\n- Supports streaming audio output\n
Key features of this implementation:
"},{"location":"school-infobot/claude-arch/#1-streaming-audio-processing","title":"### 1. Streaming Audio Processing:","text":"- Uses `sounddevice` for real-time audio capture\n- Processes audio in chunks for immediate response\n- Streams both input and output audio\n
"},{"location":"school-infobot/claude-arch/#2-websocket-communication","title":"2. WebSocket Communication:","text":"- Maintains persistent connection between Pi and GPU server\n- Enables bidirectional streaming of text and audio\n- Low latency for real-time conversation\n
"},{"location":"school-infobot/claude-arch/#3-asynchronous-design","title":"3. Asynchronous Design:","text":"- Uses `asyncio` for non-blocking I/O\n- Handles multiple concurrent operations efficiently\n- Maintains responsive user experience\n
"},{"location":"school-infobot/claude-arch/#python-library-installation","title":"Python Library Installation","text":"To implement this system:
"},{"location":"school-infobot/claude-arch/#on-the-raspberry-pi","title":"On the Raspberry Pi:","text":"pip install whisper-cpp-python websockets sounddevice numpy\n
"},{"location":"school-infobot/claude-arch/#on-the-gpu-server","title":"On the GPU Server:","text":"pip install fastapi uvicorn TTS ollama websockets\n
The system is designed to be modular, so you can easily:
# raspberry_pi/main.py\nimport asyncio\nimport websockets\nimport sounddevice as sd\nimport numpy as np\nfrom whisper_cpp import Whisper\nclass RobotClient:\ndef __init__(self):\nself.whisper = Whisper(\"base.en\")\nself.sample_rate = 16000\nself.ws = None\nasync def process_audio_stream(self, indata, frames, time, status):\nif status:\nprint(status)\n\n# Process audio chunk with Whisper\ntext = self.whisper.transcribe(indata)\nif text.strip():\n# Send to server\nawait self.ws.send(text)\nasync def start_streaming(self):\nself.ws = await websockets.connect('ws://gpu-server:8000/chat')\n\n# Start audio streaming\nwith sd.InputStream(callback=self.process_audio_stream,\nchannels=1,\nsamplerate=self.sample_rate):\nwhile True:\n# Receive audio response from server\naudio_data = await self.ws.recv()\n# Play audio\nsd.play(audio_data, self.sample_rate)\nsd.wait()\n
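For completeness, here is a rough sketch of what the matching GPU-server side could look like. This is not code from the project: it assumes the ollama Python client, Coqui TTS and the soundfile package are installed on the server, and that the /chat endpoint name matches the client sketch above.

# gpu_server/main.py -- hypothetical companion to the Raspberry Pi client sketch above
import io

import numpy as np
import ollama                       # Ollama Python client (pip install ollama)
import soundfile as sf              # assumed helper for packaging WAV bytes
from fastapi import FastAPI, WebSocket
from TTS.api import TTS             # Coqui TTS

app = FastAPI()
tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")  # example Coqui model

@app.websocket("/chat")
async def chat(websocket: WebSocket):
    await websocket.accept()
    while True:
        # Receive the transcribed question from the Raspberry Pi
        question = await websocket.receive_text()

        # Collect the streamed answer from the local Ollama server
        # (blocking call; a production version would run it in a worker thread)
        answer = ""
        for chunk in ollama.chat(
            model="llama3",          # assumed model name already pulled into Ollama
            messages=[{"role": "user", "content": question}],
            stream=True,
        ):
            answer += chunk["message"]["content"]

        # Synthesize speech on the GPU and send the raw WAV bytes back
        samples = tts.tts(answer)
        buf = io.BytesIO()
        sf.write(buf, np.asarray(samples), 22050, format="WAV")  # sample rate depends on the model
        await websocket.send_bytes(buf.getvalue())

A server like this would typically be started with uvicorn, for example: uvicorn main:app --host 0.0.0.0 --port 8000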
"},{"location":"school-infobot/streaming-tools/","title":"School Infobot Streaming Tools","text":""},{"location":"school-infobot/streaming-tools/#prompt","title":"Prompt","text":"Prompt
We are a group of software engineers that would like to get high-school students exposed to AI and LLMs. We would like to build a mobile robot that will wander the halls of schools and be able to interact with students via voice. We have a brand new Raspberry Pi with an AI Hat that can do model inference faster than before. The Raspberry Pi has a high-quality 360-degree mic and speakers. We also have a wireless connection to a school's GPU which is an NVIDIA RTX 3080 Ti with 12GB of RAM. It is running Ollama now. Our team has a preference for Python.
Here are the goals of the project:
What tools should I use to connect all these components together with a streaming interface?
"},{"location":"school-infobot/streaming-tools/#response","title":"Response","text":"Below is a high-level overview of how you can stitch all the components together for a real-time (i.e., \"streaming\") pipeline, along with recommended tools for each step. The approach will let your Raspberry Pi (with its AI Hat) continuously capture audio, convert speech to text, send text to Ollama/Llama running on the remote GPU, receive the response, convert it to speech, and then play it back in near real time.
"},{"location":"school-infobot/streaming-tools/#1-audio-capture-and-streaming-from-the-raspberry-pi","title":"1. Audio Capture and Streaming from the Raspberry Pi","text":"Tools and libraries:
PyAudio (Python) or PortAudio (C/C++).
GStreamer
If you want a lower-level pipeline approach (especially for complex audio routing or if you need to handle multiple input/output devices or audio filters), GStreamer provides a powerful framework with built-in elements for streaming audio over the network.
For example, you can capture audio from the Pi's 360-degree microphone and stream it out via a network protocol like RTP or TCP/UDP.
If your robot is already running ROS2, you could publish audio data to a topic and subscribe to it elsewhere. ROS2 can handle real-time communication via DDS. This might be overkill if you are not using ROS for other parts of your robot, but it's worth considering if you're building a larger robotics stack.
In many simpler cases, PyAudio plus a small Python script is enough to capture short audio segments (or continuous chunks) and feed them to your STT engine.
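A minimal capture loop along those lines might look like the sketch below. The chunk size is an arbitrary choice and send_to_stt() is a placeholder for whatever STT engine you wire in:

# Minimal PyAudio capture sketch: read audio chunks and hand them to an STT engine
import pyaudio

RATE = 16000                  # sample rate most STT models expect
CHUNK = 4000                  # quarter-second chunks at 16 kHz

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                input=True, frames_per_buffer=CHUNK)

try:
    while True:
        audio_bytes = stream.read(CHUNK, exception_on_overflow=False)
        # send_to_stt(audio_bytes)   # placeholder: feed Vosk/Whisper here
except KeyboardInterrupt:
    pass
finally:
    stream.stop_stream()
    stream.close()
    p.terminate()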
"},{"location":"school-infobot/streaming-tools/#2-speech-to-text-stt","title":"2. Speech-to-Text (STT)","text":"Local vs. Remote STT:
Recommended approach:
Your recognized text then needs to be sent to the remote server (running Ollama / Llama 3.3 on the NVIDIA RTX 3080 Ti).
Tools and libraries to connect to Ollama/Llama:
HTTP/REST API (with streaming)
WebSocket
If you want to implement your own streaming, you could stand up a small WebSocket server around Ollama. This is a bit more custom but allows you to send partial text to Llama, get partial responses back, and maintain an ongoing conversation state.
If Ollama supports or can be wrapped with a gRPC server, you could build a streaming gRPC endpoint.
Recommended approach:
Most out-of-the-box Ollama installations expose a local HTTP API that supports streaming tokens as they're generated. You can forward your text to that API, capture the streaming response in Python, and pass it on to the next step (TTS).
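As a rough illustration of that approach, the sketch below reads Ollama's streaming generate endpoint with the requests library. The host name and the model name are placeholders, and the default Ollama port of 11434 is assumed:

# Minimal sketch: stream tokens from a local Ollama server over HTTP
import json
import requests

OLLAMA_URL = "http://gpu-server:11434/api/generate"   # placeholder host name

def stream_answer(prompt, model="llama3"):            # model name is an assumption
    """Yield response tokens as Ollama generates them."""
    with requests.post(OLLAMA_URL,
                       json={"model": model, "prompt": prompt},
                       stream=True) as resp:
        for line in resp.iter_lines():
            if not line:
                continue
            chunk = json.loads(line)                   # one JSON object per line
            yield chunk.get("response", "")
            if chunk.get("done"):
                break

for token in stream_answer("What clubs does the school have?"):
    print(token, end="", flush=True)                   # hand each token to the TTS step instead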
Once you receive text back from Llama, you need to turn it into speech. This can either happen locally on the Pi's AI Hat or on the remote server (and then streamed back). Often, TTS is relatively lightweight, so local TTS is common if you have a decent model (e.g., a smaller Coqui TTS or espeak).
Tools and libraries:
Local TTS Engines
Remote TTS
If you want a higher-fidelity, neural TTS model that needs the GPU, you can run something like NVIDIA Riva TTS, Coqui TTS, or another neural TTS on the GPU.
If the Pi cannot handle it, implement a streaming TTS server on the GPU with gRPC or WebSocket and send back a real-time audio stream to the Pi for playback.
Playback on the Raspberry Pi Speaker
Finally, the synthesized audio must be played on the Pi's speakers.
Tools and libraries for audio playback:
alsasink
element on Raspberry Pi.SDL or other audio libraries if you want more advanced features.
Recommended \"Glue\" for Streaming Architecture
Putting this all together, you essentially have two major \"pipeline\" paths:
Here are some good ways to implement the streaming \"glue\":
"},{"location":"school-infobot/streaming-tools/#a-python-async-io-websocketsgrpc","title":"A. Python + Async I/O + WebSockets/gRPC","text":"PyAudio
to capture raw audio frames.Below is a simplified, conceptual outline using Python async and websockets (assuming local STT and TTS on the Pi, plus a remote Ollama server that offers a streaming HTTP SSE endpoint):
import asyncio\nimport websockets\nimport requests\n\nasync def audio_capture_stt():\n \"\"\"\n Continuously capture audio from mic, run STT (local or remote).\n As soon as partial transcripts are ready, yield them.\n \"\"\"\n while True:\n # Capture audio frames from PyAudio\n # Send frames to local STT or use local inference (Vosk/Whisper).\n # If partial transcript is ready, yield it.\n yield partial_transcript\n\nasync def ask_ollama(question):\n \"\"\"\n Send `question` to remote Ollama (HTTP SSE).\n Yield partial responses (tokens) as they arrive.\n \"\"\"\n # SSE or chunked transfer from Ollama\n response_stream = requests.get(\"http://GPU_SERVER:PORT/generate\", stream=True)\n for chunk in response_stream.iter_content(chunk_size=None):\n yield chunk # partial text token from Llama\n\nasync def text_to_speech_playback(text_chunk):\n \"\"\"\n Convert text_chunk to speech (e.g. local Coqui TTS) and play immediately.\n \"\"\"\n # local TTS\n audio_data = coqui_generate_audio(text_chunk)\n # Play with PyAudio or GStreamer sink\n play_audio(audio_data)\n\nasync def main():\n async for partial_text in audio_capture_stt():\n # Once a partial or final transcript is ready,\n # send it to the remote Llama server.\n async for partial_llama_answer in ask_ollama(partial_text):\n # As text arrives, TTS and play\n await text_to_speech_playback(partial_llama_answer)\n\nif __name__ == \"__main__\":\n asyncio.run(main())\n
This architecture ensures minimal latency because everything is happening in a streaming manner:
For Audio Capture and Playback:
For STT:
Local: Vosk / Whisper (possibly utilizing your AI Hat for acceleration).
For Connection to Ollama/Llama 3.3:
HTTP-based streaming (SSE or chunked responses) or a custom WebSocket/gRPC wrapper if needed.
For TTS:
Local: Coqui TTS, eSpeak, Festival, or any neural TTS engine that can run on the Pi's AI Hat.
For Orchestration:
Python with asyncio + websockets / requests for streaming.
With these pieces in place, you'll have a real-time pipeline where the Raspberry Pi captures audio, converts it to text, sends the text to Llama on the GPU, receives back a streaming text response, converts that response to speech, and plays it back to the student.
"},{"location":"school-infobot/streaming-tools/#workflow","title":"Workflow","text":"ChatGPT thought about it for 17 seconds.
Creating the interaction experience
I'm designing a pipeline for a mobile robot: speech to text on Raspberry Pi, text transformation for Ollama, and text to speech. Streaming interface options include WebSocket, gRPC, GStreamer, and ffmpeg for real-time audio transmission.
Evaluating options
I'm weighing Vosk, Google's TTS, Coqui TTS, espeak, NVIDIA's Riva, and cloud solutions for streaming STT to the GPU with Ollama. The Raspberry Pi's roles in audio capture and local or GPU-based processing are also considered.
Crafting the strategy
I'm mapping out the pipeline: real-time audio capture on Raspberry Pi, streaming STT to GPU with Ollama, and streaming text to speech for playback. WebSockets or gRPC are solutions I'm considering for data exchange.
Mapping data flow
I'm integrating the Raspberry Pi with PyAudio for mic audio, using a speech recognition service, and streaming text to the GPU server for Ollama's text generation, followed by TTS on the Pi or GPU.
Integrating tools
I'm pulling together GStreamer, WebSockets, gRPC, Docker, and containerization for streaming audio from the GPU server to the Raspberry Pi, ensuring smooth playback.
Integrating solutions
I'm steering towards using PyAudio or SoX for audio capture, streaming STT via Vosk or Google Speech, and passing text to Ollama through REST or WebSocket. For TTS, exploring eSpeak, gTTS, or Coqui TTS.
"},{"location":"setup/battery-options/","title":"Battery Options","text":""},{"location":"setup/battery-options/#dual-batteries","title":"Dual Batteries","text":"In general, most of our cars use two separate batteries:
Getting batteries charged before each event requires some strong organizational skills. Although the LiPo batteries retain a charge for a long time, the RC car batteries must be fully charged the night before each event.
"},{"location":"setup/battery-options/#18650-standards","title":"18650 Standards","text":"The 18650 is a standard cell size for 3.7 volt Lithium batteries (18mmX65mm). The problem is that if these batteries go below 3.2 volts they become damaged and can't be recharged. This is why you should never use these batteries without charge protection circuits.
A typical 18650 battery produces 4.2 V when fully charged with a nominal voltage of 3.7 V.
The TP4056 is a low-cost charger chip for one (1) cell. If you plan to charge multiple cells simultaneously, each charging slot for a cell needs its own TP4056.
Warning
Never connect a 5V power supply directly to a lithium cell. Even though the cells have built-in protection, they are not chargers. The protection circuit is only a last line of defense that disconnects the cell under abnormal conditions to prevent catastrophic failure, and it cannot be counted on to always work.
"},{"location":"setup/battery-options/#boost-and-buck-converters","title":"Boost and Buck Converters","text":"These are low-cost DC-to-DC voltage converters that allow you to use standard 3.7-volt cells that drive a 5-volt system. Boost converters increase the voltage and buck converters lower the voltage. You can purchase basic devices for $2, but fancier versions have displays that show both the input and output voltages.
We initially used several Anker 2,000 milliamp-hour batteries for powering the cars. The batteries would last for the entire single-day events as long as they were charged before the event and not used to power the cars when not running on the tracks. The batteries typically could be recharged over 100 times before they had to be replaced.
Many teams use larger (and heavier) 10,000 milliamp-hour Lithium batteries. These are especially useful when learning to drive around the track. Teams also loved the fact that the INIU batteries showed the percent power remaining.
The INIU Portable Charger, Slimmest 10000mAh 5V/3A Power Bank is also a popular choice since they only cost about $19.
"},{"location":"setup/battery-options/#traditional-rc-car-batteries","title":"Traditional RC Car Batteries","text":"The traditional RC car batteries are based on standard \"Safe\" battery chemistry that can be fully drained and are not a strong fire hazard. The problem is that they take a long time to charge (typically overnight) and they don't store a lot of power so they typically only last a few minutes at high speed.
"},{"location":"setup/battery-options/#camera-lipo-batteries","title":"Camera LiPo Batteries","text":"A newer option is lower-cost rechargeable camera batteries that are built around the Sony NP-F standard. These are widely regarded as safe by the camera industry and there are many low-cost smart batteries and chargers available for around $10 each. They produce a stable 7.4-volt output that is perfect for our DC motors in RC cars.
The only consideration is getting cables that convert the batteries to use the RC car standards. I have made my own cables, but the quality of the connectors on the Donkey Car still leaves some things to be desired. The ends of the connectors are easy to pull off. I think that soldering a new barrel connector on all the RC cars might be another option.
The one other challenge is trying to find a low-cost connection plate. Right now the lowest cost plate is about $15 on Adafruit. I hope we can find lower-cost options or use a 3D printer to print our own.
Adafruit Camcorder Battery Holder/Adapter for Sony NP-F750 w/ DC Jack $15
Two batteries and charger on e-Bay for $23
3D Print Thingiverse NP-F Battery Wall Mount with Soft Lock
Neewer Battery Connection Plate for $27 on Amazon
The camera industry has also developed standard 7.4-volt battery packs that enclose low-cost 18650 battery cells. These standards have electronics built in to protect from drainage. Here is a list of parts that were recommended by the NVIDIA Jetson Hacks team:
Several participants used long battery cables made with thin (high-gauge) wire. These cables caused voltage drops that made the cars stop working. We told all teams to use short 8-inch battery cables, and most of these problems went away. We strongly suggest you build your own cables and use 18-gauge wire.
Sample 1ft Charging Cable
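To see why the cable length and wire gauge matter, here is a small back-of-the-envelope sketch. The resistance values are approximate figures for copper wire and the 3-amp draw is an assumed example, so treat the results as rough orders of magnitude:

# Rough voltage-drop estimate for battery cables (illustrative numbers only)
OHMS_PER_FT = {18: 0.0064, 22: 0.0161, 24: 0.0257}   # approximate copper resistance per foot

def voltage_drop(gauge_awg, cable_length_ft, current_a):
    """Drop across both conductors (out and back) of a cable."""
    round_trip_ft = 2 * cable_length_ft
    return current_a * OHMS_PER_FT[gauge_awg] * round_trip_ft

# Short, thick cable: 8 inches of 18 AWG at an assumed 3 A draw
print(round(voltage_drop(18, 8 / 12, 3.0), 3), "volts")   # very roughly 0.03 V

# Long, thin cable: 3 feet of 24 AWG at the same draw
print(round(voltage_drop(24, 3.0, 3.0), 3), "volts")      # very roughly 0.46 V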
"},{"location":"setup/battery-options/#connectors-and-crimping-tool","title":"Connectors and Crimping Tool","text":"You can also purchase \"raw\" 7.4-volt LiPo batteries that are designed to work with RC cars. I have not purchased these directly so I can't attest to their reliability or safety. I will say that the power connector on them appears to be incompatible with our current cars.
1500mAh 7.4V 2S 18650 Battery
"},{"location":"setup/battery-options/#references","title":"References","text":"NVIDIA Jetson on Battery Power - a nice video by
"},{"location":"setup/building-arl-image/","title":"Building an AI Racing League Image for the Car","text":"This process takes about four hours and requires close attention to detail.
"},{"location":"setup/building-arl-image/#install-the-latest-nvidia-image","title":"Install The Latest NVIDIA image","text":""},{"location":"setup/building-arl-image/#enable-ssh","title":"Enable SSH","text":"This allows you to do a remote login to the car
sudo systemctl enable ssh\nsudo systemctl start ssh\n
Test ssh:
ssh arl@arl-01\n
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 4.9.253-tegra aarch64)\n\n * Documentation: https://help.ubuntu.com\n * Management: https://landscape.canonical.com\n * Support: https://ubuntu.com/advantage\nThis system has been minimized by removing packages and content that are\nnot required on a system that users do not log into.\n\nTo restore this content, you can run the 'unminimize' command.\n\nExpanded Security Maintenance for Infrastructure is not enabled.\n\n0 updates can be applied immediately.\n\nEnable ESM Infra to receive additional future security updates.\nSee https://ubuntu.com/esm or run: sudo pro status\n
"},{"location":"setup/building-arl-image/#add-a-swap-file","title":"Add a swap file","text":"The Jetson Hacks GitHub repo contains a nice UNIX script that will add a swap file.
git clone https://github.com/JetsonHacksNano/installSwapfile\ncd installSwapfile\n./installSwapfile.sh\n
You should reboot the Nano after you add this file.
"},{"location":"setup/building-arl-image/#add-htop","title":"Add HTOP","text":"The standard UNIX top
command does not display the load across multiple cores. The much more modern htop
is preferred and has better use of color.
sudo apt-get install htop\n
"},{"location":"setup/building-arl-image/#customize-browser-add-the-bookmark-bar","title":"Customize Browser Add the Bookmark Bar","text":"Ubuntu comes with several \"Office\" applications for doing word-processing, spreadsheets and presentations. These applications take up a lot of disk space so we will remove them.
sudo apt-get remove --purge libreoffice*\nsudo apt-get clean\nsudo apt-get autoremove\n
"},{"location":"setup/building-arl-image/#upgrade-local-libraries","title":"Upgrade Local Libraries","text":""},{"location":"setup/building-arl-image/#download-donkey-car-dependencies","title":"Download Donkey Car Dependencies","text":"sudo apt-get update -y sudo apt-get upgrade -y sudo apt-get install -y libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev liblapack-dev libblas-dev gfortran sudo apt-get install -y python3-dev python3-pip sudo apt-get install -y libxslt1-dev libxml2-dev libffi-dev libcurl4-openssl-dev libssl-dev libpng-dev libopenblas-dev sudo apt-get install -y git nano sudo apt-get install -y openmpi-doc openmpi-bin libopenmpi-dev libopenblas-dev
"},{"location":"setup/building-arl-image/#monitoring-activity","title":"Monitoring Activity","text":""},{"location":"setup/building-arl-image/#version-checks","title":"Version Checks","text":"pip3 --version\n
pip 21.3.1 from /home/arl/env/lib/python3.6/site-packages/pip (python 3.6)\n
"},{"location":"setup/calibrate/","title":"Calibrate","text":"$ donkey calibrate --channel 0 --bus=1\n
Result
________ ______ _________ \n___ __ \\_______________ /___________ __ __ ____/_____ ________\n__ / / / __ \\_ __ \\_ //_/ _ \\_ / / / _ / _ __ `/_ ___/\n_ /_/ // /_/ / / / / ,< / __/ /_/ / / /___ / /_/ /_ / \n/_____/ \\____//_/ /_//_/|_| \\___/_\\__, / \\____/ \\__,_/ /_/ \n /____/ \n\nusing donkey v4.2.1 ...\nsombrero enabled\ninit PCA9685 on channel 0 address 0x40 bus 1\nUsing PWM freq: 60\nTraceback (most recent call last):\n File \"/home/pi/env/bin/donkey\", line 33, in <module>\n sys.exit(load_entry_point('donkeycar', 'console_scripts', 'donkey')())\n File \"/home/pi/projects/donkeycar/donkeycar/management/base.py\", line 500, in execute_from_command_line\n c.run(args[2:])\n File \"/home/pi/projects/donkeycar/donkeycar/management/base.py\", line 219, in run\n c = PCA9685(channel, address=address, busnum=busnum, frequency=freq)\n File \"/home/pi/projects/donkeycar/donkeycar/parts/actuator.py\", line 30, in __init__\n self.pwm = Adafruit_PCA9685.PCA9685(address=address)\n File \"/home/pi/env/lib/python3.7/site-packages/Adafruit_PCA9685/PCA9685.py\", line 75, in __init__\n self.set_all_pwm(0, 0)\n File \"/home/pi/env/lib/python3.7/site-packages/Adafruit_PCA9685/PCA9685.py\", line 111, in set_all_pwm\n self._device.write8(ALL_LED_ON_L, on & 0xFF)\n File \"/home/pi/env/lib/python3.7/site-packages/Adafruit_GPIO/I2C.py\", line 114, in write8\n self._bus.write_byte_data(self._address, register, value)\n File \"/home/pi/env/lib/python3.7/site-packages/Adafruit_PureIO/smbus.py\", line 327, in write_byte_data\n self._device.write(data)\nOSError: [Errno 121] Remote I/O error\nsombrero disabled\n
"},{"location":"setup/calibrate/#diagnostics","title":"Diagnostics","text":""},{"location":"setup/calibrate/#i2c-detect","title":"I2C Detect","text":"i2cdetect -y 1\n 0 1 2 3 4 5 6 7 8 9 a b c d e f\n00: -- -- -- -- -- -- -- -- -- -- -- -- -- \n10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- \n20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- \n30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- \n40: 40 -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- \n50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- \n60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- \n70: 70 -- -- -- -- -- -- -- \n
"},{"location":"setup/calibrate/#i2c-device-file","title":"I2C Device File","text":"ls -ld /dev/i2*\ncrw-rw---- 1 root i2c 89, 1 Jul 3 13:17 /dev/i2c-1\n
"},{"location":"setup/calibrate/#i2c-functions-enabled","title":"I2C Functions Enabled","text":" i2cdetect -F 1\n
returns:
Functionalities implemented by /dev/i2c-1:\nI2C yes\nSMBus Quick Command yes\nSMBus Send Byte yes\nSMBus Receive Byte yes\nSMBus Write Byte yes\nSMBus Read Byte yes\nSMBus Write Word yes\nSMBus Read Word yes\nSMBus Process Call yes\nSMBus Block Write yes\nSMBus Block Read no\nSMBus Block Process Call no\nSMBus PEC yes\nI2C Block Write yes\nI2C Block Read yes\n
Note that both SMBus Block Read and SMBus Block Process Call are set to no. The rest are yes.
"},{"location":"setup/calibrate/#upgrade-to-python-370","title":"Upgrade to Python 3.70","text":"python3 -m virtualenv -p python3.7 env --system-site-packages\n
created virtual environment CPython3.7.3.final.0-32 in 2535ms\n creator CPython3Posix(dest=/home/pi/env, clear=False, no_vcs_ignore=False, global=True)\n seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/pi/.local/share/virtualenv)\n added seed packages: pip==21.1.2, setuptools==57.0.0, wheel==0.36.2\n activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator\n
python --version\n
Python 3.7.3\n
"},{"location":"setup/camera-testing/","title":"Testing the Camera","text":"To test the camera and cable, we need a command that captures video from a CSI camera connected to an NVIDIA Jetson Nano, converts the video format and resolution, and then displays the video on the screen. We will use the GStreamer command first.
"},{"location":"setup/camera-testing/#gstreamer-test-on-the-nano","title":"GStreamer Test on the Nano","text":"gst-launch-1.0 nvarguscamerasrc ! \\\n'video/x-raw(memory:NVMM),width=3820, height=2464, framerate=21/1, format=NV12' ! \\\nnvvidconv flip-method=0 ! \\\n'video/x-raw,width=960, height=616' ! \nnvvidconv ! \\\nnvegltransform ! \\\nnveglglessink -e\n
This command is a GStreamer command used to test the functionality of a camera on a NVIDIA Jetson Nano device. GStreamer is a multimedia framework that provides a pipeline for media data. The gst-launch-1.0
utility is used to build and run basic GStreamer pipelines.
On newer systems with two cameras you can add: sensor-id=0
to make sure you are getting camera 0.
This guide explains how to clone a Raspberry Pi microSD card using the dd
command on UNIX and UNIX-like systems.
First, identify the device name for your microSD card. Use either lsblk
or df
to list devices and their mount points.
lsblk\n
Or:
df -h\n
Look for the device corresponding to your microSD card. It's generally something like /dev/sdX
or /dev/mmcblkX
, where X
is a letter.
\u26a0\ufe0f Caution: Be very careful to identify the correct device, as choosing the wrong one could result in data loss.
Before copying data, unmount the partitions of the microSD card to ensure that no data is being read or written during the cloning process.
sudo umount /dev/sdX*\n
Use the dd
command to create an image file of the microSD card.
sudo dd if=/dev/sdX of=raspberrypi_backup.img bs=4M status=progress\n
if
: Input File \u2014 the device you are copying from (your microSD card).of
: Output File \u2014 the image file you are creating.bs
: Block Size \u2014 specifies how much data should be read at each iteration. 4M
is usually a good size.status=progress
: shows the progress during the copy.To clone the image onto a new microSD card, insert the new card and identify it just like you did in the first step.
sudo dd if=raspberrypi_backup.img of=/dev/sdY bs=4M status=progress\n
Replace /dev/sdY
with the device name of your new microSD card.
\u26a0\ufe0f Caution: Again, be very careful to identify the correct device to avoid data loss.
If your new microSD card is larger than the original, you might need to expand the filesystem to use the additional space. You can do this using raspi-config
on the Raspberry Pi.
sudo raspi-config\n
Navigate to Advanced Options
> Expand Filesystem
.
After following these steps, you should have successfully cloned your Raspberry Pi's microSD card.
Note: The dd
command can be very dangerous if misused. Always double-check your device names and ensure you understand the commands you're running. ```
Here's a breakdown of the command:
nvarguscamerasrc sensor_mode=0
: This is a GStreamer plugin specific to the NVIDIA platform that provides support for the Camera Serial Interface (CSI) cameras. sensor_mode=0
indicates that the command should use the first sensor mode of the camera. The sensor mode usually defines properties such as the resolution and frame rate that the camera supports.
'video/x-raw(memory:NVMM),width=3820, height=2464, framerate=21/1, format=NV12'
: This part of the command specifies the desired output from the camera source. The properties indicate that the video should be in NV12 format, with a resolution of 3820x2464 pixels and a frame rate of 21 frames per second. NVMM refers to NVIDIA's proprietary multimedia memory.
nvvidconv flip-method=0
: This is another NVIDIA specific GStreamer plugin that converts video from one format to another. The flip-method=0
option means that no flipping operation should be performed on the frames.
'video/x-raw,width=960, height=616'
: This specifies the desired output format and resolution after the conversion. The resolution is downscaled to 960x616 pixels.
nvvidconv ! nvegltransform ! nveglglessink -e
: This part of the pipeline takes the video from the conversion, applies an EGLStream transformation (nvegltransform
) and then sends it to a EGL/GLES-based render sink (nveglglessink
). This sink displays the video on the device's screen. The -e
flag at the end of the command tells GStreamer to send an end-of-stream signal when the source stops, which will properly close down the pipeline.
In the context of a GStreamer command, the \"!\" (aka bang) character is used to connect different elements of a GStreamer pipeline together. It serves a similar role to the UNIX \"|\" (pipe) character in a regular UNIX shell command, where it's used to pipe the output from one command into another.
However, there's an important difference between the two. In a UNIX shell command, the | character sends the standard output (stdout) of one command to the standard input (stdin) of another. In a GStreamer pipeline, the ! character doesn't simply pipe data from one element to the next. Instead, it establishes a connection between two GStreamer elements, allowing them to negotiate formats, buffer management, and other details. This negotiation process can involve more complex operations like format conversion, and it happens before any data is actually transferred.
So, in summary, while | and ! might seem similar, the latter is used in GStreamer to create more complex, negotiated connections between different multimedia processing elements.
"},{"location":"setup/camera-testing/#flip-modes","title":"Flip Modes","text":"The flip-method
property of the nvvidconv
(NVIDIA Video Converter) plugin controls the orientation of the output video in the NVIDIA Jetson platform. This is useful for handling scenarios where the camera could be mounted in various orientations.
Here are the possible values for the flip-method
parameter:
0
(Identity) - No rotation, no vertical flip.1
(Counterclockwise) - Rotate counter-clockwise 90 degrees.2
(Rotate 180) - Rotate 180 degrees.3
(Clockwise) - Rotate clockwise 90 degrees.4
(Horizontal Flip) - Flip horizontally.5
(Upper Right Diagonal) - Flip across upper right/lower left diagonal.6
(Vertical Flip) - Flip vertically.7
(Upper Left Diagonal) - Flip across upper left/lower right diagonal.Each number corresponds to a specific operation on the video frames. The specific operation will be applied to each frame of the video before it's sent to the next element in the GStreamer pipeline.
"},{"location":"setup/camera-testing/#resources","title":"Resources","text":""},{"location":"setup/camera-testing/#dans-blog","title":"Dan's Blog","text":"NVIDIA CSI Camera GitHub Repo
"},{"location":"setup/camera-testing/#jetson-hacks-blog","title":"Jetson Hacks Blog","text":"https://jetsonhacks.com/2019/04/02/jetson-nano-raspberry-pi-camera/
"},{"location":"setup/clone-sd/","title":"Cloning a microSD Card using UNIXdd
Command","text":"This guide explains how to clone a microSD card using the dd
command on UNIX and UNIX-like systems.
dd
CommandFirst, identify the device name for your microSD card. Use either lsblk
or df
to list devices and their mount points.
lsblk\n
Or:
df -h\n
Look for the device corresponding to your microSD card. It's generally something like /dev/sdX
or /dev/mmcblkX
, where X
is a letter.
\u26a0\ufe0f Caution: Be very careful to identify the correct device, as choosing the wrong one could result in data loss.
Before copying data, unmount the partitions of the microSD card to ensure that no data is being read or written during the cloning process.
sudo umount /dev/sdX*\n
Use the dd
command to create an image file of the microSD card.
sudo dd if=/dev/sdX of=raspberrypi_backup.img bs=4M status=progress\n
if
: Input File \u2014 the device you are copying from (your microSD card).of
: Output File \u2014 the image file you are creating.bs
: Block Size \u2014 specifies how much data should be read at each iteration. 4M
is usually a good size.status=progress
: shows the progress during the copy.To clone the image onto a new microSD card, insert the new card and identify it just like you did in the first step.
sudo dd if=raspberrypi_backup.img of=/dev/sdY bs=4M status=progress\n
Replace /dev/sdY
with the device name of your new microSD card.
\u26a0\ufe0f Caution: Again, be very careful to identify the correct device to avoid data loss.
If your new microSD card is larger than the original, you might need to expand the filesystem to use the additional space. You can do this using raspi-config
on the Raspberry Pi.
sudo raspi-config\n
Navigate to Advanced Options
> Expand Filesystem
.
After following these steps, you should have successfully cloned your Raspberry Pi's microSD card.
Note: The dd
command can be very dangerous if misused. Always double-check your device names and ensure you understand the commands you're running.
When asked:
Do you wish the installer to prepend the Miniconda3 install location to PATH in your /root/.bashrc?
Answer: yes
cd /tmp\nwget http://repo.continuum.io/miniconda/Miniconda3-latest-Linux-armv7l.sh\nchmod 755 Miniconda3-latest-Linux-armv7l.sh\n./Miniconda3-latest-Linux-armv7l.sh\n
"},{"location":"setup/conda-pi-setup/#test-conda-in-your-path","title":"Test Conda In Your PATH","text":"which conda\n
Should return: /home/pi/miniconda3/bin/conda\n
"},{"location":"setup/conda-pi-setup/#add-the-raspberry-pi-channel-to-conda","title":"Add the Raspberry Pi Channel to Conda","text":"conda config --add channels rpi\nconda install python=3.6\n
"},{"location":"setup/conda-pi-setup/#test-python","title":"Test Python","text":"python --version\n
Python 3.6.6\n
"},{"location":"setup/conda-pi-setup/#create-a-donkeycar-conda-environment","title":"Create a DonkeyCar Conda Environment","text":"conda create --name donkey python=3\n
The following NEW packages will be INSTALLED:\n\n ca-certificates: 2018.8.24-0 rpi\n certifi: 2018.8.24-py36_1 rpi\n ncurses: 6.1-h4f752ac_1 rpi\n openssl: 1.0.2r-hdff2a78_0 rpi\n pip: 18.0-py36_1 rpi\n python: 3.6.6-hd0568c0_1 rpi\n readline: 7.0-hcb560eb_1 rpi\n setuptools: 40.2.0-py36_0 rpi\n sqlite: 3.24.0-hfcb1bcf_1 rpi\n tk: 8.6.8-h849d6a0_0 rpi\n wheel: 0.31.1-py36_1 rpi\n xz: 5.2.4-hdff2a78_1 rpi\n zlib: 1.2.11-hdff2a78_1003 rpi\n\nProceed ([y]/n)? y\n
"},{"location":"setup/conda-pi-setup/#add-the-conda-shell-to-the-end-of-our-bashrc-file","title":"Add the conda shell to the end of our .bashrc file","text":"echo \". /home/pi/miniconda3/etc/profile.d/conda.sh\" >> ~/.bashrc\n
conda activate\n
The shell prompt should now be \"base\""},{"location":"setup/conda-pi-setup/#activate-your-donkey-python-environment","title":"Activate Your Donkey Python Environment","text":"source activate donkey\n
You should see the prompt:
(donkey) pi@myhost:~ $\n
"},{"location":"setup/conda-pi-setup/#verify-git-is-installed","title":"Verify Git Is installed","text":"git --version\n
git version 2.20.1
"},{"location":"setup/conda-pi-setup/#clone-the-donkeycar-repository","title":"Clone the DonkeyCar repository","text":"git clone https://github.com/autorope/donkeycar\ncd donkeycar\ngit checkout master\n
sudo apt-get install build-essential python3 python3-dev python3-pip python3-virtualenv python3-numpy python3-picamera python3-pandas python3-rpi.gpio i2c-tools avahi-utils joystick libopenjp2-7-dev libtiff5-dev gfortran libatlas-base-dev libopenblas-dev libhdf5-serial-dev libgeos-dev git ntp\n
sudo apt-get install libilmbase-dev libopenexr-dev libgstreamer1.0-dev libjasper-dev libwebp-dev libatlas-base-dev libavcodec-dev libavformat-dev libswscale-dev libqtgui4 libqt4-test\n
"},{"location":"setup/conda-pi-setup/#clone-donkeycar-repo","title":"Clone DonkeyCar Repo","text":"pip freeze\n
certifi==2018.8.24
git clone https://github.com/autorope/donkeycar\ncd donkeycar\npip install -e .[pi]\n
"},{"location":"setup/donkey-cmd/","title":"The Donkey Command","text":"In normal configuration, the donkey
is a UNIX shell script that is located in the bin directory of your local environment. For example here is a sample path to the donkey
command if you are logged into the standard AI Racing account called arl
/home/arl/env/bin\n
#!/home/dan/env/bin/python3\n# EASY-INSTALL-ENTRY-SCRIPT: 'donkeycar','console_scripts','donkey'\n__requires__ = 'donkeycar'\nimport re\nimport sys\nfrom pkg_resources import load_entry_point\n\nif __name__ == '__main__':\n sys.argv[0] = re.sub(r'(-script\\.pyw?|\\.exe)?$', '', sys.argv[0])\n sys.exit(\n load_entry_point('donkeycar', 'console_scripts', 'donkey')()\n )\n
This file is automatically generated
This UNIX script is automatically generated and serves as an entry point for the donkeycar package. The console_scripts and donkey parameters provide more information about the entry point, usually used by package management tools like setuptools.
"},{"location":"setup/donkey-cmd/#imports","title":"Imports","text":"import re\nimport sys\nfrom pkg_resources import load_entry_point\n
On the installer for the 4.5.1, this file is not generated.
"},{"location":"setup/gpu-options/","title":"GPU Options","text":"PCPartPicker Part List: https://pcpartpicker.com/list/mrFYPX
CPU: AMD Ryzen 5 3600 3.6 GHz 6-Core Processor ($95.00 @ Amazon) Motherboard: MSI A520M-A PRO Micro ATX AM4 Motherboard ($101.11 @ Amazon) Memory: Silicon Power SP016GBLFU320X02 16 GB (1 x 16 GB) DDR4-3200 CL22 Memory ($23.99 @ Amazon) Storage: TEAMGROUP MP33 512 GB M.2-2280 PCIe 3.0 X4 NVME Solid State Drive ($22.49 @ Amazon) Video Card: Asus Dual GeForce RTX 3060 V2 OC Edition GeForce RTX 3060 12GB 12 GB Video Card ($299.99 @ Amazon) Case: Thermaltake Versa H18 MicroATX Mini Tower Case ($49.99 @ Amazon) Power Supply: be quiet! Pure Power 11 400 W 80+ Gold Certified ATX Power Supply ($89.69 @ Amazon) Monitor: *Acer V227Q Abmix 21.5\" 1920 x 1080 75 Hz Monitor ($87.29 @ Amazon) Total: $769.55
"},{"location":"setup/installation-tips/","title":"Installation Tips","text":"Installing the Donkey Car software is a complex process. It typically takes about six hours to add the required tools and Python libraries to a raw NVIDIA or Raspberry Pi OS image.
Here are a few tips.
"},{"location":"setup/installation-tips/#use-the-h-mode-when-installing-software","title":"Use the -H mode when installing software","text":"We should always use the HOME environment variables when using sudo:
sudo -H pip install package_name\n
In this command, sudo -H is used to ensure that the home environment variable is set to the home directory of the target user (root in this case), which can sometimes avoid permissions issues that arise when sudo retains your normal user's home directory environment variable. Essentially, the -H option makes sure that the operation is as clean as possible from an environment perspective, which can be important when installing software that might write configuration files or data into the user's home directory.
"},{"location":"setup/installation-tips/#remember-to-use-python3-and-pip3","title":"Remember to Use Python3 and Pip3","text":"Sometimes older Python2 tools get mixed up with the current Python 3 tools. By adding the \"3\" suffix to your commands you can guarantee that that your path will pick up the right version of Python tools.
"},{"location":"setup/installation-tips/#when-to-use-python3-and-pip3-over-python-and-pip-in-unix-shell","title":"When to Usepython3
and pip3
Over python
and pip
in UNIX Shell","text":""},{"location":"setup/installation-tips/#python-version","title":"Python Version","text":"python3
explicitly runs Python 3.x, whereas python
might run either Python 2.x or Python 3.x depending on the system configuration.Multiple Python Installations: On some systems, you may have both Python 2 and Python 3 installed. In such cases, python
usually refers to Python 2 and python3
to Python 3. Similarly, pip
might point to the package manager for Python 2, and pip3
will point to Python 3. Always use python3
and pip3
to ensure that you're working with Python 3.x.
Aliases: Some systems alias python
to python3
. This is common in more recent Linux distributions. On such systems, it may not matter if you use python
or python3
. However, using python3
is more explicit and can avoid ambiguity.
Version-Specific Code: If you're running or writing code that is specific to Python 3, use python3
. Similarly, if you're installing packages that are intended for use with Python 3, use pip3
.
Portability: If you're writing a script that you plan to share with others, it's safer to specify python3
if your code is not compatible with Python 2.
python
and pip
commands will point to the versions associated with the active environment, regardless of whether it's Python 2 or 3. So, within a Python 3 virtual environment, python
and pip
will be equivalent to python3
and pip3
.Explicit is Better: If you're in doubt, being explicit is usually better. Using python3
and pip3
makes it clear that you're using Python 3.
Check Version: If ever in doubt, you can always check which version you're running by using python --version
or python3 --version
and pip --version
or pip3 --version
.
In summary, if you want to make sure you are using Python 3 and its associated package manager, use python3
and pip3
.
Although you can boot the Raspberry Pi and NVIDIA Nano on a 16GB microSD card, it will quickly run out of room after you download the 36K Python libraries. Our suggestion is to use a minimum of a 64 GB card and ideally a 128 GB microSD card.
In the item above, the \"Class 10\" indicates it will write above 10 MB/sec.
"},{"location":"setup/microSD-write-speed/#bandwidth-calculation-for-224x224-color-images","title":"Bandwidth Calculation for 224x224 Color Images","text":"What speed microSD card should you purchase? Low-cost cards write at 10 MB/sec and more expensive cards can run up to 100 MB/sec. What one should we purchase?
The default image for the Donkey Car is 224x224 color images.
"},{"location":"setup/microSD-write-speed/#analysis","title":"Analysis","text":"Calculate the total microSD card write bandwidth needed for a camera that reads 224x224 color images and needs to write 20 images per second to the disk.
"},{"location":"setup/microSD-write-speed/#step-by-step-calculation","title":"Step-by-Step Calculation","text":""},{"location":"setup/microSD-write-speed/#step-1-calculate-the-size-of-a-single-image","title":"Step 1: Calculate the Size of a Single Image","text":"A 224x224 color image would have 224 * 224 = 50,176 pixels.
Assuming the image uses a standard RGB color scheme, each pixel would have three color channels: Red, Green, and Blue. Each channel typically requires 1 byte (or 8 bits) to store its intensity value. So, each pixel would need 3 bytes. The size of a single image would be:
Size of a single image = Number of pixels * Bytes per pixel\n\n= 50,176 pixels * 3 bytes/pixel\n= 150,528 bytes/image\n= 150 KB\napprox 0.147 MB\n
"},{"location":"setup/microSD-write-speed/#step-2-calculate-the-total-bandwidth-needed-per-second","title":"Step 2: Calculate the Total Bandwidth Needed Per Second","text":"We are writing 20 images a second to the disk, so the total bandwidth needed per second would be:
Total bandwidth per second = Size of a single image * Number of images per second\n\n= 0.147 MB * 20\n= approx 2.94 megabytes/s\n
"},{"location":"setup/microSD-write-speed/#conclusion","title":"Conclusion","text":"The microSD would need a total bandwidth of approximately 3 MB/s to write 20 224x224 color images to the disk each second. Since most microSD cards today write from 10 MB/s to 100 MB/s there is no worry about having enough write speed to the microSD card.
"},{"location":"setup/nano-docker/","title":"Docker on Nano","text":"Note
These instructions were copied from a message on the Discord Server that were posed by user naisy on at 08/31/2023 5:48 AM. They are not supported by the DonkeyCar project (yet).
Due to the complexities of building the correct environments on the Nano, one alternative is to use the Docker container system. Although this adds some additional overhead, it means that you can download a virtual machine with all the correct software installed.
You must have a minumum of a 64GB microSD card to run Docker on the NVIDIA Nano.
"},{"location":"setup/nano-docker/#step-1-download-docker-github-repo","title":"Step 1: Download Docker GitHub Repo","text":"In this step we will clone a repository that has some very small UNIX shell scripts.
mkdir ~/projects\ncd ~/projects\ngit clone https://github.com/naisy/Docker\ncd Docker\n
"},{"location":"setup/nano-docker/#step-2-run-the-permissions-shell-script","title":"Step 2: Run the Permissions Shell Script","text":"mkdir ~/docker\ncp run-jetson-jp461-donkeycar??.sh ~/docker\ncp run-jetson-jp461-gpio-permission.sh ~/docker\n\ncd ~/docker\n./run-jetson-jp461-gpio-permission.sh\n
"},{"location":"setup/nano-docker/#step-3-run-the-main-shell-script","title":"Step 3: Run the Main Shell Script","text":"Due to memory limitations, only one of either DonkeyCar 4.5 or DonkeyCar 5.0 should be activated.
"},{"location":"setup/nano-docker/#donkeycar-45","title":"DonkeyCar 4.5","text":"./run-jetson-jp461-donkeycar45.sh
"},{"location":"setup/nano-docker/#or-donkeycar-50","title":"or DonkeyCar 5.0","text":"./run-jetson-jp461-donkeycar50.sh
"},{"location":"setup/nano-docker/#login-into-the-virtual-machine","title":"Login into the Virtual Machine","text":"Log in to the jupyterlab terminal: PC Web browser for DonkeyCar 4.5
`` http://your_jetson_ip:8890/
for DonkeyCar 5.0 (dev3)\n
http://your_jetson_ip:8891/ Password: jupyterlab Launch JupyterLab Terminal ``` DonkeyCar: From this point on, it is exactly the same as a official donkeycar.
"},{"location":"setup/nano-docker/#create-mycar","title":"create mycar","text":"donkey createcar --path ~/data/mycar45 cd ~/data/mycar45 ls
"},{"location":"setup/nano-python-upgrade/","title":"How to Upgrade Python on the NVIDIA Nano","text":"The current Jetson Nano image is version 4.6.1. The Donkey Car software only runs on 4.5.1
There are several bugs with the Donkey Car system that do not run on Jetson Nano 4.5.2. So how do we get the latest good stuff without allowing the system to automatically update to 4.5.2?
Warning
Do not run these commands!
sudo apt-get update -y\nsudo apt-get upgrade -y\n
When the command sudo apt-get upgrade -y
is sent, the JetPack dependency tree tries to upgrade to Jetpack version 4.5.2 which will cause the JetPack to lock up when it tries to boot into the Ubuntu Desktop. Note, this is a bug in the NVIDIA code, not the Donkey Car code.
We should never run sudo apt-get update -y
or sudo apt-get upgrade -y
when running JP4.5.1.
The following is how we upgrade Ubuntu, but keep JP from attempting to upgrade to JP4.5.2:
Warning
Always use the \"Software Updater GUI\" to update your Ubuntu and never run sudo apt-get upgrade -y
.
After completing the above process, let the Desktop stay open for a while and the \"Software Updater\" will appear.
Let the \"Software Updater\" update Ubuntu whenever you use the Desktop.
"},{"location":"setup/nano-python-upgrade/#references","title":"References","text":"This is a checklist that is genralized for all our events. We can't assume any network connectivity at these events.
"},{"location":"setup/nano-sd-image-checklist/#required","title":"Required","text":"The DonkeyCar uses the low cost PCA9685 PWM board.
"},{"location":"setup/pwm-board/#pca9685-pwm-board","title":"PCA9685 PWM Board","text":""},{"location":"setup/pwm-board/#pi-40-pin-header","title":"Pi 40 Pin Header","text":""},{"location":"setup/pwm-board/#connections","title":"Connections","text":""},{"location":"setup/pwm-board/#references","title":"References","text":"We only use the 8GB version:
Raspberry Pi 4
"},{"location":"setup/raspberry-pi-parts/#power-supply","title":"Power Supply","text":"The Raspberry Pi 4 needs a 5V.1, 3A (15W) USB-C Power Supply. It is unclear to us if you can connect existing barrel connectors power directly to the 40-pin 5V/GND pins.
Offical Raspberry Pi Page
Digi-Key Part Number 2648-SC0445-ND - AC/DC WALL MNT ADAPTER 5.1V 15W
Amason Raspberry Pi 15W USB-C Power Supply US - White $15 - Official Raspberry Pi 4 type-c power supply Input:100-240V 50/60Hz 0.5A,Output:5.1-3.0A
"},{"location":"setup/raspberry-pi-parts/#40-pin-power-option","title":"40-Pin Power Option","text":"Apt-get is the software package installed on the Raspberry Pi OS that allows you to install application libraries.
"},{"location":"setup/software-install-notes/#donkeycar-libraries-required","title":"DonkeyCar Libraries (required)","text":"sudo apt-get install build-essential python3 python3-dev python3-pip python3-virtualenv python3-numpy python3-picamera python3-pandas python3-rpi.gpio i2c-tools avahi-utils joystick libopenjp2-7-dev libtiff5-dev gfortran libatlas-base-dev libopenblas-dev libhdf5-serial-dev libgeos-dev git ntp\n
The other libraries are mostly small support libraries used for supporting debugging.
"},{"location":"setup/software-install-notes/#opencv-optional","title":"OpenCV (optional)","text":"sudo apt-get install libilmbase-dev libopenexr-dev libgstreamer1.0-dev libjasper-dev libwebp-dev libatlas-base-dev libavcodec-dev libavformat-dev libswscale-dev libqtgui4 libqt4-test\n
"},{"location":"setup/track-options/","title":"Track Options","text":"Although you can just put tape down on a floor, that is time-consuming and is often a low-quality track. There are several other options and the prices vary from under $100 to $1,300.
"},{"location":"setup/track-options/#vinyl-tarps","title":"Vinyl Tarps","text":"In Minnesota, Billboard Tarps sells used vinyl sign material. For around $70 you can get a 16' X 25' used black billboard 9oz black vinyl sign that is ideal for creating your own track.
Here is a full-size track that is 25' X 35' for about $170:
This is what half of the 16' x 25' tarp looks like after delivery from Billboard Tarps and Vinyl.
Billboard Tarps and Vinyl - We suggest you get a dark color (black or dark blue) and then tape down white edges and a dashed yellow centerline. You can also buy smaller scraps and use HH-66 vinyl cement to glue smaller tarps together. You can also glue a black track onto a white background to make the road.
"},{"location":"setup/track-options/#picking-the-right-size","title":"Picking the Right Size","text":"The typical dimensions of a full-event track is 22 x 34 feet. These dimensions are based on the DIYRobocars Standard Track, which is a popular track for donkey car racing.
A smaller track is a good option for beginners, as it is easier to navigate and control. A larger track is a better option for experienced drivers, as it offers more challenges and opportunities for speed.
Of course, the dimensions of a donkey car track can vary depending on the specific design. However, the dimensions listed above are a good starting point for anyone who is planning to build or race a donkey car.
"},{"location":"setup/track-options/#keeping-a-standard-width","title":"Keeping A Standard Width","text":"The standard width of all the \"road\" tracks is two feet or 24 inches. This is the distance to the centerline of the white edges. The roads are typically black with a white edge and a dashed yellow line down the middle of the track. The key is to have a high contrast between the black road and the white edges.
Many people use 2\" (or 1 and 7/8\") inch wide duct tape or Gaffers tape. Gaffer's tape is often preferred for temporary events on carpet. Gaffer's tape doesn't harm the surface to which it adhered.
"},{"location":"setup/track-options/#minnesota-stem-partners-tracks","title":"Minnesota STEM Partners Tracks","text":"Below is a sample of a tarp purchased from Billboard Tarps. Note the actual track is twice this size since it is still folded in half in this photo.
Track setup in the driver training room:
Note that this track does not adhere to the two-foot-width rule. This is sometimes done when many students are practicing driving on the same track.
"},{"location":"setup/track-options/#optum-track","title":"Optum Track","text":"Optum printed their own track on a local printer that specialized in printing large format signage. The custom printing cost was about $1,300.00
"},{"location":"setup/track-options/#best-buy-track","title":"Best Buy Track","text":"Best Buy also printed its own track for their events. This photo only shows about 1/3 of the track.
"},{"location":"setup/track-options/#dan-mccrearys-basement-track","title":"Dan McCreary's Basement Track","text":"This track is just a single piece of white electrical tape.
"},{"location":"setup/track-options/#interlocking-foam-mats","title":"Interlocking Foam Mats","text":"You can also purchase interlocking foam mats. These are typically two feet by two feet and cost about $30 for a package of 6. Since each package covers 24 square feet and a full track is about 24x36 feet (758 square feet) we can see the cost of 32 packages is around $960.00.
Interlocking Foam Mats From WalMart
Amazon Foam Mats
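The following is a minimal back-of-the-envelope sketch (using the mat size, package price and the 22 x 34 foot track dimensions quoted above) for estimating how many packages you would need:
# estimate how many 6-mat packages of 2 ft x 2 ft foam mats cover a 22 ft x 34 ft track\nTRACK_AREA=$((22 * 34))       # 748 square feet\nPKG_AREA=$((6 * 2 * 2))       # 24 square feet per 6-mat package\nPACKAGES=$(( (TRACK_AREA + PKG_AREA - 1) / PKG_AREA ))   # round up: 32 packages\necho $PACKAGES packages at 30 dollars each is about $((PACKAGES * 30)) dollars\n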
"},{"location":"setup/track-options/#references","title":"References","text":"DIYRobocars Standard Track
"},{"location":"training-logs/dans-basement/","title":"Dans Basement Training Log","text":"I have a very small track in my basement. I put down a single white line about 3/4 inch wide using white electrical tape. The background was a marble blue expoy floor with a lot of color variation. The surface was very reflective and there were lights in the ceiling with lots of glare. I drove the car around 10 times in each direction and collected around 4,500 images.
"},{"location":"training-logs/dans-basement/#catalogs","title":"Catalogs","text":"I manually edited the catlog files and then edited the manifest.json file to modify the paths:
{\"paths\": [\"catalog_3.catalog\", \"catalog_4.catalog\", \"catalog_5.catalog\", \"catalog_6.catalog\", \"catalog_7.catalog\"]\n
wc -l data/dans-basement/*.catalog\n
781 data/dans-basement/catalog_3.catalog\n 1000 data/dans-basement/catalog_4.catalog\n 1000 data/dans-basement/catalog_5.catalog\n 1000 data/dans-basement/catalog_6.catalog\n 750 data/dans-basement/catalog_7.catalog\n 4531 total\n
This matched the ls -1 ~/mycar/data/dans-basement/images | wc -l
command that counted the number of images.
I timed the training on the NVIDIA RTX 2080 Ti and the model trained in about 1.5 minutes.
$ time donkey train --tub=./data/dans-basement --model=./models/dans-basement.h5\n________ ______ _________ \n___ __ \\_______________ /___________ __ __ ____/_____ ________\n__ / / / __ \\_ __ \\_ //_/ _ \\_ / / / _ / _ __ `/_ ___/\n_ /_/ // /_/ / / / / ,< / __/ /_/ / / /___ / /_/ /_ / \n/_____/ \\____//_/ /_//_/|_| \\___/_\\__, / \\____/ \\__,_/ /_/ \n /____/ \n\nusing donkey v4.2.1 ...\nloading config file: ./config.py\nloading personal config over-rides from myconfig.py\n\"get_model_by_type\" model Type is: linear\nCreated KerasLinear\n2021-07-26 21:05:34.259364: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1\n2021-07-26 21:05:34.278301: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:05:34.278898: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: \npciBusID: 0000:09:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5\ncoreClock: 1.635GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s\n2021-07-26 21:05:34.279098: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n2021-07-26 21:05:34.280320: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n2021-07-26 21:05:34.281822: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n2021-07-26 21:05:34.282037: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n2021-07-26 21:05:34.283140: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n2021-07-26 21:05:34.283726: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\n2021-07-26 21:05:34.285524: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n2021-07-26 21:05:34.285676: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:05:34.286176: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:05:34.286568: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0\n2021-07-26 21:05:34.286793: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA\n2021-07-26 21:05:34.290920: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 3592950000 Hz\n2021-07-26 21:05:34.291228: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x557d8a05bbb0 initialized for platform Host (this does not guarantee that XLA will be used). 
Devices:\n2021-07-26 21:05:34.291241: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version\n2021-07-26 21:05:34.291374: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:05:34.291795: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: \npciBusID: 0000:09:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5\ncoreClock: 1.635GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s\n2021-07-26 21:05:34.291830: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n2021-07-26 21:05:34.291842: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n2021-07-26 21:05:34.291852: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n2021-07-26 21:05:34.291862: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n2021-07-26 21:05:34.291872: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n2021-07-26 21:05:34.291881: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\n2021-07-26 21:05:34.291891: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n2021-07-26 21:05:34.291955: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:05:34.292398: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:05:34.292782: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0\n2021-07-26 21:05:34.292805: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n2021-07-26 21:05:34.366898: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:\n2021-07-26 21:05:34.366930: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108] 0 \n2021-07-26 21:05:34.366937: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0: N \n2021-07-26 21:05:34.367194: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:05:34.367855: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:05:34.368446: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:05:34.368971: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9911 MB memory) -> 
physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:09:00.0, compute capability: 7.5)\n2021-07-26 21:05:34.370680: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x557d8bec8fa0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:\n2021-07-26 21:05:34.370693: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): NVIDIA GeForce RTX 2080 Ti, Compute Capability 7.5\nModel: \"model\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\nimg_in (InputLayer) [(None, 224, 224, 3) 0 \n__________________________________________________________________________________________________\nconv2d_1 (Conv2D) (None, 110, 110, 24) 1824 img_in[0][0] \n__________________________________________________________________________________________________\ndropout (Dropout) (None, 110, 110, 24) 0 conv2d_1[0][0] \n__________________________________________________________________________________________________\nconv2d_2 (Conv2D) (None, 53, 53, 32) 19232 dropout[0][0] \n__________________________________________________________________________________________________\ndropout_1 (Dropout) (None, 53, 53, 32) 0 conv2d_2[0][0] \n__________________________________________________________________________________________________\nconv2d_3 (Conv2D) (None, 25, 25, 64) 51264 dropout_1[0][0] \n__________________________________________________________________________________________________\ndropout_2 (Dropout) (None, 25, 25, 64) 0 conv2d_3[0][0] \n__________________________________________________________________________________________________\nconv2d_4 (Conv2D) (None, 23, 23, 64) 36928 dropout_2[0][0] \n__________________________________________________________________________________________________\ndropout_3 (Dropout) (None, 23, 23, 64) 0 conv2d_4[0][0] \n__________________________________________________________________________________________________\nconv2d_5 (Conv2D) (None, 21, 21, 64) 36928 dropout_3[0][0] \n__________________________________________________________________________________________________\ndropout_4 (Dropout) (None, 21, 21, 64) 0 conv2d_5[0][0] \n__________________________________________________________________________________________________\nflattened (Flatten) (None, 28224) 0 dropout_4[0][0] \n__________________________________________________________________________________________________\ndense_1 (Dense) (None, 100) 2822500 flattened[0][0] \n__________________________________________________________________________________________________\ndropout_5 (Dropout) (None, 100) 0 dense_1[0][0] \n__________________________________________________________________________________________________\ndense_2 (Dense) (None, 50) 5050 dropout_5[0][0] \n__________________________________________________________________________________________________\ndropout_6 (Dropout) (None, 50) 0 dense_2[0][0] \n__________________________________________________________________________________________________\nn_outputs0 (Dense) (None, 1) 51 dropout_6[0][0] \n__________________________________________________________________________________________________\nn_outputs1 (Dense) (None, 1) 51 dropout_6[0][0] \n==================================================================================================\nTotal params: 
2,973,828\nTrainable params: 2,973,828\nNon-trainable params: 0\n__________________________________________________________________________________________________\nNone\nUsing catalog /home/arl/mycar/data/dans-basement/catalog_7.catalog\n\nRecords # Training 3364\nRecords # Validation 842\nEpoch 1/100\n2021-07-26 21:05:35.291438: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n2021-07-26 21:05:35.613762: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n2021-07-26 21:05:36.322576: W tensorflow/stream_executor/gpu/asm_compiler.cc:116] *** WARNING *** You are using ptxas 9.1.108, which is older than 9.2.88. ptxas 9.x before 9.2.88 is known to miscompile XLA code, leading to incorrect results or invalid-address errors.\n\nYou do not need to update to CUDA 9.2.88; cherry-picking the ptxas binary is sufficient.\n2021-07-26 21:05:36.376195: W tensorflow/stream_executor/gpu/redzone_allocator.cc:314] Internal: ptxas exited with non-zero error code 65280, output: ptxas fatal : Value 'sm_75' is not defined for option 'gpu-name'\n\nRelying on driver to perform ptx compilation. \nModify $PATH to customize ptxas location.\nThis message will be only logged once.\n27/27 [==============================] - ETA: 0s - loss: 0.2495 - n_outputs0_loss: 0.1717 - n_outputs1_loss: 0.0778\nEpoch 00001: val_loss improved from inf to 0.14744, saving model to ./models/dans-basement.h5\n27/27 [==============================] - 8s 301ms/step - loss: 0.2495 - n_outputs0_loss: 0.1717 - n_outputs1_loss: 0.0778 - val_loss: 0.1474 - val_n_outputs0_loss: 0.1291 - val_n_outputs1_loss: 0.0183\nEpoch 2/100\n27/27 [==============================] - ETA: 0s - loss: 0.1487 - n_outputs0_loss: 0.1265 - n_outputs1_loss: 0.0223\nEpoch 00002: val_loss improved from 0.14744 to 0.09815, saving model to ./models/dans-basement.h5\n27/27 [==============================] - 3s 120ms/step - loss: 0.1487 - n_outputs0_loss: 0.1265 - n_outputs1_loss: 0.0223 - val_loss: 0.0981 - val_n_outputs0_loss: 0.0777 - val_n_outputs1_loss: 0.0205\nEpoch 3/100\n27/27 [==============================] - ETA: 0s - loss: 0.1075 - n_outputs0_loss: 0.0893 - n_outputs1_loss: 0.0182\nEpoch 00003: val_loss improved from 0.09815 to 0.07897, saving model to ./models/dans-basement.h5\n27/27 [==============================] - 3s 117ms/step - loss: 0.1075 - n_outputs0_loss: 0.0893 - n_outputs1_loss: 0.0182 - val_loss: 0.0790 - val_n_outputs0_loss: 0.0687 - val_n_outputs1_loss: 0.0102\nEpoch 4/100\n27/27 [==============================] - ETA: 0s - loss: 0.0917 - n_outputs0_loss: 0.0759 - n_outputs1_loss: 0.0158\nEpoch 00004: val_loss improved from 0.07897 to 0.07055, saving model to ./models/dans-basement.h5\n27/27 [==============================] - 3s 110ms/step - loss: 0.0917 - n_outputs0_loss: 0.0759 - n_outputs1_loss: 0.0158 - val_loss: 0.0705 - val_n_outputs0_loss: 0.0610 - val_n_outputs1_loss: 0.0096\nEpoch 5/100\n27/27 [==============================] - ETA: 0s - loss: 0.0880 - n_outputs0_loss: 0.0734 - n_outputs1_loss: 0.0146\nEpoch 00005: val_loss did not improve from 0.07055\n27/27 [==============================] - 3s 105ms/step - loss: 0.0880 - n_outputs0_loss: 0.0734 - n_outputs1_loss: 0.0146 - val_loss: 0.0751 - val_n_outputs0_loss: 0.0553 - val_n_outputs1_loss: 0.0198\nEpoch 6/100\n27/27 [==============================] - ETA: 0s - loss: 0.0757 - n_outputs0_loss: 0.0629 - n_outputs1_loss: 0.0127\nEpoch 00006: val_loss improved 
from 0.07055 to 0.05840, saving model to ./models/dans-basement.h5\n27/27 [==============================] - 3s 111ms/step - loss: 0.0757 - n_outputs0_loss: 0.0629 - n_outputs1_loss: 0.0127 - val_loss: 0.0584 - val_n_outputs0_loss: 0.0485 - val_n_outputs1_loss: 0.0099\nEpoch 7/100\n27/27 [==============================] - ETA: 0s - loss: 0.0672 - n_outputs0_loss: 0.0551 - n_outputs1_loss: 0.0120\nEpoch 00007: val_loss improved from 0.05840 to 0.05028, saving model to ./models/dans-basement.h5\n27/27 [==============================] - 3s 110ms/step - loss: 0.0672 - n_outputs0_loss: 0.0551 - n_outputs1_loss: 0.0120 - val_loss: 0.0503 - val_n_outputs0_loss: 0.0450 - val_n_outputs1_loss: 0.0053\nEpoch 8/100\n27/27 [==============================] - ETA: 0s - loss: 0.0621 - n_outputs0_loss: 0.0510 - n_outputs1_loss: 0.0111\nEpoch 00008: val_loss improved from 0.05028 to 0.04540, saving model to ./models/dans-basement.h5\n27/27 [==============================] - 3s 110ms/step - loss: 0.0621 - n_outputs0_loss: 0.0510 - n_outputs1_loss: 0.0111 - val_loss: 0.0454 - val_n_outputs0_loss: 0.0385 - val_n_outputs1_loss: 0.0069\nEpoch 9/100\n27/27 [==============================] - ETA: 0s - loss: 0.0545 - n_outputs0_loss: 0.0441 - n_outputs1_loss: 0.0104\nEpoch 00009: val_loss improved from 0.04540 to 0.04351, saving model to ./models/dans-basement.h5\n27/27 [==============================] - 3s 107ms/step - loss: 0.0545 - n_outputs0_loss: 0.0441 - n_outputs1_loss: 0.0104 - val_loss: 0.0435 - val_n_outputs0_loss: 0.0358 - val_n_outputs1_loss: 0.0077\nEpoch 10/100\n27/27 [==============================] - ETA: 0s - loss: 0.0558 - n_outputs0_loss: 0.0458 - n_outputs1_loss: 0.0099\nEpoch 00010: val_loss improved from 0.04351 to 0.04070, saving model to ./models/dans-basement.h5\n27/27 [==============================] - 3s 110ms/step - loss: 0.0558 - n_outputs0_loss: 0.0458 - n_outputs1_loss: 0.0099 - val_loss: 0.0407 - val_n_outputs0_loss: 0.0357 - val_n_outputs1_loss: 0.0050\nEpoch 11/100\n27/27 [==============================] - ETA: 0s - loss: 0.0505 - n_outputs0_loss: 0.0415 - n_outputs1_loss: 0.0090\nEpoch 00011: val_loss improved from 0.04070 to 0.03935, saving model to ./models/dans-basement.h5\n27/27 [==============================] - 3s 109ms/step - loss: 0.0505 - n_outputs0_loss: 0.0415 - n_outputs1_loss: 0.0090 - val_loss: 0.0393 - val_n_outputs0_loss: 0.0340 - val_n_outputs1_loss: 0.0054\nEpoch 12/100\n27/27 [==============================] - ETA: 0s - loss: 0.0476 - n_outputs0_loss: 0.0388 - n_outputs1_loss: 0.0088\nEpoch 00012: val_loss improved from 0.03935 to 0.03624, saving model to ./models/dans-basement.h5\n27/27 [==============================] - 3s 110ms/step - loss: 0.0476 - n_outputs0_loss: 0.0388 - n_outputs1_loss: 0.0088 - val_loss: 0.0362 - val_n_outputs0_loss: 0.0298 - val_n_outputs1_loss: 0.0065\nEpoch 13/100\n27/27 [==============================] - ETA: 0s - loss: 0.0453 - n_outputs0_loss: 0.0373 - n_outputs1_loss: 0.0080\nEpoch 00013: val_loss improved from 0.03624 to 0.03507, saving model to ./models/dans-basement.h5\n27/27 [==============================] - 3s 108ms/step - loss: 0.0453 - n_outputs0_loss: 0.0373 - n_outputs1_loss: 0.0080 - val_loss: 0.0351 - val_n_outputs0_loss: 0.0294 - val_n_outputs1_loss: 0.0057\nEpoch 14/100\n27/27 [==============================] - ETA: 0s - loss: 0.0430 - n_outputs0_loss: 0.0352 - n_outputs1_loss: 0.0079\nEpoch 00014: val_loss improved from 0.03507 to 0.03211, saving model to ./models/dans-basement.h5\n27/27 
[==============================] - 3s 111ms/step - loss: 0.0430 - n_outputs0_loss: 0.0352 - n_outputs1_loss: 0.0079 - val_loss: 0.0321 - val_n_outputs0_loss: 0.0265 - val_n_outputs1_loss: 0.0056\nEpoch 15/100\n27/27 [==============================] - ETA: 0s - loss: 0.0397 - n_outputs0_loss: 0.0327 - n_outputs1_loss: 0.0070\nEpoch 00015: val_loss improved from 0.03211 to 0.03208, saving model to ./models/dans-basement.h5\n27/27 [==============================] - 3s 110ms/step - loss: 0.0397 - n_outputs0_loss: 0.0327 - n_outputs1_loss: 0.0070 - val_loss: 0.0321 - val_n_outputs0_loss: 0.0279 - val_n_outputs1_loss: 0.0042\nEpoch 16/100\n27/27 [==============================] - ETA: 0s - loss: 0.0382 - n_outputs0_loss: 0.0316 - n_outputs1_loss: 0.0065\nEpoch 00016: val_loss improved from 0.03208 to 0.02880, saving model to ./models/dans-basement.h5\n27/27 [==============================] - 3s 108ms/step - loss: 0.0382 - n_outputs0_loss: 0.0316 - n_outputs1_loss: 0.0065 - val_loss: 0.0288 - val_n_outputs0_loss: 0.0243 - val_n_outputs1_loss: 0.0046\nEpoch 17/100\n27/27 [==============================] - ETA: 0s - loss: 0.0381 - n_outputs0_loss: 0.0313 - n_outputs1_loss: 0.0069\nEpoch 00017: val_loss did not improve from 0.02880\n27/27 [==============================] - 3s 104ms/step - loss: 0.0381 - n_outputs0_loss: 0.0313 - n_outputs1_loss: 0.0069 - val_loss: 0.0322 - val_n_outputs0_loss: 0.0281 - val_n_outputs1_loss: 0.0041\nEpoch 18/100\n27/27 [==============================] - ETA: 0s - loss: 0.0375 - n_outputs0_loss: 0.0310 - n_outputs1_loss: 0.0065\nEpoch 00018: val_loss did not improve from 0.02880\n27/27 [==============================] - 3s 107ms/step - loss: 0.0375 - n_outputs0_loss: 0.0310 - n_outputs1_loss: 0.0065 - val_loss: 0.0293 - val_n_outputs0_loss: 0.0257 - val_n_outputs1_loss: 0.0036\nEpoch 19/100\n27/27 [==============================] - ETA: 0s - loss: 0.0372 - n_outputs0_loss: 0.0308 - n_outputs1_loss: 0.0064\nEpoch 00019: val_loss did not improve from 0.02880\n27/27 [==============================] - 3s 108ms/step - loss: 0.0372 - n_outputs0_loss: 0.0308 - n_outputs1_loss: 0.0064 - val_loss: 0.0307 - val_n_outputs0_loss: 0.0275 - val_n_outputs1_loss: 0.0032\nEpoch 20/100\n27/27 [==============================] - ETA: 0s - loss: 0.0347 - n_outputs0_loss: 0.0285 - n_outputs1_loss: 0.0062\nEpoch 00020: val_loss did not improve from 0.02880\n27/27 [==============================] - 3s 104ms/step - loss: 0.0347 - n_outputs0_loss: 0.0285 - n_outputs1_loss: 0.0062 - val_loss: 0.0325 - val_n_outputs0_loss: 0.0283 - val_n_outputs1_loss: 0.0042\nEpoch 21/100\n27/27 [==============================] - ETA: 0s - loss: 0.0349 - n_outputs0_loss: 0.0290 - n_outputs1_loss: 0.0058\nEpoch 00021: val_loss did not improve from 0.02880\n27/27 [==============================] - 3s 107ms/step - loss: 0.0349 - n_outputs0_loss: 0.0290 - n_outputs1_loss: 0.0058 - val_loss: 0.0293 - val_n_outputs0_loss: 0.0258 - val_n_outputs1_loss: 0.0035\nWARNING: CPU random generator seem to be failing, disable hardware random number generation\nWARNING: RDRND generated: 0xffffffff 0xffffffff 0xffffffff 0xffffffff\n\nreal 1m26.930s\nuser 1m30.911s\nsys 0m42.818s\n
"},{"location":"training-logs/msp-1-cpu/","title":"Training Log for MSP Car #1","text":"After cleanup we only got about 1,500 records. But here is a log of the training. It took about 1.5 minutes.
$ donkey train --tub=./data/msp-car-1 --model=./models/msp-car-1.f5\n
[DonkeyCar ASCII-art banner]\nusing donkey v4.2.1 ...\nloading config file: ./config.py\nloading personal config over-rides from myconfig.py\n\"get_model_by_type\" model Type is: linear\nCreated KerasLinear\n2021-07-26 19:50:45.562205: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA\n2021-07-26 19:50:45.565106: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 3592950000 Hz\n2021-07-26 19:50:45.565470: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55d85e9d19f0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:\n2021-07-26 19:50:45.565492: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version\n2021-07-26 19:50:45.565578: I tensorflow/core/common_runtime/process_util.cc:147] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.\nModel: \"model\"
"},{"location":"training-logs/msp-1-cpu/#layer-type-output-shape-param-connected-to","title":"Layer (type) Output Shape Param # Connected to","text":"img_in (InputLayer) [(None, 224, 224, 3) 0
conv2d_1 (Conv2D) (None, 110, 110, 24) 1824 img_in[0][0]
dropout (Dropout) (None, 110, 110, 24) 0 conv2d_1[0][0]
conv2d_2 (Conv2D) (None, 53, 53, 32) 19232 dropout[0][0]
dropout_1 (Dropout) (None, 53, 53, 32) 0 conv2d_2[0][0]
conv2d_3 (Conv2D) (None, 25, 25, 64) 51264 dropout_1[0][0]
dropout_2 (Dropout) (None, 25, 25, 64) 0 conv2d_3[0][0]
conv2d_4 (Conv2D) (None, 23, 23, 64) 36928 dropout_2[0][0]
dropout_3 (Dropout) (None, 23, 23, 64) 0 conv2d_4[0][0]
conv2d_5 (Conv2D) (None, 21, 21, 64) 36928 dropout_3[0][0]
dropout_4 (Dropout) (None, 21, 21, 64) 0 conv2d_5[0][0]
flattened (Flatten) (None, 28224) 0 dropout_4[0][0]
dense_1 (Dense) (None, 100) 2822500 flattened[0][0]
dropout_5 (Dropout) (None, 100) 0 dense_1[0][0]
dense_2 (Dense) (None, 50) 5050 dropout_5[0][0]
dropout_6 (Dropout) (None, 50) 0 dense_2[0][0]
n_outputs0 (Dense) (None, 1) 51 dropout_6[0][0]
"},{"location":"training-logs/msp-1-cpu/#n_outputs1-dense-none-1-51-dropout_600","title":"n_outputs1 (Dense) (None, 1) 51 dropout_6[0][0]","text":"Total params: 2,973,828 Trainable params: 2,973,828 Non-trainable params: 0
None Using catalog /home/arl/mycar/data/msp-car-1/catalog_17.catalog
Records # Training 1265 Records # Validation 317 Epoch 1/100 10/10 [==============================] - ETA: 0s - loss: 1.0885 - n_outputs0_loss: 0.5975 - n_outputs1_loss: 0.4909 Epoch 00001: val_loss improved from inf to 0.54341, saving model to ./models/msp-car-1.f5 2021-07-26 19:50:57.881390: W tensorflow/python/util/util.cc:329] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them. WARNING:tensorflow:From /home/arl/miniconda3/envs/donkey/lib/python3.7/site-packages/tensorflow/python/ops/resource_variable_ops.py:1817: calling BaseResourceVariable.init (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version. Instructions for updating: If using Keras pass *_constraint arguments to layers. 10/10 [==============================] - 11s 1s/step - loss: 1.0885 - n_outputs0_loss: 0.5975 - n_outputs1_loss: 0.4909 - val_loss: 0.5434 - val_n_outputs0_loss: 0.4668 - val_n_outputs1_loss: 0.0767 Epoch 2/100 10/10 [==============================] - ETA: 0s - loss: 0.5522 - n_outputs0_loss: 0.4640 - n_outputs1_loss: 0.0882 Epoch 00002: val_loss improved from 0.54341 to 0.53272, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 999ms/step - loss: 0.5522 - n_outputs0_loss: 0.4640 - n_outputs1_loss: 0.0882 - val_loss: 0.5327 - val_n_outputs0_loss: 0.4605 - val_n_outputs1_loss: 0.0722 Epoch 3/100 10/10 [==============================] - ETA: 0s - loss: 0.5392 - n_outputs0_loss: 0.4638 - n_outputs1_loss: 0.0754 Epoch 00003: val_loss improved from 0.53272 to 0.50775, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.5392 - n_outputs0_loss: 0.4638 - n_outputs1_loss: 0.0754 - val_loss: 0.5077 - val_n_outputs0_loss: 0.4551 - val_n_outputs1_loss: 0.0527 Epoch 4/100 10/10 [==============================] - ETA: 0s - loss: 0.5318 - n_outputs0_loss: 0.4605 - n_outputs1_loss: 0.0713 Epoch 00004: val_loss improved from 0.50775 to 0.49783, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 999ms/step - loss: 0.5318 - n_outputs0_loss: 0.4605 - n_outputs1_loss: 0.0713 - val_loss: 0.4978 - val_n_outputs0_loss: 0.4455 - val_n_outputs1_loss: 0.0523 Epoch 5/100 10/10 [==============================] - ETA: 0s - loss: 0.5333 - n_outputs0_loss: 0.4608 - n_outputs1_loss: 0.0725 Epoch 00005: val_loss improved from 0.49783 to 0.49721, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.5333 - n_outputs0_loss: 0.4608 - n_outputs1_loss: 0.0725 - val_loss: 0.4972 - val_n_outputs0_loss: 0.4451 - val_n_outputs1_loss: 0.0521 Epoch 6/100 10/10 [==============================] - ETA: 0s - loss: 0.5277 - n_outputs0_loss: 0.4619 - n_outputs1_loss: 0.0658 Epoch 00006: val_loss did not improve from 0.49721 10/10 [==============================] - 9s 934ms/step - loss: 0.5277 - n_outputs0_loss: 0.4619 - n_outputs1_loss: 0.0658 - val_loss: 0.4981 - val_n_outputs0_loss: 0.4461 - val_n_outputs1_loss: 0.0520 Epoch 7/100 10/10 [==============================] - ETA: 0s - loss: 0.5265 - n_outputs0_loss: 0.4577 - n_outputs1_loss: 0.0688 Epoch 00007: val_loss improved from 0.49721 to 0.49668, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.5265 - n_outputs0_loss: 0.4577 - n_outputs1_loss: 0.0688 - val_loss: 0.4967 - val_n_outputs0_loss: 0.4442 - val_n_outputs1_loss: 0.0525 Epoch 8/100 10/10 
[==============================] - ETA: 0s - loss: 0.5138 - n_outputs0_loss: 0.4467 - n_outputs1_loss: 0.0671 Epoch 00008: val_loss improved from 0.49668 to 0.49536, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.5138 - n_outputs0_loss: 0.4467 - n_outputs1_loss: 0.0671 - val_loss: 0.4954 - val_n_outputs0_loss: 0.4408 - val_n_outputs1_loss: 0.0546 Epoch 9/100 10/10 [==============================] - ETA: 0s - loss: 0.5109 - n_outputs0_loss: 0.4468 - n_outputs1_loss: 0.0642 Epoch 00009: val_loss improved from 0.49536 to 0.48741, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.5109 - n_outputs0_loss: 0.4468 - n_outputs1_loss: 0.0642 - val_loss: 0.4874 - val_n_outputs0_loss: 0.4353 - val_n_outputs1_loss: 0.0521 Epoch 10/100 10/10 [==============================] - ETA: 0s - loss: 0.5030 - n_outputs0_loss: 0.4405 - n_outputs1_loss: 0.0625 Epoch 00010: val_loss did not improve from 0.48741 10/10 [==============================] - 9s 930ms/step - loss: 0.5030 - n_outputs0_loss: 0.4405 - n_outputs1_loss: 0.0625 - val_loss: 0.4936 - val_n_outputs0_loss: 0.4351 - val_n_outputs1_loss: 0.0585 Epoch 11/100 10/10 [==============================] - ETA: 0s - loss: 0.4974 - n_outputs0_loss: 0.4310 - n_outputs1_loss: 0.0664 Epoch 00011: val_loss improved from 0.48741 to 0.47748, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 11s 1s/step - loss: 0.4974 - n_outputs0_loss: 0.4310 - n_outputs1_loss: 0.0664 - val_loss: 0.4775 - val_n_outputs0_loss: 0.4238 - val_n_outputs1_loss: 0.0536 Epoch 12/100 10/10 [==============================] - ETA: 0s - loss: 0.4887 - n_outputs0_loss: 0.4208 - n_outputs1_loss: 0.0679 Epoch 00012: val_loss did not improve from 0.47748 10/10 [==============================] - 9s 925ms/step - loss: 0.4887 - n_outputs0_loss: 0.4208 - n_outputs1_loss: 0.0679 - val_loss: 0.4836 - val_n_outputs0_loss: 0.4148 - val_n_outputs1_loss: 0.0687 Epoch 13/100 10/10 [==============================] - ETA: 0s - loss: 0.4591 - n_outputs0_loss: 0.3927 - n_outputs1_loss: 0.0664 Epoch 00013: val_loss improved from 0.47748 to 0.40567, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.4591 - n_outputs0_loss: 0.3927 - n_outputs1_loss: 0.0664 - val_loss: 0.4057 - val_n_outputs0_loss: 0.3540 - val_n_outputs1_loss: 0.0516 Epoch 14/100 10/10 [==============================] - ETA: 0s - loss: 0.4323 - n_outputs0_loss: 0.3665 - n_outputs1_loss: 0.0658 Epoch 00014: val_loss improved from 0.40567 to 0.37099, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.4323 - n_outputs0_loss: 0.3665 - n_outputs1_loss: 0.0658 - val_loss: 0.3710 - val_n_outputs0_loss: 0.3153 - val_n_outputs1_loss: 0.0556 Epoch 15/100 10/10 [==============================] - ETA: 0s - loss: 0.3754 - n_outputs0_loss: 0.3063 - n_outputs1_loss: 0.0691 Epoch 00015: val_loss improved from 0.37099 to 0.33956, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.3754 - n_outputs0_loss: 0.3063 - n_outputs1_loss: 0.0691 - val_loss: 0.3396 - val_n_outputs0_loss: 0.2853 - val_n_outputs1_loss: 0.0542 Epoch 16/100 10/10 [==============================] - ETA: 0s - loss: 0.3314 - n_outputs0_loss: 0.2723 - n_outputs1_loss: 0.0591 Epoch 00016: val_loss improved from 0.33956 to 0.30289, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - 
loss: 0.3314 - n_outputs0_loss: 0.2723 - n_outputs1_loss: 0.0591 - val_loss: 0.3029 - val_n_outputs0_loss: 0.2524 - val_n_outputs1_loss: 0.0505 Epoch 17/100 10/10 [==============================] - ETA: 0s - loss: 0.3168 - n_outputs0_loss: 0.2591 - n_outputs1_loss: 0.0576 Epoch 00017: val_loss improved from 0.30289 to 0.28694, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.3168 - n_outputs0_loss: 0.2591 - n_outputs1_loss: 0.0576 - val_loss: 0.2869 - val_n_outputs0_loss: 0.2390 - val_n_outputs1_loss: 0.0479 Epoch 18/100 10/10 [==============================] - ETA: 0s - loss: 0.2990 - n_outputs0_loss: 0.2446 - n_outputs1_loss: 0.0544 Epoch 00018: val_loss improved from 0.28694 to 0.27270, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.2990 - n_outputs0_loss: 0.2446 - n_outputs1_loss: 0.0544 - val_loss: 0.2727 - val_n_outputs0_loss: 0.2257 - val_n_outputs1_loss: 0.0470 Epoch 19/100 10/10 [==============================] - ETA: 0s - loss: 0.2706 - n_outputs0_loss: 0.2185 - n_outputs1_loss: 0.0521 Epoch 00019: val_loss improved from 0.27270 to 0.25193, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.2706 - n_outputs0_loss: 0.2185 - n_outputs1_loss: 0.0521 - val_loss: 0.2519 - val_n_outputs0_loss: 0.2099 - val_n_outputs1_loss: 0.0421 Epoch 20/100 10/10 [==============================] - ETA: 0s - loss: 0.2602 - n_outputs0_loss: 0.2112 - n_outputs1_loss: 0.0490 Epoch 00020: val_loss improved from 0.25193 to 0.23899, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.2602 - n_outputs0_loss: 0.2112 - n_outputs1_loss: 0.0490 - val_loss: 0.2390 - val_n_outputs0_loss: 0.1974 - val_n_outputs1_loss: 0.0416 Epoch 21/100 10/10 [==============================] - ETA: 0s - loss: 0.2345 - n_outputs0_loss: 0.1866 - n_outputs1_loss: 0.0479 Epoch 00021: val_loss improved from 0.23899 to 0.23396, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.2345 - n_outputs0_loss: 0.1866 - n_outputs1_loss: 0.0479 - val_loss: 0.2340 - val_n_outputs0_loss: 0.1911 - val_n_outputs1_loss: 0.0428 Epoch 22/100 10/10 [==============================] - ETA: 0s - loss: 0.2229 - n_outputs0_loss: 0.1758 - n_outputs1_loss: 0.0471 Epoch 00022: val_loss improved from 0.23396 to 0.22651, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.2229 - n_outputs0_loss: 0.1758 - n_outputs1_loss: 0.0471 - val_loss: 0.2265 - val_n_outputs0_loss: 0.1858 - val_n_outputs1_loss: 0.0407 Epoch 23/100 10/10 [==============================] - ETA: 0s - loss: 0.2175 - n_outputs0_loss: 0.1730 - n_outputs1_loss: 0.0445 Epoch 00023: val_loss improved from 0.22651 to 0.22245, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.2175 - n_outputs0_loss: 0.1730 - n_outputs1_loss: 0.0445 - val_loss: 0.2225 - val_n_outputs0_loss: 0.1806 - val_n_outputs1_loss: 0.0419 Epoch 24/100 10/10 [==============================] - ETA: 0s - loss: 0.2084 - n_outputs0_loss: 0.1624 - n_outputs1_loss: 0.0460 Epoch 00024: val_loss improved from 0.22245 to 0.20674, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.2084 - n_outputs0_loss: 0.1624 - n_outputs1_loss: 0.0460 - val_loss: 0.2067 - val_n_outputs0_loss: 0.1694 - val_n_outputs1_loss: 0.0374 Epoch 25/100 10/10 
[==============================] - ETA: 0s - loss: 0.1889 - n_outputs0_loss: 0.1457 - n_outputs1_loss: 0.0432 Epoch 00025: val_loss improved from 0.20674 to 0.20416, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.1889 - n_outputs0_loss: 0.1457 - n_outputs1_loss: 0.0432 - val_loss: 0.2042 - val_n_outputs0_loss: 0.1638 - val_n_outputs1_loss: 0.0403 Epoch 26/100 10/10 [==============================] - ETA: 0s - loss: 0.1882 - n_outputs0_loss: 0.1467 - n_outputs1_loss: 0.0414 Epoch 00026: val_loss improved from 0.20416 to 0.19422, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.1882 - n_outputs0_loss: 0.1467 - n_outputs1_loss: 0.0414 - val_loss: 0.1942 - val_n_outputs0_loss: 0.1557 - val_n_outputs1_loss: 0.0385 Epoch 27/100 10/10 [==============================] - ETA: 0s - loss: 0.1706 - n_outputs0_loss: 0.1328 - n_outputs1_loss: 0.0378 Epoch 00027: val_loss did not improve from 0.19422 10/10 [==============================] - 9s 930ms/step - loss: 0.1706 - n_outputs0_loss: 0.1328 - n_outputs1_loss: 0.0378 - val_loss: 0.2016 - val_n_outputs0_loss: 0.1615 - val_n_outputs1_loss: 0.0401 Epoch 28/100 10/10 [==============================] - ETA: 0s - loss: 0.1630 - n_outputs0_loss: 0.1248 - n_outputs1_loss: 0.0382 Epoch 00028: val_loss improved from 0.19422 to 0.18035, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.1630 - n_outputs0_loss: 0.1248 - n_outputs1_loss: 0.0382 - val_loss: 0.1803 - val_n_outputs0_loss: 0.1445 - val_n_outputs1_loss: 0.0358 Epoch 29/100 10/10 [==============================] - ETA: 0s - loss: 0.1601 - n_outputs0_loss: 0.1219 - n_outputs1_loss: 0.0382 Epoch 00029: val_loss improved from 0.18035 to 0.17528, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.1601 - n_outputs0_loss: 0.1219 - n_outputs1_loss: 0.0382 - val_loss: 0.1753 - val_n_outputs0_loss: 0.1410 - val_n_outputs1_loss: 0.0343 Epoch 30/100 10/10 [==============================] - ETA: 0s - loss: 0.1483 - n_outputs0_loss: 0.1117 - n_outputs1_loss: 0.0366 Epoch 00030: val_loss improved from 0.17528 to 0.17039, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.1483 - n_outputs0_loss: 0.1117 - n_outputs1_loss: 0.0366 - val_loss: 0.1704 - val_n_outputs0_loss: 0.1372 - val_n_outputs1_loss: 0.0332 Epoch 31/100 10/10 [==============================] - ETA: 0s - loss: 0.1481 - n_outputs0_loss: 0.1114 - n_outputs1_loss: 0.0368 Epoch 00031: val_loss did not improve from 0.17039 10/10 [==============================] - 9s 915ms/step - loss: 0.1481 - n_outputs0_loss: 0.1114 - n_outputs1_loss: 0.0368 - val_loss: 0.1783 - val_n_outputs0_loss: 0.1436 - val_n_outputs1_loss: 0.0347 Epoch 32/100 10/10 [==============================] - ETA: 0s - loss: 0.1470 - n_outputs0_loss: 0.1111 - n_outputs1_loss: 0.0358 Epoch 00032: val_loss improved from 0.17039 to 0.16278, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.1470 - n_outputs0_loss: 0.1111 - n_outputs1_loss: 0.0358 - val_loss: 0.1628 - val_n_outputs0_loss: 0.1301 - val_n_outputs1_loss: 0.0327 Epoch 33/100 10/10 [==============================] - ETA: 0s - loss: 0.1368 - n_outputs0_loss: 0.1027 - n_outputs1_loss: 0.0341 Epoch 00033: val_loss did not improve from 0.16278 10/10 [==============================] - 9s 928ms/step - loss: 0.1368 - n_outputs0_loss: 0.1027 - 
n_outputs1_loss: 0.0341 - val_loss: 0.1666 - val_n_outputs0_loss: 0.1345 - val_n_outputs1_loss: 0.0320 Epoch 34/100 10/10 [==============================] - ETA: 0s - loss: 0.1305 - n_outputs0_loss: 0.0971 - n_outputs1_loss: 0.0334 Epoch 00034: val_loss did not improve from 0.16278 10/10 [==============================] - 9s 929ms/step - loss: 0.1305 - n_outputs0_loss: 0.0971 - n_outputs1_loss: 0.0334 - val_loss: 0.1728 - val_n_outputs0_loss: 0.1413 - val_n_outputs1_loss: 0.0315 Epoch 35/100 10/10 [==============================] - ETA: 0s - loss: 0.1353 - n_outputs0_loss: 0.1027 - n_outputs1_loss: 0.0326 Epoch 00035: val_loss did not improve from 0.16278 10/10 [==============================] - 9s 933ms/step - loss: 0.1353 - n_outputs0_loss: 0.1027 - n_outputs1_loss: 0.0326 - val_loss: 0.1706 - val_n_outputs0_loss: 0.1391 - val_n_outputs1_loss: 0.0315 Epoch 36/100 10/10 [==============================] - ETA: 0s - loss: 0.1319 - n_outputs0_loss: 0.0989 - n_outputs1_loss: 0.0331 Epoch 00036: val_loss did not improve from 0.16278 10/10 [==============================] - 9s 936ms/step - loss: 0.1319 - n_outputs0_loss: 0.0989 - n_outputs1_loss: 0.0331 - val_loss: 0.1729 - val_n_outputs0_loss: 0.1401 - val_n_outputs1_loss: 0.0328 Epoch 37/100 10/10 [==============================] - ETA: 0s - loss: 0.1290 - n_outputs0_loss: 0.0952 - n_outputs1_loss: 0.0338 Epoch 00037: val_loss did not improve from 0.16278 10/10 [==============================] - 9s 929ms/step - loss: 0.1290 - n_outputs0_loss: 0.0952 - n_outputs1_loss: 0.0338 - val_loss: 0.1709 - val_n_outputs0_loss: 0.1381 - val_n_outputs1_loss: 0.0327 WARNING: CPU random generator seem to be failing, disable hardware random number generation WARNING: RDRND generated: 0xffffffff 0xffffffff 0xffffffff 0xffffffff (donkey) arl@arl1: ```
"},{"location":"training-logs/msp-car-1/","title":"Minnesota STEM Partners Car 1 Training Log","text":"time donkey train --tub=./data/msp-car-1 --model=./models/msp-car-1.h5\n________ ______ _________ \n___ __ \\_______________ /___________ __ __ ____/_____ ________\n__ / / / __ \\_ __ \\_ //_/ _ \\_ / / / _ / _ __ `/_ ___/\n_ /_/ // /_/ / / / / ,< / __/ /_/ / / /___ / /_/ /_ / \n/_____/ \\____//_/ /_//_/|_| \\___/_\\__, / \\____/ \\__,_/ /_/ \n /____/ \n\nusing donkey v4.2.1 ...\nloading config file: ./config.py\nloading personal config over-rides from myconfig.py\n\"get_model_by_type\" model Type is: linear\nCreated KerasLinear\n2021-07-26 21:18:57.390998: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1\n2021-07-26 21:18:57.409838: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:18:57.410285: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: \npciBusID: 0000:09:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5\ncoreClock: 1.635GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s\n2021-07-26 21:18:57.410424: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n2021-07-26 21:18:57.411314: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n2021-07-26 21:18:57.412358: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n2021-07-26 21:18:57.412506: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n2021-07-26 21:18:57.413323: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n2021-07-26 21:18:57.413712: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\n2021-07-26 21:18:57.415437: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n2021-07-26 21:18:57.415619: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:18:57.416133: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:18:57.416523: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0\n2021-07-26 21:18:57.416750: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA\n2021-07-26 21:18:57.420820: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 3592950000 Hz\n2021-07-26 21:18:57.421125: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5629cbee7970 initialized for platform Host (this does not guarantee that XLA will be used). 
Devices:\n2021-07-26 21:18:57.421136: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version\n2021-07-26 21:18:57.421270: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:18:57.421679: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: \npciBusID: 0000:09:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5\ncoreClock: 1.635GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s\n2021-07-26 21:18:57.421712: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n2021-07-26 21:18:57.421724: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n2021-07-26 21:18:57.421735: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n2021-07-26 21:18:57.421746: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n2021-07-26 21:18:57.421756: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n2021-07-26 21:18:57.421766: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\n2021-07-26 21:18:57.421776: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n2021-07-26 21:18:57.421840: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:18:57.422285: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:18:57.422675: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0\n2021-07-26 21:18:57.422700: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n2021-07-26 21:18:57.504507: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:\n2021-07-26 21:18:57.504534: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108] 0 \n2021-07-26 21:18:57.504541: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0: N \n2021-07-26 21:18:57.504754: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:18:57.505207: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:18:57.505632: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:18:57.506019: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9892 MB memory) -> 
physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:09:00.0, compute capability: 7.5)\n2021-07-26 21:18:57.507379: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5629cdd66f30 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:\n2021-07-26 21:18:57.507389: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): NVIDIA GeForce RTX 2080 Ti, Compute Capability 7.5\nModel: \"model\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\nimg_in (InputLayer) [(None, 224, 224, 3) 0 \n__________________________________________________________________________________________________\nconv2d_1 (Conv2D) (None, 110, 110, 24) 1824 img_in[0][0] \n__________________________________________________________________________________________________\ndropout (Dropout) (None, 110, 110, 24) 0 conv2d_1[0][0] \n__________________________________________________________________________________________________\nconv2d_2 (Conv2D) (None, 53, 53, 32) 19232 dropout[0][0] \n__________________________________________________________________________________________________\ndropout_1 (Dropout) (None, 53, 53, 32) 0 conv2d_2[0][0] \n__________________________________________________________________________________________________\nconv2d_3 (Conv2D) (None, 25, 25, 64) 51264 dropout_1[0][0] \n__________________________________________________________________________________________________\ndropout_2 (Dropout) (None, 25, 25, 64) 0 conv2d_3[0][0] \n__________________________________________________________________________________________________\nconv2d_4 (Conv2D) (None, 23, 23, 64) 36928 dropout_2[0][0] \n__________________________________________________________________________________________________\ndropout_3 (Dropout) (None, 23, 23, 64) 0 conv2d_4[0][0] \n__________________________________________________________________________________________________\nconv2d_5 (Conv2D) (None, 21, 21, 64) 36928 dropout_3[0][0] \n__________________________________________________________________________________________________\ndropout_4 (Dropout) (None, 21, 21, 64) 0 conv2d_5[0][0] \n__________________________________________________________________________________________________\nflattened (Flatten) (None, 28224) 0 dropout_4[0][0] \n__________________________________________________________________________________________________\ndense_1 (Dense) (None, 100) 2822500 flattened[0][0] \n__________________________________________________________________________________________________\ndropout_5 (Dropout) (None, 100) 0 dense_1[0][0] \n__________________________________________________________________________________________________\ndense_2 (Dense) (None, 50) 5050 dropout_5[0][0] \n__________________________________________________________________________________________________\ndropout_6 (Dropout) (None, 50) 0 dense_2[0][0] \n__________________________________________________________________________________________________\nn_outputs0 (Dense) (None, 1) 51 dropout_6[0][0] \n__________________________________________________________________________________________________\nn_outputs1 (Dense) (None, 1) 51 dropout_6[0][0] \n==================================================================================================\nTotal params: 
2,973,828\nTrainable params: 2,973,828\nNon-trainable params: 0\n__________________________________________________________________________________________________\nNone\nUsing catalog /home/arl/mycar/data/msp-car-1/catalog_17.catalog\n\nRecords # Training 1265\nRecords # Validation 317\nEpoch 1/100\n2021-07-26 21:18:58.397797: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n2021-07-26 21:18:58.705078: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n2021-07-26 21:18:59.429125: W tensorflow/stream_executor/gpu/asm_compiler.cc:116] *** WARNING *** You are using ptxas 9.1.108, which is older than 9.2.88. ptxas 9.x before 9.2.88 is known to miscompile XLA code, leading to incorrect results or invalid-address errors.\n\nYou do not need to update to CUDA 9.2.88; cherry-picking the ptxas binary is sufficient.\n2021-07-26 21:18:59.481809: W tensorflow/stream_executor/gpu/redzone_allocator.cc:314] Internal: ptxas exited with non-zero error code 65280, output: ptxas fatal : Value 'sm_75' is not defined for option 'gpu-name'\n\nRelying on driver to perform ptx compilation. \nModify $PATH to customize ptxas location.\nThis message will be only logged once.\n10/10 [==============================] - ETA: 0s - loss: 0.6674 - n_outputs0_loss: 0.5162 - n_outputs1_loss: 0.1512\nEpoch 00001: val_loss improved from inf to 0.60297, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 3s 288ms/step - loss: 0.6674 - n_outputs0_loss: 0.5162 - n_outputs1_loss: 0.1512 - val_loss: 0.6030 - val_n_outputs0_loss: 0.4514 - val_n_outputs1_loss: 0.1516\nEpoch 2/100\n10/10 [==============================] - ETA: 0s - loss: 0.6050 - n_outputs0_loss: 0.5074 - n_outputs1_loss: 0.0976\nEpoch 00002: val_loss improved from 0.60297 to 0.51595, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 124ms/step - loss: 0.6050 - n_outputs0_loss: 0.5074 - n_outputs1_loss: 0.0976 - val_loss: 0.5160 - val_n_outputs0_loss: 0.4188 - val_n_outputs1_loss: 0.0972\nEpoch 3/100\n10/10 [==============================] - ETA: 0s - loss: 0.5707 - n_outputs0_loss: 0.4923 - n_outputs1_loss: 0.0784\nEpoch 00003: val_loss improved from 0.51595 to 0.50280, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 110ms/step - loss: 0.5707 - n_outputs0_loss: 0.4923 - n_outputs1_loss: 0.0784 - val_loss: 0.5028 - val_n_outputs0_loss: 0.4224 - val_n_outputs1_loss: 0.0804\nEpoch 4/100\n10/10 [==============================] - ETA: 0s - loss: 0.5615 - n_outputs0_loss: 0.4917 - n_outputs1_loss: 0.0698\nEpoch 00004: val_loss improved from 0.50280 to 0.49159, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 110ms/step - loss: 0.5615 - n_outputs0_loss: 0.4917 - n_outputs1_loss: 0.0698 - val_loss: 0.4916 - val_n_outputs0_loss: 0.4203 - val_n_outputs1_loss: 0.0713\nEpoch 5/100\n10/10 [==============================] - ETA: 0s - loss: 0.5541 - n_outputs0_loss: 0.4854 - n_outputs1_loss: 0.0687\nEpoch 00005: val_loss improved from 0.49159 to 0.48784, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 104ms/step - loss: 0.5541 - n_outputs0_loss: 0.4854 - n_outputs1_loss: 0.0687 - val_loss: 0.4878 - val_n_outputs0_loss: 0.4107 - val_n_outputs1_loss: 0.0772\nEpoch 6/100\n10/10 [==============================] - ETA: 0s - loss: 0.5527 - n_outputs0_loss: 0.4827 - n_outputs1_loss: 0.0701\nEpoch 
00006: val_loss improved from 0.48784 to 0.48521, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 109ms/step - loss: 0.5527 - n_outputs0_loss: 0.4827 - n_outputs1_loss: 0.0701 - val_loss: 0.4852 - val_n_outputs0_loss: 0.4127 - val_n_outputs1_loss: 0.0725\nEpoch 7/100\n10/10 [==============================] - ETA: 0s - loss: 0.5405 - n_outputs0_loss: 0.4764 - n_outputs1_loss: 0.0641\nEpoch 00007: val_loss improved from 0.48521 to 0.48270, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 110ms/step - loss: 0.5405 - n_outputs0_loss: 0.4764 - n_outputs1_loss: 0.0641 - val_loss: 0.4827 - val_n_outputs0_loss: 0.4097 - val_n_outputs1_loss: 0.0730\nEpoch 8/100\n10/10 [==============================] - ETA: 0s - loss: 0.5383 - n_outputs0_loss: 0.4724 - n_outputs1_loss: 0.0659\nEpoch 00008: val_loss improved from 0.48270 to 0.47415, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 109ms/step - loss: 0.5383 - n_outputs0_loss: 0.4724 - n_outputs1_loss: 0.0659 - val_loss: 0.4741 - val_n_outputs0_loss: 0.4026 - val_n_outputs1_loss: 0.0715\nEpoch 9/100\n10/10 [==============================] - ETA: 0s - loss: 0.5288 - n_outputs0_loss: 0.4640 - n_outputs1_loss: 0.0648\nEpoch 00009: val_loss did not improve from 0.47415\n10/10 [==============================] - 1s 101ms/step - loss: 0.5288 - n_outputs0_loss: 0.4640 - n_outputs1_loss: 0.0648 - val_loss: 0.4780 - val_n_outputs0_loss: 0.4069 - val_n_outputs1_loss: 0.0711\nEpoch 10/100\n10/10 [==============================] - ETA: 0s - loss: 0.5344 - n_outputs0_loss: 0.4677 - n_outputs1_loss: 0.0667\nEpoch 00010: val_loss improved from 0.47415 to 0.45939, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 111ms/step - loss: 0.5344 - n_outputs0_loss: 0.4677 - n_outputs1_loss: 0.0667 - val_loss: 0.4594 - val_n_outputs0_loss: 0.3903 - val_n_outputs1_loss: 0.0691\nEpoch 11/100\n10/10 [==============================] - ETA: 0s - loss: 0.5014 - n_outputs0_loss: 0.4349 - n_outputs1_loss: 0.0666\nEpoch 00011: val_loss improved from 0.45939 to 0.44304, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 112ms/step - loss: 0.5014 - n_outputs0_loss: 0.4349 - n_outputs1_loss: 0.0666 - val_loss: 0.4430 - val_n_outputs0_loss: 0.3672 - val_n_outputs1_loss: 0.0758\nEpoch 12/100\n10/10 [==============================] - ETA: 0s - loss: 0.4585 - n_outputs0_loss: 0.3847 - n_outputs1_loss: 0.0738\nEpoch 00012: val_loss improved from 0.44304 to 0.36563, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 109ms/step - loss: 0.4585 - n_outputs0_loss: 0.3847 - n_outputs1_loss: 0.0738 - val_loss: 0.3656 - val_n_outputs0_loss: 0.2934 - val_n_outputs1_loss: 0.0723\nEpoch 13/100\n10/10 [==============================] - ETA: 0s - loss: 0.3922 - n_outputs0_loss: 0.3257 - n_outputs1_loss: 0.0664\nEpoch 00013: val_loss improved from 0.36563 to 0.30773, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 111ms/step - loss: 0.3922 - n_outputs0_loss: 0.3257 - n_outputs1_loss: 0.0664 - val_loss: 0.3077 - val_n_outputs0_loss: 0.2463 - val_n_outputs1_loss: 0.0614\nEpoch 14/100\n10/10 [==============================] - ETA: 0s - loss: 0.3662 - n_outputs0_loss: 0.3052 - n_outputs1_loss: 0.0610\nEpoch 00014: val_loss improved from 0.30773 to 0.27574, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 107ms/step - loss: 0.3662 - 
n_outputs0_loss: 0.3052 - n_outputs1_loss: 0.0610 - val_loss: 0.2757 - val_n_outputs0_loss: 0.2294 - val_n_outputs1_loss: 0.0463\nEpoch 15/100\n10/10 [==============================] - ETA: 0s - loss: 0.3233 - n_outputs0_loss: 0.2626 - n_outputs1_loss: 0.0607\nEpoch 00015: val_loss improved from 0.27574 to 0.24205, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 110ms/step - loss: 0.3233 - n_outputs0_loss: 0.2626 - n_outputs1_loss: 0.0607 - val_loss: 0.2421 - val_n_outputs0_loss: 0.1966 - val_n_outputs1_loss: 0.0454\nEpoch 16/100\n10/10 [==============================] - ETA: 0s - loss: 0.3078 - n_outputs0_loss: 0.2500 - n_outputs1_loss: 0.0577\nEpoch 00016: val_loss did not improve from 0.24205\n10/10 [==============================] - 1s 100ms/step - loss: 0.3078 - n_outputs0_loss: 0.2500 - n_outputs1_loss: 0.0577 - val_loss: 0.2473 - val_n_outputs0_loss: 0.2023 - val_n_outputs1_loss: 0.0450\nEpoch 17/100\n10/10 [==============================] - ETA: 0s - loss: 0.2959 - n_outputs0_loss: 0.2404 - n_outputs1_loss: 0.0555\nEpoch 00017: val_loss improved from 0.24205 to 0.22809, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 113ms/step - loss: 0.2959 - n_outputs0_loss: 0.2404 - n_outputs1_loss: 0.0555 - val_loss: 0.2281 - val_n_outputs0_loss: 0.1842 - val_n_outputs1_loss: 0.0438\nEpoch 18/100\n10/10 [==============================] - ETA: 0s - loss: 0.2820 - n_outputs0_loss: 0.2280 - n_outputs1_loss: 0.0540\nEpoch 00018: val_loss improved from 0.22809 to 0.21671, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 107ms/step - loss: 0.2820 - n_outputs0_loss: 0.2280 - n_outputs1_loss: 0.0540 - val_loss: 0.2167 - val_n_outputs0_loss: 0.1768 - val_n_outputs1_loss: 0.0400\nEpoch 19/100\n10/10 [==============================] - ETA: 0s - loss: 0.2568 - n_outputs0_loss: 0.2044 - n_outputs1_loss: 0.0524\nEpoch 00019: val_loss did not improve from 0.21671\n10/10 [==============================] - 1s 99ms/step - loss: 0.2568 - n_outputs0_loss: 0.2044 - n_outputs1_loss: 0.0524 - val_loss: 0.2190 - val_n_outputs0_loss: 0.1788 - val_n_outputs1_loss: 0.0402\nEpoch 20/100\n10/10 [==============================] - ETA: 0s - loss: 0.2621 - n_outputs0_loss: 0.2123 - n_outputs1_loss: 0.0499\nEpoch 00020: val_loss improved from 0.21671 to 0.21046, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 113ms/step - loss: 0.2621 - n_outputs0_loss: 0.2123 - n_outputs1_loss: 0.0499 - val_loss: 0.2105 - val_n_outputs0_loss: 0.1718 - val_n_outputs1_loss: 0.0386\nEpoch 21/100\n10/10 [==============================] - ETA: 0s - loss: 0.2521 - n_outputs0_loss: 0.2052 - n_outputs1_loss: 0.0469\nEpoch 00021: val_loss improved from 0.21046 to 0.20605, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 111ms/step - loss: 0.2521 - n_outputs0_loss: 0.2052 - n_outputs1_loss: 0.0469 - val_loss: 0.2060 - val_n_outputs0_loss: 0.1675 - val_n_outputs1_loss: 0.0385\nEpoch 22/100\n10/10 [==============================] - ETA: 0s - loss: 0.2261 - n_outputs0_loss: 0.1781 - n_outputs1_loss: 0.0480\nEpoch 00022: val_loss improved from 0.20605 to 0.20553, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 106ms/step - loss: 0.2261 - n_outputs0_loss: 0.1781 - n_outputs1_loss: 0.0480 - val_loss: 0.2055 - val_n_outputs0_loss: 0.1711 - val_n_outputs1_loss: 0.0344\nEpoch 23/100\n10/10 [==============================] - ETA: 0s - loss: 0.2222 - 
n_outputs0_loss: 0.1794 - n_outputs1_loss: 0.0429\nEpoch 00023: val_loss improved from 0.20553 to 0.20273, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 109ms/step - loss: 0.2222 - n_outputs0_loss: 0.1794 - n_outputs1_loss: 0.0429 - val_loss: 0.2027 - val_n_outputs0_loss: 0.1697 - val_n_outputs1_loss: 0.0331\nEpoch 24/100\n10/10 [==============================] - ETA: 0s - loss: 0.2126 - n_outputs0_loss: 0.1698 - n_outputs1_loss: 0.0428\nEpoch 00024: val_loss improved from 0.20273 to 0.19049, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 105ms/step - loss: 0.2126 - n_outputs0_loss: 0.1698 - n_outputs1_loss: 0.0428 - val_loss: 0.1905 - val_n_outputs0_loss: 0.1562 - val_n_outputs1_loss: 0.0343\nEpoch 25/100\n10/10 [==============================] - ETA: 0s - loss: 0.2062 - n_outputs0_loss: 0.1658 - n_outputs1_loss: 0.0404\nEpoch 00025: val_loss improved from 0.19049 to 0.18404, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 109ms/step - loss: 0.2062 - n_outputs0_loss: 0.1658 - n_outputs1_loss: 0.0404 - val_loss: 0.1840 - val_n_outputs0_loss: 0.1488 - val_n_outputs1_loss: 0.0352\nEpoch 26/100\n10/10 [==============================] - ETA: 0s - loss: 0.1928 - n_outputs0_loss: 0.1555 - n_outputs1_loss: 0.0372\nEpoch 00026: val_loss did not improve from 0.18404\n10/10 [==============================] - 1s 102ms/step - loss: 0.1928 - n_outputs0_loss: 0.1555 - n_outputs1_loss: 0.0372 - val_loss: 0.1907 - val_n_outputs0_loss: 0.1563 - val_n_outputs1_loss: 0.0344\nEpoch 27/100\n10/10 [==============================] - ETA: 0s - loss: 0.1834 - n_outputs0_loss: 0.1428 - n_outputs1_loss: 0.0406\nEpoch 00027: val_loss did not improve from 0.18404\n10/10 [==============================] - 1s 103ms/step - loss: 0.1834 - n_outputs0_loss: 0.1428 - n_outputs1_loss: 0.0406 - val_loss: 0.1922 - val_n_outputs0_loss: 0.1527 - val_n_outputs1_loss: 0.0396\nEpoch 28/100\n10/10 [==============================] - ETA: 0s - loss: 0.1668 - n_outputs0_loss: 0.1282 - n_outputs1_loss: 0.0386\nEpoch 00028: val_loss improved from 0.18404 to 0.17462, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 113ms/step - loss: 0.1668 - n_outputs0_loss: 0.1282 - n_outputs1_loss: 0.0386 - val_loss: 0.1746 - val_n_outputs0_loss: 0.1436 - val_n_outputs1_loss: 0.0311\nEpoch 29/100\n10/10 [==============================] - ETA: 0s - loss: 0.1654 - n_outputs0_loss: 0.1282 - n_outputs1_loss: 0.0372\nEpoch 00029: val_loss improved from 0.17462 to 0.17365, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 107ms/step - loss: 0.1654 - n_outputs0_loss: 0.1282 - n_outputs1_loss: 0.0372 - val_loss: 0.1736 - val_n_outputs0_loss: 0.1432 - val_n_outputs1_loss: 0.0305\nEpoch 30/100\n10/10 [==============================] - ETA: 0s - loss: 0.1615 - n_outputs0_loss: 0.1250 - n_outputs1_loss: 0.0364\nEpoch 00030: val_loss did not improve from 0.17365\n10/10 [==============================] - 1s 96ms/step - loss: 0.1615 - n_outputs0_loss: 0.1250 - n_outputs1_loss: 0.0364 - val_loss: 0.1799 - val_n_outputs0_loss: 0.1493 - val_n_outputs1_loss: 0.0306\nEpoch 31/100\n10/10 [==============================] - ETA: 0s - loss: 0.1495 - n_outputs0_loss: 0.1162 - n_outputs1_loss: 0.0332\nEpoch 00031: val_loss improved from 0.17365 to 0.17255, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 112ms/step - loss: 0.1495 - n_outputs0_loss: 0.1162 - n_outputs1_loss: 
0.0332 - val_loss: 0.1726 - val_n_outputs0_loss: 0.1383 - val_n_outputs1_loss: 0.0342\nEpoch 32/100\n10/10 [==============================] - ETA: 0s - loss: 0.1453 - n_outputs0_loss: 0.1121 - n_outputs1_loss: 0.0333\nEpoch 00032: val_loss did not improve from 0.17255\n10/10 [==============================] - 1s 104ms/step - loss: 0.1453 - n_outputs0_loss: 0.1121 - n_outputs1_loss: 0.0333 - val_loss: 0.1764 - val_n_outputs0_loss: 0.1456 - val_n_outputs1_loss: 0.0308\nEpoch 33/100\n10/10 [==============================] - ETA: 0s - loss: 0.1346 - n_outputs0_loss: 0.1043 - n_outputs1_loss: 0.0303\nEpoch 00033: val_loss improved from 0.17255 to 0.17092, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 115ms/step - loss: 0.1346 - n_outputs0_loss: 0.1043 - n_outputs1_loss: 0.0303 - val_loss: 0.1709 - val_n_outputs0_loss: 0.1395 - val_n_outputs1_loss: 0.0315\nEpoch 34/100\n10/10 [==============================] - ETA: 0s - loss: 0.1293 - n_outputs0_loss: 0.0991 - n_outputs1_loss: 0.0302\nEpoch 00034: val_loss improved from 0.17092 to 0.16704, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 109ms/step - loss: 0.1293 - n_outputs0_loss: 0.0991 - n_outputs1_loss: 0.0302 - val_loss: 0.1670 - val_n_outputs0_loss: 0.1342 - val_n_outputs1_loss: 0.0329\nEpoch 35/100\n10/10 [==============================] - ETA: 0s - loss: 0.1196 - n_outputs0_loss: 0.0890 - n_outputs1_loss: 0.0306\nEpoch 00035: val_loss improved from 0.16704 to 0.15917, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 110ms/step - loss: 0.1196 - n_outputs0_loss: 0.0890 - n_outputs1_loss: 0.0306 - val_loss: 0.1592 - val_n_outputs0_loss: 0.1280 - val_n_outputs1_loss: 0.0311\nEpoch 36/100\n10/10 [==============================] - ETA: 0s - loss: 0.1086 - n_outputs0_loss: 0.0805 - n_outputs1_loss: 0.0281\nEpoch 00036: val_loss improved from 0.15917 to 0.15774, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 114ms/step - loss: 0.1086 - n_outputs0_loss: 0.0805 - n_outputs1_loss: 0.0281 - val_loss: 0.1577 - val_n_outputs0_loss: 0.1264 - val_n_outputs1_loss: 0.0313\nEpoch 37/100\n10/10 [==============================] - ETA: 0s - loss: 0.1032 - n_outputs0_loss: 0.0753 - n_outputs1_loss: 0.0279\nEpoch 00037: val_loss did not improve from 0.15774\n10/10 [==============================] - 1s 99ms/step - loss: 0.1032 - n_outputs0_loss: 0.0753 - n_outputs1_loss: 0.0279 - val_loss: 0.1598 - val_n_outputs0_loss: 0.1281 - val_n_outputs1_loss: 0.0317\nEpoch 38/100\n10/10 [==============================] - ETA: 0s - loss: 0.1050 - n_outputs0_loss: 0.0783 - n_outputs1_loss: 0.0266\nEpoch 00038: val_loss did not improve from 0.15774\n10/10 [==============================] - 1s 105ms/step - loss: 0.1050 - n_outputs0_loss: 0.0783 - n_outputs1_loss: 0.0266 - val_loss: 0.1586 - val_n_outputs0_loss: 0.1269 - val_n_outputs1_loss: 0.0317\nEpoch 39/100\n10/10 [==============================] - ETA: 0s - loss: 0.0983 - n_outputs0_loss: 0.0722 - n_outputs1_loss: 0.0261\nEpoch 00039: val_loss improved from 0.15774 to 0.15441, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 111ms/step - loss: 0.0983 - n_outputs0_loss: 0.0722 - n_outputs1_loss: 0.0261 - val_loss: 0.1544 - val_n_outputs0_loss: 0.1243 - val_n_outputs1_loss: 0.0301\nEpoch 40/100\n10/10 [==============================] - ETA: 0s - loss: 0.0967 - n_outputs0_loss: 0.0703 - n_outputs1_loss: 0.0265\nEpoch 00040: val_loss did not improve from 
0.15441\n10/10 [==============================] - 1s 103ms/step - loss: 0.0967 - n_outputs0_loss: 0.0703 - n_outputs1_loss: 0.0265 - val_loss: 0.1588 - val_n_outputs0_loss: 0.1275 - val_n_outputs1_loss: 0.0313\nEpoch 41/100\n10/10 [==============================] - ETA: 0s - loss: 0.0989 - n_outputs0_loss: 0.0736 - n_outputs1_loss: 0.0253\nEpoch 00041: val_loss did not improve from 0.15441\n10/10 [==============================] - 1s 104ms/step - loss: 0.0989 - n_outputs0_loss: 0.0736 - n_outputs1_loss: 0.0253 - val_loss: 0.1580 - val_n_outputs0_loss: 0.1271 - val_n_outputs1_loss: 0.0308\nEpoch 42/100\n10/10 [==============================] - ETA: 0s - loss: 0.1010 - n_outputs0_loss: 0.0758 - n_outputs1_loss: 0.0253\nEpoch 00042: val_loss did not improve from 0.15441\n10/10 [==============================] - 1s 107ms/step - loss: 0.1010 - n_outputs0_loss: 0.0758 - n_outputs1_loss: 0.0253 - val_loss: 0.1614 - val_n_outputs0_loss: 0.1315 - val_n_outputs1_loss: 0.0299\nEpoch 43/100\n10/10 [==============================] - ETA: 0s - loss: 0.0923 - n_outputs0_loss: 0.0680 - n_outputs1_loss: 0.0243\nEpoch 00043: val_loss did not improve from 0.15441\n10/10 [==============================] - 1s 101ms/step - loss: 0.0923 - n_outputs0_loss: 0.0680 - n_outputs1_loss: 0.0243 - val_loss: 0.1587 - val_n_outputs0_loss: 0.1298 - val_n_outputs1_loss: 0.0288\nEpoch 44/100\n10/10 [==============================] - ETA: 0s - loss: 0.0870 - n_outputs0_loss: 0.0629 - n_outputs1_loss: 0.0242\nEpoch 00044: val_loss did not improve from 0.15441\n10/10 [==============================] - 1s 105ms/step - loss: 0.0870 - n_outputs0_loss: 0.0629 - n_outputs1_loss: 0.0242 - val_loss: 0.1601 - val_n_outputs0_loss: 0.1304 - val_n_outputs1_loss: 0.0296\nWARNING: CPU random generator seem to be failing, disable hardware random number generation\nWARNING: RDRND generated: 0xffffffff 0xffffffff 0xffffffff 0xffffffff\n\nreal 1m10.563s\nuser 1m11.485s\nsys 0m39.110s\n
"},{"location":"training-logs/msp-car-2/","title":"Training run for Minneapolis STEM Partners","text":"Car #2 had 15045 images
wc -l ~/mycar/data/msp-car-2/*.catalog 15045
"},{"location":"training-logs/msp-car-2/#ls-1-mycardatamsp-car-2images-wc-l","title":"ls -1 ~/mycar/data/msp-car-2/images | wc -l","text":"15045
(donkey) arl@arl1:~/mycar$ donkey train --tub=./data/msp-car-2 --model=./models/msp-car-2.h5\n________ ______ _________ \n___ __ \\_______________ /___________ __ __ ____/_____ ________\n__ / / / __ \\_ __ \\_ //_/ _ \\_ / / / _ / _ __ `/_ ___/\n_ /_/ // /_/ / / / / ,< / __/ /_/ / / /___ / /_/ /_ / \n/_____/ \\____//_/ /_//_/|_| \\___/_\\__, / \\____/ \\__,_/ /_/ \n /____/ \n\nusing donkey v4.2.1 ...\nloading config file: ./config.py\nloading personal config over-rides from myconfig.py\n\"get_model_by_type\" model Type is: linear\nCreated KerasLinear\n2021-07-26 20:22:54.320076: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1\n2021-07-26 20:22:54.338339: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 20:22:54.338783: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: \npciBusID: 0000:09:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5\ncoreClock: 1.635GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s\n2021-07-26 20:22:54.338925: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n2021-07-26 20:22:54.339823: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n2021-07-26 20:22:54.340834: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n2021-07-26 20:22:54.340981: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n2021-07-26 20:22:54.341775: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n2021-07-26 20:22:54.342170: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\n2021-07-26 20:22:54.343898: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n2021-07-26 20:22:54.344043: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 20:22:54.344546: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 20:22:54.344933: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0\n2021-07-26 20:22:54.345163: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA\n2021-07-26 20:22:54.349277: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 3592950000 Hz\n2021-07-26 20:22:54.349572: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5575e341a9f0 initialized for platform Host (this does not guarantee that XLA will be used). 
Devices:\n2021-07-26 20:22:54.349585: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version\n2021-07-26 20:22:54.349717: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 20:22:54.350124: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: \npciBusID: 0000:09:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5\ncoreClock: 1.635GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s\n2021-07-26 20:22:54.350160: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n2021-07-26 20:22:54.350171: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n2021-07-26 20:22:54.350180: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n2021-07-26 20:22:54.350191: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n2021-07-26 20:22:54.350200: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n2021-07-26 20:22:54.350210: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\n2021-07-26 20:22:54.350220: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n2021-07-26 20:22:54.350282: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 20:22:54.350723: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 20:22:54.351106: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0\n2021-07-26 20:22:54.351127: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n2021-07-26 20:22:54.423106: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:\n2021-07-26 20:22:54.423133: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108] 0 \n2021-07-26 20:22:54.423138: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0: N \n2021-07-26 20:22:54.423354: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 20:22:54.423819: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 20:22:54.424248: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 20:22:54.424632: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9890 MB memory) -> 
physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:09:00.0, compute capability: 7.5)\n2021-07-26 20:22:54.425999: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5575e52b18b0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:\n2021-07-26 20:22:54.426009: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): NVIDIA GeForce RTX 2080 Ti, Compute Capability 7.5\nModel: \"model\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\nimg_in (InputLayer) [(None, 224, 224, 3) 0 \n__________________________________________________________________________________________________\nconv2d_1 (Conv2D) (None, 110, 110, 24) 1824 img_in[0][0] \n__________________________________________________________________________________________________\ndropout (Dropout) (None, 110, 110, 24) 0 conv2d_1[0][0] \n__________________________________________________________________________________________________\nconv2d_2 (Conv2D) (None, 53, 53, 32) 19232 dropout[0][0] \n__________________________________________________________________________________________________\ndropout_1 (Dropout) (None, 53, 53, 32) 0 conv2d_2[0][0] \n__________________________________________________________________________________________________\nconv2d_3 (Conv2D) (None, 25, 25, 64) 51264 dropout_1[0][0] \n__________________________________________________________________________________________________\ndropout_2 (Dropout) (None, 25, 25, 64) 0 conv2d_3[0][0] \n__________________________________________________________________________________________________\nconv2d_4 (Conv2D) (None, 23, 23, 64) 36928 dropout_2[0][0] \n__________________________________________________________________________________________________\ndropout_3 (Dropout) (None, 23, 23, 64) 0 conv2d_4[0][0] \n__________________________________________________________________________________________________\nconv2d_5 (Conv2D) (None, 21, 21, 64) 36928 dropout_3[0][0] \n__________________________________________________________________________________________________\ndropout_4 (Dropout) (None, 21, 21, 64) 0 conv2d_5[0][0] \n__________________________________________________________________________________________________\nflattened (Flatten) (None, 28224) 0 dropout_4[0][0] \n__________________________________________________________________________________________________\ndense_1 (Dense) (None, 100) 2822500 flattened[0][0] \n__________________________________________________________________________________________________\ndropout_5 (Dropout) (None, 100) 0 dense_1[0][0] \n__________________________________________________________________________________________________\ndense_2 (Dense) (None, 50) 5050 dropout_5[0][0] \n__________________________________________________________________________________________________\ndropout_6 (Dropout) (None, 50) 0 dense_2[0][0] \n__________________________________________________________________________________________________\nn_outputs0 (Dense) (None, 1) 51 dropout_6[0][0] \n__________________________________________________________________________________________________\nn_outputs1 (Dense) (None, 1) 51 dropout_6[0][0] \n==================================================================================================\nTotal params: 
2,973,828\nTrainable params: 2,973,828\nNon-trainable params: 0\n__________________________________________________________________________________________________\nNone\nUsing catalog /home/arl/mycar/data/msp-car-2/catalog_22.catalog\n\nRecords # Training 11696\nRecords # Validation 2924\nEpoch 1/100\n2021-07-26 20:22:55.471623: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n2021-07-26 20:22:55.802565: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n2021-07-26 20:22:56.515413: W tensorflow/stream_executor/gpu/asm_compiler.cc:116] *** WARNING *** You are using ptxas 9.1.108, which is older than 9.2.88. ptxas 9.x before 9.2.88 is known to miscompile XLA code, leading to incorrect results or invalid-address errors.\n\nYou do not need to update to CUDA 9.2.88; cherry-picking the ptxas binary is sufficient.\n2021-07-26 20:22:56.559204: W tensorflow/stream_executor/gpu/redzone_allocator.cc:314] Internal: ptxas exited with non-zero error code 65280, output: ptxas fatal : Value 'sm_75' is not defined for option 'gpu-name'\n\nRelying on driver to perform ptx compilation. \nModify $PATH to customize ptxas location.\nThis message will be only logged once.\n92/92 [==============================] - ETA: 0s - loss: 0.6304 - n_outputs0_loss: 0.3238 - n_outputs1_loss: 0.3066 \nEpoch 00001: val_loss improved from inf to 0.59133, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 29s 310ms/step - loss: 0.6304 - n_outputs0_loss: 0.3238 - n_outputs1_loss: 0.3066 - val_loss: 0.5913 - val_n_outputs0_loss: 0.3121 - val_n_outputs1_loss: 0.2793\nEpoch 2/100\n92/92 [==============================] - ETA: 0s - loss: 0.5150 - n_outputs0_loss: 0.2730 - n_outputs1_loss: 0.2419\nEpoch 00002: val_loss improved from 0.59133 to 0.39368, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 11s 117ms/step - loss: 0.5150 - n_outputs0_loss: 0.2730 - n_outputs1_loss: 0.2419 - val_loss: 0.3937 - val_n_outputs0_loss: 0.2108 - val_n_outputs1_loss: 0.1828\nEpoch 3/100\n92/92 [==============================] - ETA: 0s - loss: 0.3885 - n_outputs0_loss: 0.2088 - n_outputs1_loss: 0.1797\nEpoch 00003: val_loss improved from 0.39368 to 0.34087, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 11s 117ms/step - loss: 0.3885 - n_outputs0_loss: 0.2088 - n_outputs1_loss: 0.1797 - val_loss: 0.3409 - val_n_outputs0_loss: 0.1923 - val_n_outputs1_loss: 0.1486\nEpoch 4/100\n92/92 [==============================] - ETA: 0s - loss: 0.3449 - n_outputs0_loss: 0.1870 - n_outputs1_loss: 0.1579\nEpoch 00004: val_loss improved from 0.34087 to 0.30588, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 11s 119ms/step - loss: 0.3449 - n_outputs0_loss: 0.1870 - n_outputs1_loss: 0.1579 - val_loss: 0.3059 - val_n_outputs0_loss: 0.1771 - val_n_outputs1_loss: 0.1288\nEpoch 5/100\n92/92 [==============================] - ETA: 0s - loss: 0.3161 - n_outputs0_loss: 0.1763 - n_outputs1_loss: 0.1397\nEpoch 00005: val_loss improved from 0.30588 to 0.28650, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 113ms/step - loss: 0.3161 - n_outputs0_loss: 0.1763 - n_outputs1_loss: 0.1397 - val_loss: 0.2865 - val_n_outputs0_loss: 0.1722 - val_n_outputs1_loss: 0.1143\nEpoch 6/100\n92/92 [==============================] - ETA: 0s - loss: 0.2876 - n_outputs0_loss: 0.1633 - n_outputs1_loss: 
0.1243\nEpoch 00006: val_loss improved from 0.28650 to 0.26754, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 110ms/step - loss: 0.2876 - n_outputs0_loss: 0.1633 - n_outputs1_loss: 0.1243 - val_loss: 0.2675 - val_n_outputs0_loss: 0.1623 - val_n_outputs1_loss: 0.1053\nEpoch 7/100\n92/92 [==============================] - ETA: 0s - loss: 0.2612 - n_outputs0_loss: 0.1508 - n_outputs1_loss: 0.1103\nEpoch 00007: val_loss improved from 0.26754 to 0.25034, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 111ms/step - loss: 0.2612 - n_outputs0_loss: 0.1508 - n_outputs1_loss: 0.1103 - val_loss: 0.2503 - val_n_outputs0_loss: 0.1551 - val_n_outputs1_loss: 0.0952\nEpoch 8/100\n92/92 [==============================] - ETA: 0s - loss: 0.2476 - n_outputs0_loss: 0.1435 - n_outputs1_loss: 0.1041\nEpoch 00008: val_loss improved from 0.25034 to 0.24291, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 109ms/step - loss: 0.2476 - n_outputs0_loss: 0.1435 - n_outputs1_loss: 0.1041 - val_loss: 0.2429 - val_n_outputs0_loss: 0.1524 - val_n_outputs1_loss: 0.0905\nEpoch 9/100\n92/92 [==============================] - ETA: 0s - loss: 0.2283 - n_outputs0_loss: 0.1323 - n_outputs1_loss: 0.0960\nEpoch 00009: val_loss improved from 0.24291 to 0.22718, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 110ms/step - loss: 0.2283 - n_outputs0_loss: 0.1323 - n_outputs1_loss: 0.0960 - val_loss: 0.2272 - val_n_outputs0_loss: 0.1450 - val_n_outputs1_loss: 0.0821\nEpoch 10/100\n92/92 [==============================] - ETA: 0s - loss: 0.2183 - n_outputs0_loss: 0.1267 - n_outputs1_loss: 0.0916\nEpoch 00010: val_loss did not improve from 0.22718\n92/92 [==============================] - 10s 109ms/step - loss: 0.2183 - n_outputs0_loss: 0.1267 - n_outputs1_loss: 0.0916 - val_loss: 0.2305 - val_n_outputs0_loss: 0.1471 - val_n_outputs1_loss: 0.0834\nEpoch 11/100\n92/92 [==============================] - ETA: 0s - loss: 0.2022 - n_outputs0_loss: 0.1187 - n_outputs1_loss: 0.0835\nEpoch 00011: val_loss improved from 0.22718 to 0.21581, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 110ms/step - loss: 0.2022 - n_outputs0_loss: 0.1187 - n_outputs1_loss: 0.0835 - val_loss: 0.2158 - val_n_outputs0_loss: 0.1375 - val_n_outputs1_loss: 0.0783\nEpoch 12/100\n92/92 [==============================] - ETA: 0s - loss: 0.1921 - n_outputs0_loss: 0.1085 - n_outputs1_loss: 0.0836\nEpoch 00012: val_loss did not improve from 0.21581\n92/92 [==============================] - 10s 110ms/step - loss: 0.1921 - n_outputs0_loss: 0.1085 - n_outputs1_loss: 0.0836 - val_loss: 0.2185 - val_n_outputs0_loss: 0.1382 - val_n_outputs1_loss: 0.0802\nEpoch 13/100\n92/92 [==============================] - ETA: 0s - loss: 0.1826 - n_outputs0_loss: 0.1056 - n_outputs1_loss: 0.0770\nEpoch 00013: val_loss did not improve from 0.21581\n92/92 [==============================] - 10s 110ms/step - loss: 0.1826 - n_outputs0_loss: 0.1056 - n_outputs1_loss: 0.0770 - val_loss: 0.2198 - val_n_outputs0_loss: 0.1394 - val_n_outputs1_loss: 0.0804\nEpoch 14/100\n92/92 [==============================] - ETA: 0s - loss: 0.1771 - n_outputs0_loss: 0.1009 - n_outputs1_loss: 0.0762\nEpoch 00014: val_loss did not improve from 0.21581\n92/92 [==============================] - 10s 111ms/step - loss: 0.1771 - n_outputs0_loss: 0.1009 - n_outputs1_loss: 0.0762 - val_loss: 0.2167 - val_n_outputs0_loss: 0.1389 - val_n_outputs1_loss: 
0.0778\nEpoch 15/100\n92/92 [==============================] - ETA: 0s - loss: 0.1676 - n_outputs0_loss: 0.0959 - n_outputs1_loss: 0.0718\nEpoch 00015: val_loss improved from 0.21581 to 0.20899, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 112ms/step - loss: 0.1676 - n_outputs0_loss: 0.0959 - n_outputs1_loss: 0.0718 - val_loss: 0.2090 - val_n_outputs0_loss: 0.1345 - val_n_outputs1_loss: 0.0745\nEpoch 16/100\n92/92 [==============================] - ETA: 0s - loss: 0.1608 - n_outputs0_loss: 0.0910 - n_outputs1_loss: 0.0698\nEpoch 00016: val_loss did not improve from 0.20899\n92/92 [==============================] - 10s 110ms/step - loss: 0.1608 - n_outputs0_loss: 0.0910 - n_outputs1_loss: 0.0698 - val_loss: 0.2097 - val_n_outputs0_loss: 0.1348 - val_n_outputs1_loss: 0.0748\nEpoch 17/100\n92/92 [==============================] - ETA: 0s - loss: 0.1534 - n_outputs0_loss: 0.0870 - n_outputs1_loss: 0.0664\nEpoch 00017: val_loss improved from 0.20899 to 0.20324, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 112ms/step - loss: 0.1534 - n_outputs0_loss: 0.0870 - n_outputs1_loss: 0.0664 - val_loss: 0.2032 - val_n_outputs0_loss: 0.1329 - val_n_outputs1_loss: 0.0703\nEpoch 18/100\n92/92 [==============================] - ETA: 0s - loss: 0.1490 - n_outputs0_loss: 0.0846 - n_outputs1_loss: 0.0644\nEpoch 00018: val_loss improved from 0.20324 to 0.19965, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 111ms/step - loss: 0.1490 - n_outputs0_loss: 0.0846 - n_outputs1_loss: 0.0644 - val_loss: 0.1997 - val_n_outputs0_loss: 0.1309 - val_n_outputs1_loss: 0.0688\nEpoch 19/100\n92/92 [==============================] - ETA: 0s - loss: 0.1452 - n_outputs0_loss: 0.0828 - n_outputs1_loss: 0.0624\nEpoch 00019: val_loss improved from 0.19965 to 0.19877, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 111ms/step - loss: 0.1452 - n_outputs0_loss: 0.0828 - n_outputs1_loss: 0.0624 - val_loss: 0.1988 - val_n_outputs0_loss: 0.1294 - val_n_outputs1_loss: 0.0694\nEpoch 20/100\n92/92 [==============================] - ETA: 0s - loss: 0.1353 - n_outputs0_loss: 0.0747 - n_outputs1_loss: 0.0606\nEpoch 00020: val_loss did not improve from 0.19877\n92/92 [==============================] - 10s 110ms/step - loss: 0.1353 - n_outputs0_loss: 0.0747 - n_outputs1_loss: 0.0606 - val_loss: 0.2004 - val_n_outputs0_loss: 0.1312 - val_n_outputs1_loss: 0.0692\nEpoch 21/100\n92/92 [==============================] - ETA: 0s - loss: 0.1319 - n_outputs0_loss: 0.0731 - n_outputs1_loss: 0.0588\nEpoch 00021: val_loss improved from 0.19877 to 0.19564, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 111ms/step - loss: 0.1319 - n_outputs0_loss: 0.0731 - n_outputs1_loss: 0.0588 - val_loss: 0.1956 - val_n_outputs0_loss: 0.1252 - val_n_outputs1_loss: 0.0704\nEpoch 22/100\n92/92 [==============================] - ETA: 0s - loss: 0.1299 - n_outputs0_loss: 0.0713 - n_outputs1_loss: 0.0585\nEpoch 00022: val_loss improved from 0.19564 to 0.19422, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 111ms/step - loss: 0.1299 - n_outputs0_loss: 0.0713 - n_outputs1_loss: 0.0585 - val_loss: 0.1942 - val_n_outputs0_loss: 0.1259 - val_n_outputs1_loss: 0.0683\nEpoch 23/100\n92/92 [==============================] - ETA: 0s - loss: 0.1231 - n_outputs0_loss: 0.0684 - n_outputs1_loss: 0.0548\nEpoch 00023: val_loss improved from 0.19422 to 0.19270, saving model 
to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 111ms/step - loss: 0.1231 - n_outputs0_loss: 0.0684 - n_outputs1_loss: 0.0548 - val_loss: 0.1927 - val_n_outputs0_loss: 0.1245 - val_n_outputs1_loss: 0.0682\nEpoch 24/100\n92/92 [==============================] - ETA: 0s - loss: 0.1239 - n_outputs0_loss: 0.0673 - n_outputs1_loss: 0.0566\nEpoch 00024: val_loss did not improve from 0.19270\n92/92 [==============================] - 10s 110ms/step - loss: 0.1239 - n_outputs0_loss: 0.0673 - n_outputs1_loss: 0.0566 - val_loss: 0.1969 - val_n_outputs0_loss: 0.1283 - val_n_outputs1_loss: 0.0686\nEpoch 25/100\n92/92 [==============================] - ETA: 0s - loss: 0.1200 - n_outputs0_loss: 0.0650 - n_outputs1_loss: 0.0550\nEpoch 00025: val_loss did not improve from 0.19270\n92/92 [==============================] - 10s 111ms/step - loss: 0.1200 - n_outputs0_loss: 0.0650 - n_outputs1_loss: 0.0550 - val_loss: 0.1990 - val_n_outputs0_loss: 0.1284 - val_n_outputs1_loss: 0.0706\nEpoch 26/100\n92/92 [==============================] - ETA: 0s - loss: 0.1171 - n_outputs0_loss: 0.0636 - n_outputs1_loss: 0.0535\nEpoch 00026: val_loss did not improve from 0.19270\n92/92 [==============================] - 10s 111ms/step - loss: 0.1171 - n_outputs0_loss: 0.0636 - n_outputs1_loss: 0.0535 - val_loss: 0.1929 - val_n_outputs0_loss: 0.1250 - val_n_outputs1_loss: 0.0678\nEpoch 27/100\n92/92 [==============================] - ETA: 0s - loss: 0.1167 - n_outputs0_loss: 0.0638 - n_outputs1_loss: 0.0529\nEpoch 00027: val_loss did not improve from 0.19270\n92/92 [==============================] - 10s 112ms/step - loss: 0.1167 - n_outputs0_loss: 0.0638 - n_outputs1_loss: 0.0529 - val_loss: 0.1937 - val_n_outputs0_loss: 0.1269 - val_n_outputs1_loss: 0.0668\nEpoch 28/100\n92/92 [==============================] - ETA: 0s - loss: 0.1123 - n_outputs0_loss: 0.0610 - n_outputs1_loss: 0.0513\nEpoch 00028: val_loss improved from 0.19270 to 0.19161, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 112ms/step - loss: 0.1123 - n_outputs0_loss: 0.0610 - n_outputs1_loss: 0.0513 - val_loss: 0.1916 - val_n_outputs0_loss: 0.1230 - val_n_outputs1_loss: 0.0686\nEpoch 29/100\n92/92 [==============================] - ETA: 0s - loss: 0.1086 - n_outputs0_loss: 0.0584 - n_outputs1_loss: 0.0501\nEpoch 00029: val_loss improved from 0.19161 to 0.18655, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 110ms/step - loss: 0.1086 - n_outputs0_loss: 0.0584 - n_outputs1_loss: 0.0501 - val_loss: 0.1865 - val_n_outputs0_loss: 0.1216 - val_n_outputs1_loss: 0.0650\nEpoch 30/100\n92/92 [==============================] - ETA: 0s - loss: 0.1093 - n_outputs0_loss: 0.0593 - n_outputs1_loss: 0.0500\nEpoch 00030: val_loss did not improve from 0.18655\n92/92 [==============================] - 10s 109ms/step - loss: 0.1093 - n_outputs0_loss: 0.0593 - n_outputs1_loss: 0.0500 - val_loss: 0.1936 - val_n_outputs0_loss: 0.1240 - val_n_outputs1_loss: 0.0696\nEpoch 31/100\n92/92 [==============================] - ETA: 0s - loss: 0.1077 - n_outputs0_loss: 0.0578 - n_outputs1_loss: 0.0499\nEpoch 00031: val_loss did not improve from 0.18655\n92/92 [==============================] - 10s 110ms/step - loss: 0.1077 - n_outputs0_loss: 0.0578 - n_outputs1_loss: 0.0499 - val_loss: 0.1889 - val_n_outputs0_loss: 0.1222 - val_n_outputs1_loss: 0.0667\nEpoch 32/100\n92/92 [==============================] - ETA: 0s - loss: 0.1026 - n_outputs0_loss: 0.0551 - n_outputs1_loss: 0.0475\nEpoch 00032: 
val_loss improved from 0.18655 to 0.18343, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 111ms/step - loss: 0.1026 - n_outputs0_loss: 0.0551 - n_outputs1_loss: 0.0475 - val_loss: 0.1834 - val_n_outputs0_loss: 0.1206 - val_n_outputs1_loss: 0.0629\nEpoch 33/100\n92/92 [==============================] - ETA: 0s - loss: 0.1022 - n_outputs0_loss: 0.0545 - n_outputs1_loss: 0.0477\nEpoch 00033: val_loss did not improve from 0.18343\n92/92 [==============================] - 10s 110ms/step - loss: 0.1022 - n_outputs0_loss: 0.0545 - n_outputs1_loss: 0.0477 - val_loss: 0.1843 - val_n_outputs0_loss: 0.1191 - val_n_outputs1_loss: 0.0652\nEpoch 34/100\n92/92 [==============================] - ETA: 0s - loss: 0.0995 - n_outputs0_loss: 0.0529 - n_outputs1_loss: 0.0466\nEpoch 00034: val_loss improved from 0.18343 to 0.18117, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 110ms/step - loss: 0.0995 - n_outputs0_loss: 0.0529 - n_outputs1_loss: 0.0466 - val_loss: 0.1812 - val_n_outputs0_loss: 0.1166 - val_n_outputs1_loss: 0.0646\nEpoch 35/100\n92/92 [==============================] - ETA: 0s - loss: 0.0989 - n_outputs0_loss: 0.0526 - n_outputs1_loss: 0.0463\nEpoch 00035: val_loss did not improve from 0.18117\n92/92 [==============================] - 10s 110ms/step - loss: 0.0989 - n_outputs0_loss: 0.0526 - n_outputs1_loss: 0.0463 - val_loss: 0.1835 - val_n_outputs0_loss: 0.1177 - val_n_outputs1_loss: 0.0657\nEpoch 36/100\n92/92 [==============================] - ETA: 0s - loss: 0.0972 - n_outputs0_loss: 0.0514 - n_outputs1_loss: 0.0458\nEpoch 00036: val_loss did not improve from 0.18117\n92/92 [==============================] - 10s 111ms/step - loss: 0.0972 - n_outputs0_loss: 0.0514 - n_outputs1_loss: 0.0458 - val_loss: 0.1838 - val_n_outputs0_loss: 0.1198 - val_n_outputs1_loss: 0.0641\nEpoch 37/100\n92/92 [==============================] - ETA: 0s - loss: 0.0959 - n_outputs0_loss: 0.0509 - n_outputs1_loss: 0.0450\nEpoch 00037: val_loss did not improve from 0.18117\n92/92 [==============================] - 10s 109ms/step - loss: 0.0959 - n_outputs0_loss: 0.0509 - n_outputs1_loss: 0.0450 - val_loss: 0.1830 - val_n_outputs0_loss: 0.1191 - val_n_outputs1_loss: 0.0639\nEpoch 38/100\n92/92 [==============================] - ETA: 0s - loss: 0.0934 - n_outputs0_loss: 0.0496 - n_outputs1_loss: 0.0438\nEpoch 00038: val_loss did not improve from 0.18117\n92/92 [==============================] - 10s 110ms/step - loss: 0.0934 - n_outputs0_loss: 0.0496 - n_outputs1_loss: 0.0438 - val_loss: 0.1845 - val_n_outputs0_loss: 0.1185 - val_n_outputs1_loss: 0.0660\nEpoch 39/100\n92/92 [==============================] - ETA: 0s - loss: 0.0923 - n_outputs0_loss: 0.0477 - n_outputs1_loss: 0.0446\nEpoch 00039: val_loss did not improve from 0.18117\n92/92 [==============================] - 10s 110ms/step - loss: 0.0923 - n_outputs0_loss: 0.0477 - n_outputs1_loss: 0.0446 - val_loss: 0.1818 - val_n_outputs0_loss: 0.1186 - val_n_outputs1_loss: 0.0632\nWARNING: CPU random generator seem to be failing, disable hardware random number generation\nWARNING: RDRND generated: 0xffffffff 0xffffffff 0xffffffff 0xffffffff\n(donkey) arl@arl1:~/mycar$\n
"},{"location":"training-logs/msp-car-2/#checking-the-models","title":"Checking the models","text":"ls -l models/*\n
returns
ls -l models/*\n-rw-r--r-- 1 arl arl 32317 Jul 26 20:30 models/database.json\n-rw-r--r-- 1 arl arl 35773936 Jul 26 20:17 models/msp-car-1-gpu.h5\n-rw-r--r-- 1 arl arl 27506 Jul 26 20:17 models/msp-car-1-gpu.png\n-rw-r--r-- 1 arl arl 23659 Jul 26 19:57 models/msp-car-1.png\n-rw-r--r-- 1 arl arl 35773936 Jul 26 20:29 models/msp-car-2.h5\n-rw-r--r-- 1 arl arl 25670 Jul 26 20:30 models/msp-car-2.png\n-rw-r--r-- 1 arl arl 22616 Feb 2 2020 models/mypilot.h5_loss_acc_0.040245.png\n-rw-r--r-- 1 arl arl 26687 Feb 2 2020 models/mypilot.h5_loss_acc_0.042222.png\n-rw-r--r-- 1 arl arl 11939744 Feb 2 2020 models/ref-model.h5\n
"},{"location":"umn-curriculum/attribution/","title":"Attribution","text":"University of Minnesota - Twin Cities
Department of Electrical and Computer Engineering
Senior Design - Spring 2024
Authors:
Before we start, let's clarify what we're doing. We've created a program (or \"model\") that can control a car to drive by itself, kind of like a video game character that knows how to navigate obstacles without you touching the controls. Now, we want to take this program from our main computer (let's call it the \"Host PC\") and put it onto the smaller computer inside the car (we'll use \"Raspberry Pi\" for this). The Raspberry Pi is like the car's brain, and we're going to teach it how to drive on autopilot.
"},{"location":"umn-curriculum/autopilot/#step-1-connect-both-computers-to-the-same-network","title":"Step 1: Connect Both Computers to the Same Network","text":"Why? - For the Host PC to talk to the Raspberry Pi, they need to be on the same Wi-Fi network, like how two phones need to be on the same network to share photos.
How? - Follow the instructions in the \"Networking\" section to connect both devices to the same Wi-Fi. This is similar to how you connect your phone or laptop to your home Wi-Fi.
How? - Open the \"Terminal\" on the Host PC. The Terminal is a place where you can type commands for your computer to follow.
What to Type: - Enter ping <hostname>.local
but replace <hostname>
with your Raspberry Pi's hostname. This is like asking, \"Hey, are you there?\" to the Raspberry Pi.
Expected Outcome: - If everything is set up right, your computer will reply back with messages that show it's connected to the Raspberry Pi.
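For instance, if your Raspberry Pi is named donkeypi (the example hostname used later in this curriculum), the check would look like the line below; the -c 4 option (on Linux and macOS) stops after four pings, and Windows stops after four by default.
ping -c 4 donkeypi.local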
What We're Doing: - We're going to send the driving program (the model) from the Host PC to the Raspberry Pi.
How? - We use a command that packages up the model and sends it over the Wi-Fi to the Raspberry Pi.
The Command: - rsync -rv --progress --partial <file path to model on Host PC> <username>@<hostname>.local:<(path to models/ folder on Pi)>
Breakdown:
rsync is the tool we're using to send the file.
-rv --progress --partial
are options that tell rsync to copy folders recursively (-r), describe what it is doing (-v and --progress), and keep partially transferred files so an interrupted copy can pick up where it left off (--partial).
<file path to model on Host PC>
is where your model file is on your computer. You have to replace this with the actual path.
<username>@<hostname>.local
is the address of your Raspberry Pi on the network. Replace <username>
with your Raspberry Pi's username and <hostname>
with its hostname.
<(path to models/ folder on Pi)>
tells rsync where on the Raspberry Pi to put the model. You'll replace this part with the actual location.
rsync -rv --progress --partial <file path to model on Host PC> <username>@<hostname>.local:<(path to models/ folder on Pi)>\n
Important Note:
Example Command: Here's what the command might look like:
rsync -rv --progress --partial /home/arl/mycar/models/jon_model_1.tflite arl@donkeypi.local:~/mycar/models/
In this example, we're sending a model named jon_model_1.tflite
from the Host PC to the Raspberry Pi, which is named donkeypi.
By following these steps, you'll transfer your autopilot model from your Host PC to the Raspberry Pi, getting your model car ready to drive itself!
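If you want to double-check that the file really arrived, you can log into the Raspberry Pi and list its models folder. Using the same example names as above (username arl, hostname donkeypi), the commands would be:
ssh arl@donkeypi.local
ls -lh ~/mycar/models/
You should see your model file (jon_model_1.tflite in the example above) in the listing.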
"},{"location":"umn-curriculum/autopilot/#driving-autonomously","title":"Driving Autonomously","text":"Now that the models are on the Raspberry Pi, we are able to use them to autonomously drive the Donkey Car. To do this, we follow steps similar to the collection of the training data. However, we need to supply the path to the model and the type of model to the Python program. Hence we follow these steps:
"},{"location":"umn-curriculum/autopilot/#step-1-connecting-the-devices-to-the-same-wi-fi-network","title":"Step 1: Connecting the Devices to the Same Wi-Fi Network","text":"Why is this important? Just like how your smartphone needs to connect to the internet to send pictures to your friends, the computer (which we'll call the Host PC) needs to be on the same Wi-Fi network as the Raspberry Pi to communicate with it.
How to do it: Check the section titled \"Networking\" in your materials. It will guide you on how to connect both your Raspberry Pi and Host PC to the same Wi-Fi network, similar to how you connect any device to your home internet.
How to check: Open a program called \"Terminal\" on your Host PC. It's a tool where you can type commands to talk to your computer.
What to do: Type the command ping <hostname>.local
, but replace <hostname>
with your Raspberry Pi's unique name. This command is like saying, \"Hey Raspberry Pi, can you hear me?\".
What to expect: If everything is correct, you'll see messages that confirm your Host PC is talking to the Raspberry Pi.
What to do: On your Raspberry Pi, open its Terminal and enter this command: python manage.py drive --js --model <path to your model> --type <model type>
You'll need to replace <path to your model>
with the location of the AI model file you're using (usually found in the models/
folder). Replace <model type>
with the kind of AI model you have, such as \"Linear\" or \"Categorical\".
python manage.py drive --js --model <path to your model> --type <model type>\n
For example: python manage.py drive --model ~/mycar/models/mypilot.tflite --type tflite_linear
How to access: On your Host PC, open a web browser and go to http://<hostname>.local:8887
\u2014 make sure to substitute <hostname>
with your Raspberry Pi's name.
Choosing the driving mode: You'll see options like \"User\" for manual control, \"Auto Steer\" for the AI to only steer, and \"Full Auto\" for the AI to drive completely on its own. Select one and click \u201cStart\u201d to begin. If you're using a joystick, you can also start by pressing the start button.
Safety first: Before starting, ensure the Donkey Car is on its track with no obstacles around to prevent any accidents.
By following these steps, you'll be able to watch your Donkey Car navigate its surroundings all by itself, thanks to the AI model you've installed on the Raspberry Pi. It's a great way to see AI and machine learning in action!
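As a quick recap, here is what a full session might look like with the example names used above (username arl, hostname donkeypi, and the model ~/mycar/models/mypilot.tflite); treat every name as a placeholder for your own setup:
ssh arl@donkeypi.local
cd ~/mycar
python manage.py drive --model ~/mycar/models/mypilot.tflite --type tflite_linear
Then open http://donkeypi.local:8887 in a browser on the Host PC and choose a driving mode.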
"},{"location":"umn-curriculum/donkey-car/","title":"Getting Familiar with Donkey Car","text":"Welcome to Donkey Car! The first section of this curriculum will familiarize you with each of the components of the pre-assembled car and how they interact, and prepare you for the minimal hardware setup that may be required.
"},{"location":"umn-curriculum/donkey-car/#donkey-car-parts","title":"Donkey Car Parts","text":"There are four main components to be concerned with.
"},{"location":"umn-curriculum/donkey-car/#the-raspberry-pi","title":"The Raspberry Pi","text":"The Raspberry Pi 4 Model B is mounted to the top of the car, and it acts as the brain of the car. It is responsible for communicating between itself, the controller, and your devices. It also sends driving signals to the PWM board, which we will explore next. It records data through the camera, and it\u2019s where you will eventually upload your trained model for the car to drive itself!
"},{"location":"umn-curriculum/donkey-car/#the-pulse-width-modulation-board-pwm","title":"The Pulse Width Modulation Board (PWM)","text":"The Adafruit PCA9685 is responsible for communication between the Raspberry Pi and the electronic speed controller (ESC) and steering servo. The Pi communicates with the Adafruit board using a protocol called I2C. From there, the Adafruit board sends bursts of square waves with varying duty cycles (called a PWM signal) to the ESC and servo.
"},{"location":"umn-curriculum/donkey-car/#the-motor-and-servo","title":"The Motor and Servo","text":"
The motor and servo are fairly simple components which shouldn't need much setup (remember to flip the \"ON\" switch on the ESC!). The motor spins the wheels at a rate according to the PWM signal it received. The steering servo does the same, and rotates an arm to turn the wheels.
"},{"location":"umn-curriculum/donkey-car/#the-power-system","title":"The Power System","text":"Two separate batteries are powering different items in the car. Since the Raspberry Pi operates at 5 volts, a small portable battery affixed to the top of the car powers it via a USB Type C cable.
The second battery powers the servo and motors, which operate at 7.2 volts. This battery is plugged into the car and placed to the side in a small section designed for it to sit.
IMPORTANT NOTE: Unlike the portable battery, it matters which way you plug this battery in. The red wire must connect to the red wire, and the black wire to the black wire. The wire clips generally prevent you from connecting it the wrong way, but it can still be forced in backwards. Reversing the polarity will cause irreversible damage to the battery and can also damage the motor and servo. This is especially important when plugging the battery in to charge.
"},{"location":"umn-curriculum/donkey-car/#hardware-setup","title":"Hardware Setup","text":"There should be little to no hardware setup, save from connecting a few wires if they are not already. The wires will connect the PWM and the Raspberry PI
"},{"location":"umn-curriculum/donkey-car/#wiring-diagram","title":"Wiring Diagram","text":"Sometimes the cars given to you will be correctly wired and set up for you. If, for some reason, it isn\u2019t, or if you\u2019d like to change it, here's how.
Typically, the motor is connected to PWM Pin 0, or the far left red-white-black pins. The servo is most often connected to PWM Pin 1, one column to the right of Pin 0. These pins can be specified in myconfig.py under:
PWM_THROTTLE_PIN
PWM_STEERING_PIN
By default, these are set to pins 0 and 1, respectively:
PWM_THROTTLE_PIN: PCA9685.1:40.0
PWM_STEERING_PIN: PCA9685.1:40.1
If they are moved to any other pin, simply read the number above the desired column of pins on the PWM board and adjust the value accordingly, i.e. if the motor is connected to the farthest-right column of pins, column #15, you would update myconfig.py
the following way:
\"PWM_THROTTLE_PIN\": \"PCA9685.1:40.15\"
You must also connect the GND, SDA, SCL, and V+ lines of the PWM board to the Raspberry Pi.
Figure 1: PCA 9685 16-Channel Servo Driver
Most cars are set up with the throttle wire on PWM output 0 and the steering wire on PWM output 1. If this is not the case, look at the PWM board to see which channel each wire is plugged into and adjust the pin number in the commands below accordingly:
donkey calibrate --pwm-pin=PCA9685.1:40.0
(throttle)
donkey calibrate --pwm-pin=PCA9685.1:40.1
(steering)
These pins are also the default in myconfig.py
under PWM_STEERING_THROTTLE
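For reference, the relevant block of myconfig.py looks roughly like the sketch below. This is a minimal excerpt with only the pin entries shown; your own myconfig.py will typically contain additional keys (scaling, inversion, calibration values) inside the same dictionary, so edit the values in place rather than replacing the whole block.
# myconfig.py (excerpt) -- only the pin entries are shown here.
# The last number in each pin string is the channel (column) on the
# PCA9685 board; the 1:40 part is, as we read it, the I2C bus and the
# board's default address (0x40).
PWM_STEERING_THROTTLE = {
    \"PWM_STEERING_PIN\": \"PCA9685.1:40.1\",   # steering servo on channel 1
    \"PWM_THROTTLE_PIN\": \"PCA9685.1:40.0\",   # ESC / motor on channel 0
}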
Set USE_JOYSTICK_AS_DEFAULT = True in myconfig.py and you won\u2019t need to pass --js to manage.py drive.
Imagine your donkey car is a small, smart robot car that you can control with a remote. To get it moving and learn how to switch between different driving modes, you'll start with a simple command on your computer. Here's how you do it:
Start the Car: Open your computer's command line tool, type python manage.py drive --js
, and press Enter. This wakes up your donkey car and gets it ready to follow your commands.
python manage.py drive --js\n
Understanding the Controls: Right after you run the command, you'll see instructions pop up on your screen. These tell you which buttons on your controller do what. It's like learning the controls of a new video game. Make sure to jot these down or take a picture with your phone - you'll need to refer back to them!
Switching Driving Modes: Your donkey car has a cool feature - it can drive in different ways! There's a special button (usually the start button) that lets you switch between:
Fully Remote-Controlled Mode: You control everything, just like driving an RC car.
Fully Autonomous Mode: The car drives all by itself, making its own decisions on turning and speed.
We'll focus on these two modes. If your controller doesn't seem to respond, hitting the start button is a good first troubleshooting step.
Now that your car can move around, it's time to teach it how to drive on its own. This is done by collecting data - basically, you drive the car around and it remembers how you did it. Here's how to gather good learning material for your car:
Drive Around: You'll need to drive your car around the track in both directions. Aim for about 10 laps each way. This gives your car a variety of examples to learn from.
It's Okay to Make Mistakes: Try to keep the car within the track lines, but don't worry about staying perfectly centered all the time. In fact, it's good for your car to see how to recover from being off-center. This helps it learn to correct itself and makes it smarter at handling different situations.
Remember, the goal isn't to collect flawless data but to give your car a rich learning experience, full of different scenarios and recoveries. This way, your car becomes more adaptable and can handle the track like a pro, even when things don't go exactly as planned.
"},{"location":"umn-curriculum/networking/","title":"Networking","text":"A Raspberry Pi is like a mini-computer that's both affordable and powerful, perfect for learning how to code and for creative projects, such as making a robot car move. To use a Raspberry Pi, it and your main computer need to be able to \"talk\" to each other, which means they have to be on the same digital network.
"},{"location":"umn-curriculum/networking/#how-to-connect-your-raspberry-pi-and-computer","title":"How to Connect Your Raspberry Pi and Computer","text":""},{"location":"umn-curriculum/networking/#option-1-joining-the-same-wi-fi-network","title":"Option 1: Joining the Same Wi-Fi Network","text":"The Basics: The simplest way to link your computer with the Raspberry Pi is by connecting both to the same Wi-Fi network. This can be the Wi-Fi at your home, school, or anywhere else you have access.
Troubleshooting Tips: If you're at school or work, you might not see your Raspberry Pi on the network due to extra security measures. If this happens (known as not being able to \"ping\" the Raspberry Pi), don't worry; just try the next solution.
Creating Your Network: If the first method doesn't work, you can use your smartphone or computer to create a Wi-Fi network through a feature known as a hotspot.
For Windows Users: Look up \"How to set up a mobile hotspot in Windows 10/11\" for detailed instructions.
For Linux Users:
Go to the system menu at the top right.
Click the Wi-Fi symbol, then \"All Networks\".
Click the menu in the top-right and choose "Turn On Wi-Fi Hotspot...".
If asked to disconnect from your current Wi-Fi, click \"Turn On\" to proceed.
Pro Tip: If your Raspberry Pi struggles to connect, change the hotspot to use the 2.4GHz band for better device compatibility.
For Setup Only: You can connect your computer to the Raspberry Pi with an Ethernet cable, perfect for initial setups. However, Wi-Fi is needed for projects like controlling a robot car.
Steps:
Connect one end of the Ethernet cable to your computer and the other to the Raspberry Pi.
Use a command line or terminal to type ping donkeypi.local
(substitute \"donkeypi.local\" with your Raspberry Pi's actual name).
If possible, enable Internet Connection Sharing (ICS) on Windows or Ubuntu. This will allow you to share your WiFi connection from your Host PC to your Raspberry Pi over ethernet.
Once connected to the same network, you can control your Raspberry Pi from your computer, using either the command line (SSH) or a graphical interface (VNC).
"},{"location":"umn-curriculum/networking/#ssh-secure-shell","title":"SSH (Secure Shell)","text":"What It Is: SSH lets you send commands to your Raspberry Pi from your computer's terminal.
How to Use It:
Make sure both devices are on the same network.
Open a terminal and type ssh <username>@<hostname>.local
(replace with your details, like ssh donkey@donkeypi.local
).
What It Is: VNC allows you to view and interact with your Raspberry Pi's desktop from your computer, making graphical tasks easier.
Setting It Up:
Download and install RealVNC viewer from their official website, selecting the version for your operating system.
When installing, you can skip the sign-in step.
Open VNC Viewer, create a new connection and type your Raspberry Pi's IP address or hostname. Use the password you set on your Raspberry Pi if prompted.
Remember: The hostname is a unique name you give your Raspberry Pi during setup. It makes finding and connecting to your Raspberry Pi easier on a network.
By following these simplified steps, you're now ready to embark on exciting projects with your Raspberry Pi, from programming to building and controlling your own robot car!
"},{"location":"umn-curriculum/networking/#changing-wifi-settings","title":"Changing WiFi Settings","text":"There are two main ways to adjust the network settings: through a graphical interface (like the desktop you're accustomed to) or the command line. We'll focus on the command line for its simplicity and ease of use remotely via SSH (Secure Shell), which lets you control your Pi from another computer.
"},{"location":"umn-curriculum/networking/#basic-networking-commands","title":"Basic Networking Commands","text":"Here's a straightforward guide to some basic networking commands. Remember, the command line takes what you type literally, so precision is key.
Check NetworkManager Status
The command systemctl status NetworkManager
is essentially asking, \"Is the NetworkManager service active?\" NetworkManager is crucial as it handles all your network connections.
systemctl status NetworkManager\n
Start NetworkManager
Using sudo systemctl start NetworkManager
starts NetworkManager if it's not already running. The sudo signifies you're requesting to perform an action that requires administrator rights.
sudo systemctl start NetworkManager\n
Restart NetworkManager
Sometimes, network connections can be finicky. Issuing sudo systemctl restart NetworkManager
can help by resetting your network connections, akin to toggling WiFi on your smartphone.
sudo systemctl restart NetworkManager\n
List Available WiFi Networks
The command nmcli dev wifi
prompts your device to list all WiFi networks within range, useful for spotting available connections.
nmcli dev wifi\n
Connect to a WiFi Network
With sudo nmcli dev wifi connect \"SSID\" password \"PASSWORD\"
, you can connect to a specific WiFi by replacing \"SSID\" with the network's name and \"PASSWORD\" with the network's password. Keep the quotation marks if your WiFi's name or password includes spaces or special characters.
sudo nmcli dev wifi connect \"SSID\" password \"PASSWORD\"\n
Check Your Connection Status
A command such as nmcli device status lists each network interface and its connection state, letting you verify everything is functioning as intended.
nmcli device status
A Raspberry Pi 4 or 5 device.
A high-speed Micro SD card with a capacity of 64GB or more.
Speed Class: Class U3 Recommended.
Sandisk Extreme/Extreme Pro/Extreme Plus
Samsung Evo Plus/Evo Select
Kingston Canvas Select Plus/Canvas Go! Plus
Capacity: A minimum capacity of 64GB is preferred. 128GB is ideal
A computer with an SD card reader or SD card to USB adapter.
Download the Raspberry Pi Imager: This software helps you set up the Raspberry Pi OS onto your SD card. You can find it by searching \"Raspberry Pi Imager\" online. Follow the link from the official Raspberry Pi website to download and install it.
Prepare Your SD Card: Insert your Micro SD card into your computer's SD card reader slot.
Launch the Raspberry Pi Imager: Open the application you just installed.
Edit Settings Before Installation:
Click \"Next,\" then \"Edit Settings.\"
Hostname: This is your Raspberry Pi's name on a network, helping you identify it. Create a unique name by adding your initials or a group name to \"donkeycar-\" (e.g., \"donkeycar-JD\"). Write this name down as you'll need it later.
Username and Password: Create a username and password you'll remember. Example: Username: donkeycar, Password: raspberry. Note these down.
Wireless LAN: Enter your WiFi's name (SSID) and password here.
Local Settings: Your time zone and keyboard layout are usually set automatically, but double-check to make sure they're correct.
SSH: Enable SSH and select \"Use password authentication.\" SSH allows you to connect to your Raspberry Pi remotely.
Insert the Micro SD Card into Your Raspberry Pi:
Look for the Micro SD card slot on the Raspberry Pi. It's usually on the underside of the board.
Gently insert the Micro SD card into the slot. Make sure the metal contacts on the card face downwards and towards the Raspberry Pi. You should feel a slight click when it's in place.
Powering On Your Raspberry Pi:
Grab a USB-C cable. This is the Raspberry Pi's power cord, similar to a charger for many modern smartphones.
Connect one end of the USB-C cable to your Raspberry Pi and the other end to a power source (like a USB plug adapter or a powered USB port on a computer).
Once connected, your Raspberry Pi will start to boot up. You won't need to press any buttons to turn it on; just giving it power is enough!
To interact with your Raspberry Pi, you have two main options: connecting through SSH (Secure Shell) for remote access or hooking it up directly with a monitor, keyboard, and mouse. Here's how to do both:
"},{"location":"umn-curriculum/raspberry-pi/#option-1-direct-connection","title":"Option 1: Direct Connection","text":"What is SSH?: SSH stands for Secure Shell. It's a secure method to remotely access another computer's command line interface. This is useful if you can't or don't want to connect a monitor and peripherals directly to the Pi.
Setting Up SSH:
Enable SSH on Your Raspberry Pi: This can usually be done from the Raspberry Pi's configuration settings. You might need to do this with a monitor and keyboard attached if you're setting up the Pi for the first time.
Find Your Pi's Address: You need to know your Raspberry Pi's IP address or hostname to connect to it. If you're on the same network, the hostname is often raspberrypi.local, unless you've changed it.
Connect from Your Computer:
Open the Terminal application on your computer. This is Command Prompt on Windows or Terminal on macOS and Linux.
Type the SSH command: ssh username@hostname.local
, replacing username with your Raspberry Pi's username (the default is pi) and hostname with your Pi's actual hostname or IP address. For example, if your username is pi and your hostname hasn't been changed, you'd type ssh pi@raspberrypi.local.
SSH Command
ssh username@hostname.local
Press Enter. The first time you connect, you'll be asked if you trust the connection. Type yes, then enter your Raspberry Pi's password when prompted.
In this step, you're going to make sure your Raspberry Pi has the latest software. This is a bit like ensuring your smartphone installs the latest updates to run smoothly and securely. First, you'll need to connect your Raspberry Pi to a monitor and keyboard (or access it remotely if you know how). Once that's done, you'll use something called the command line, which is a way to interact with your Raspberry Pi by typing commands. Here\u2019s how to get your Raspberry Pi updated:
Open the Terminal: This is where you'll type your commands. It's a window where you can tell your Raspberry Pi what to do without using the mouse.
Type the Update Commands: You will need to type two commands into the terminal. Here's what they are and what they do:
sudo apt update
: This command checks for updates. It's like asking, "Is there anything new that I should know about?" It doesn't install the updates; it just looks for them and prepares a list.
Update Version History
sudo apt update
sudo apt upgrade
: After checking for updates, this command installs them. It's like saying, "Okay, now that we know what's new, let's get it installed."
Update Packages
sudo apt upgrade
Use the command sudo raspi-config
to open the configuration utility.
sudo raspi-config\n
System Options: In the System Options menu, you'll find settings that affect the whole system. Here are a few key ones you might want to adjust:
Change Password: It's a good idea to change the default password to keep your Raspberry Pi secure.
Hostname: This is your Raspberry Pi's name on a network. You can change it to something unique to easily identify your device.
WiFi Network: Here, you can switch the WiFi network your Raspberry Pi connects to. This is useful if you're moving your device to a different location or if you need to connect it to a different network.
Interface Options:
Enable VNC: Turning this on allows you to control your Raspberry Pi from another computer as if you were sitting right in front of it. It's like having a remote desktop.
Enable I2C: This option turns on the ability for your Raspberry Pi to talk to certain other hardware pieces, like a PWM (Pulse Width Modulation) chip. PWM chips are used to control motors and servos, letting your projects move.
Advanced Options:
Reboot your Raspberry Pi as prompted by typing sudo reboot into the terminal. This makes sure all your settings take effect.
Reboot Raspberry Pi
sudo reboot
In this step, we'll create a special workspace on your computer where you can write and run Python code. This workspace is known as a Virtual Environment. Think of it as your personal lab where you can experiment with Python projects without affecting the rest of your computer's setup.
"},{"location":"umn-curriculum/raspberry-pi/#create-a-virtual-environment","title":"Create a Virtual Environment","text":"A Virtual Environment is like a sandbox for your project. It keeps all the tools and materials (in our case, code and libraries) you need separate from other projects. This way, if one project needs a specific tool version, it won't clash with another project's needs.Here\u2019s how you set it up:
Open your Terminal or Command Prompt: This is where you type commands to talk directly to your computer.
Create the Virtual Environment: Type the following command and press Enter:
Create the environment:
Create Virtual Environment
python3 -m venv donkey --system-site-packages
You can replace donkey with any name you like, but using donkey here makes it easier to follow along with later parts of this course that use a system called Conda on Linux computers.
The --system-site-packages
option allows this environment to access some Python packages already installed on your computer globally, saving space and time.
Activate Your Environment: To start using your newly created environment, you need to activate it. Type the commands below:
Activate Virtual Environment
echo "source ~/donkey/bin/activate" >> ~/.bashrc
source ~/.bashrc
This tells your Terminal to always use your donkey environment by default, making things simpler for future projects.
Before you can start building projects, you need some tools and materials, known in Python as libraries.
Install System Libraries: Some Python libraries depend on non-Python code. To prepare for this, install the required system library by typing:
Install Dependencies
sudo apt install libcap-dev\n
Note: This command might ask for your password. It's the same one you use to log into your computer.
Install Python Packages: Now, let's get the specific Python packages you'll need.
For most users, type:
Install Donkey Car Software
pip install donkeycar[pi]
For developers, there's a more detailed setup. Please see Appendix 2 in our course materials.
Let's make sure everything is set up correctly by checking if a key library, TensorFlow, is installed. TensorFlow is a powerful tool for machine learning projects. Type the following command and press Enter:
Check TensorFlow Installation
python -c "import tensorflow; print(tensorflow.__version__)"
Note: If you see a version number appear, congratulations! You're all set up and ready to dive into Python and machine learning projects.
"},{"location":"umn-curriculum/setup/","title":"Setting Up and Calibrating Your Donkey Car","text":""},{"location":"umn-curriculum/setup/#step-1-connecting-to-your-car","title":"Step 1: Connecting to Your Car","text":"Objective: Our first task is to create a digital conduit through which your computer will communicate with your miniature autonomous vehicle, facilitating commands and data exchange.
"},{"location":"umn-curriculum/setup/#procedure","title":"Procedure","text":"Secure Connection Establishment: Utilize SSH (Secure Shell) to establish a secure command line interface with your vehicle's computing unit, the Raspberry Pi. Detailed instructions for this process can be found under the \"Networking\" section.
Creating the Project Directory:
Execute the command: donkey createcar --path ~/mycar
Create Donkey Car Project Directory
donkey createcar --path ~/mycar\n
This action constructs a dedicated project directory for your vehicle, organizing essential files and configurations necessary for its operation.
Upon completion, the project directory will comprise several critical components:
mycar/
: The root directory.
data/
: Contains captured imagery and metadata from driving sessions, acting as the vehicle's experiential memory.
models/
: Stores machine learning models that enable autonomous navigation, essentially forming the vehicle's \"cognitive\" abilities.
logs/
: A repository for operational logs, useful for troubleshooting and performance analysis.
config.py
& myconfig.py
: Configuration files for project defaults and personalized settings, respectively. While config.py remains standard, myconfig.py is where specific adjustments and preferences are defined.
calibrate.py
, manage.py
, train.py
: Specialized scripts for calibration, management, and training processes, though they are not directly interacted with at this stage.
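Taken together, a freshly created project folder looks roughly like this (a sketch assembled from the components listed above):
mycar/
├── config.py      # project defaults (leave as-is)
├── myconfig.py    # your personal overrides
├── calibrate.py
├── manage.py
├── train.py
├── data/          # driving images and metadata
├── models/        # trained models
└── logs/          # operational logs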
Customization Note: While mycar is the suggested name for simplicity, you may choose any directory name that suits your project's identity.
Power Requirements: Ensure that the vehicle's Raspberry Pi is adequately powered via a portable battery and that the car itself is powered by an appropriate RC battery. Verify that any switches on the electronic speed controller (ESC) or servo are in the 'on' position to enable operation.
"},{"location":"umn-curriculum/setup/#step-2-preparing-for-calibration","title":"Step 2: Preparing for Calibration","text":"Calibration is a fancy term for making sure your car understands how to move correctly when you tell it to. Before we start, place your car on a box or a similar object so the wheels are in the air. This way, it won't drive away during calibration.
About Your Car's Controls:
Your car uses something called a PCA 9685 Servo Driver to control its wheels. Imagine it as the car's steering wheel and gas pedal. Normally, the \"gas pedal\" is connected to output 0, and the \"steering wheel\" to output 1. If yours is different, you'll need to adjust the instructions accordingly.
"},{"location":"umn-curriculum/setup/#step-3-connecting-a-joystick-controller-to-the-donkey-car","title":"Step 3: Connecting A Joystick Controller to the Donkey Car","text":"Why use a joystick? If you've ever played video games, you know how intuitive a joystick can be for controlling movement. We can apply this same intuitive control to our Donkey Car project, allowing for more precise and responsive control compared to using a web interface.
Choosing the right controller: Not all controllers are created equal, and the steps to connect them can vary. However, most controllers that connect via a USB adapter will follow a similar setup process. We'll be using the Logitech F710 as our example, but if you have a different controller, check the Donkey Car documentation for specific instructions.
"},{"location":"umn-curriculum/setup/#getting-started-with-the-logitech-f710-controller","title":"Getting Started with the Logitech F710 Controller","text":"Prepare the Controller:
Ensure your Logitech F710 controller has fresh/charged batteries installed.
If it comes with a USB adapter (a small plug that connects to your computer or Raspberry Pi), make sure it's on hand.
Connect to the Raspberry Pi:
Configure the Settings:
Locate the file named myconfig.py
on your Raspberry Pi. This file contains various settings for your Donkey Car.
Find the section labeled \"Joystick\" and make the following changes:
Uncomment (remove the # symbol from the start of) the line USE_JOYSTICK_AS_DEFAULT = True. This tells the system to use the joystick as the primary control method.
Change CONTROLLER_TYPE to F710 to specify which controller you're using. (See the sketch after this list for what these two settings look like in the file.)
Pairing the Controller (If Needed):
Some users have reported issues when first trying to pair their F710 controller with the USB dongle. If this happens, you'll need to use Logitech's Pairing Utility to fix the connection. Here's how:
Attach the USB dongle to your computer (not the Raspberry Pi for now).
Download and open Logitech's Pairing Utility software.
Follow the on-screen instructions to pair the dongle with the controller.
Once paired successfully, reconnect the dongle to the Raspberry Pi.
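Once both changes are made, the joystick section of myconfig.py looks roughly like this (a minimal sketch; other joystick options are left at their defaults):
# myconfig.py -- joystick settings described above
USE_JOYSTICK_AS_DEFAULT = True  # use the gamepad instead of the web interface
CONTROLLER_TYPE = 'F710'        # Logitech F710 controller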
In this step, we're going to learn how to control the car's speed and direction. This might sound complex, but it's really about telling the car how fast to go and where to turn. We'll use some special commands and adjustments to do this, like tuning a musical instrument to get the right notes.
"},{"location":"umn-curriculum/setup/#speed-control-making-your-car-move","title":"Speed Control: Making Your Car Move","text":"Starting the Calibration Process:
First, we need to start the calibration for your car's speed. On your computer, type the command donkey calibrate --pwm-pin=PCA9685.1:40.0
. This command wakes up your car's speed control system and gets it ready for tuning.
donkey calibrate --pwm-pin=PCA9685.1:40.0\n
Finding the Neutral Speed:
Testing Movement:
Fine-Tuning Speed:
Saving Your Settings: Open ~/mycar/myconfig.py on your computer. Look for the section PWM_STEERING_THROTTLE and add your numbers for moving forward, staying still, and reversing. You'll remove the # symbol from in front of these lines to make sure the car uses these new settings.
Starting Steering Calibration:
Similar to speed, we begin calibrating steering by typing donkey calibrate --pwm-pin=PCA9685.1:40.1
. This gets your car's steering system ready for tuning.
donkey calibrate --pwm-pin=PCA9685.1:40.1\n
Adjusting Steering:
Saving Steering Settings: Update the ~/mycar/myconfig.py file with your steering adjustments. You'll specify values for turning left and right. Remember to remove the # symbol from these lines to activate them.
Through these steps, you're not just learning how to make a car move and turn. You're learning about calibration, which is a key concept in many areas of technology and engineering. It's all about making small adjustments to get things working just right, whether it's a toy car or a real-life project in computer science or robotics.
"},{"location":"umn-curriculum/setup/#step-5-final-adjustments","title":"Step 5: Final Adjustments","text":"Now, it's time to see how well your car drives:
Place the Donkey Car on level ground and make sure it's got space to move.
Run python manage.py drive --js and, after the terminal prompts you, open http://<hostname>.local:8887 in a browser to see through your car's camera.
python manage.py drive --js
Start with pressing up on the joystick to move forward. If the car veers off straight, adjust the PWM values as instructed, then test again until it moves straight.
Ensure USE_JOYSTICK_AS_DEFAULT
is uncommented and set to True
in your myconfig.py
file for easier control in the future.
And that's it! You've now calibrated your car. Remember, this is a process of trial and error, so take your time and enjoy the learning experience.
"},{"location":"umn-curriculum/training/","title":"Training","text":""},{"location":"umn-curriculum/training/#installing-donkey-car-software-on-the-host-pc","title":"Installing Donkey Car Software on the Host PC","text":"In this module, we'll guide you through the process of setting up your computer to work with Donkey Car, a powerful suite of tools that let you dive into AI, computer science, and machine learning.
"},{"location":"umn-curriculum/training/#option-1-windows","title":"Option 1: Windows","text":"We'll start by preparing your Windows computer to run Linux, a preferred operating system for many developers and AI enthusiasts. This is because Linux offers great support for programming and AI tools, including Donkey Car.
Enable Linux on Windows: We'll use something called the Windows Subsystem for Linux (WSL). This lets your Windows machine run Linux applications without needing a separate computer. Follow the instructions provided by Microsoft to install WSL. Just a heads-up, the graphical interface (GUI) of Donkey Car might not work perfectly with WSL, so we suggest using Ubuntu Linux directly if you can.
Check Your Graphics Card Drivers: If you're using an Nvidia graphics card, ensure you have the latest drivers installed. Drivers are software that lets your computer's hardware work properly. You can find and install the latest drivers from Nvidia's website.
Ubuntu is a popular, user-friendly version of Linux. Installing it will let you run the Donkey Car software.
Get Ubuntu: Go to the Microsoft Store on your Windows computer and download \"Ubuntu 22.04.3 LTS\". LTS stands for Long Term Support, which means it gets updates for 5 years, making it a stable choice for your projects.
Set Up Ubuntu: Once Ubuntu is installed, launch the app. The first time you open it, it'll ask you to set up a few things like creating a username and a password. Remember these details as you'll need them later!
Update Ubuntu: Before we install the Donkey Car software, let's make sure your Ubuntu system is up-to-date. Open the Ubuntu terminal (a place where you type commands) and enter the following:
To update your list of available software: sudo apt update
sudo apt update\n
To upgrade the software on your system: sudo apt upgrade
sudo apt upgrade\n
Install Necessary Software: Now, we'll install some programs that Donkey Car needs to run smoothly. Type these commands into the Ubuntu terminal:
For Python (the programming language we'll use): sudo apt install python3-pip
sudo apt install python3-pip\n
For other required tools: sudo apt install libmtdev1 libgl1 xclip
sudo apt install libmtdev1 libgl1 xclip\n
To fix a common error with the graphical interface, add a line to a configuration file: echo 'LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libstdc++.so.6' >> ~/.bashrc
echo 'LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libstdc++.so.6' >> ~/.bashrc
After you've completed these steps, you're ready to install the Donkey Car software itself. For that, please follow the instructions provided in the Ubuntu installation section of our curriculum.
"},{"location":"umn-curriculum/training/#option-2-ubuntu-linux","title":"Option 2: Ubuntu Linux","text":""},{"location":"umn-curriculum/training/#step-1-setting-up-your-computer-with-ubuntu","title":"Step 1: Setting Up Your Computer with Ubuntu","text":"What is Ubuntu? Ubuntu is a popular, user-friendly version of Linux, great for beginners and pros alike.
How to Install Ubuntu: Follow the instructions on the official Ubuntu website here. After installation, make sure your Ubuntu system is running smoothly.
Why Update? Updating ensures your system has the latest features and security improvements.
How to Update:
Open the Terminal application in Ubuntu.
Type sudo apt update
and press Enter to check for updates.
sudo apt update\n
Then, type sudo apt upgrade
and press Enter to apply the updates. Confirm if asked.
sudo apt upgrade\n
What is MiniConda? MiniConda is a smaller version of Anaconda, a toolkit for scientific programming. It helps manage different versions of Python and packages.
Installation Steps:
Create a folder for MiniConda by typing \"mkdir -p ~/miniconda3\" in the Terminal.
Download MiniConda for Python 3.11 by running: wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
Install MiniConda with: bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
Clean up the installer by running rm -rf ~/miniconda3/miniconda.sh
.
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh\nbash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3\nrm -rf ~/miniconda3/miniconda.sh\n
What\u2019s a Conda Environment? It's like a separate workspace for each project, with its own set of Python and packages, avoiding conflicts.
Creating Your Environment:
Choose a name for your environment; we'll use \"donkey\" here.
Create it with conda create -n donkey python=3.11
.
Activate your new environment by typing conda activate donkey
.
conda create -n donkey python=3.11\nconda activate donkey\n
About Donkey Car Software: This software suite allows you to control and customize your Donkey Car, a DIY autonomous driving project.
Installation:
For a standard user installation, type pip install donkeycar[pc]
in the Terminal.
pip install donkeycar[pc]\n
Updating Existing Installation:
If you've installed it before and need an update:
Update Conda with conda update -n base -c defaults conda
.
Remove the old environment with conda env remove -n donkey
.
Reinstall the Donkey Car software as described above.
conda update -n base -c defaults conda\nconda env remove -n donkey\n
Choosing Between PyTorch and TensorFlow: If you prefer using PyTorch over TensorFlow for your projects:
Install PyTorch and its necessary toolkit with conda install cudatoolkit=11 -c pytorch
.
conda install cudatoolkit=11 -c pytorch\n
Note: For those interested in more advanced setups or contributing to the Donkey Car software, see Appendix 2 for developer installation instructions.
"},{"location":"umn-curriculum/training/#preparing-for-training-gathering-data-from-your-donkey-car","title":"Preparing for Training: Gathering Data from Your Donkey Car","text":"To train your artificial intelligence (AI) model, we need specific files from your Donkey Car\u2014a miniature car powered by a Raspberry Pi (a small, affordable computer). These files include images (what the car sees), metadata (information about the images), and configuration settings. We'll transfer these files from the Donkey Car to a larger computer (we'll call this the \"Host PC\") where the training will happen.
"},{"location":"umn-curriculum/training/#step-1-ensure-both-devices-can-communicate","title":"Step 1: Ensure Both Devices Can Communicate","text":"Connect both the Donkey Car and the Host PC to the same Wi-Fi network. This could be your home network or any mobile hotspot. This step is crucial for the two devices to 'talk' to each other.
On your Host PC, open a program called 'Terminal'. Think of the Terminal as a way to talk to your computer using text commands instead of clicking on icons.
Type the command ping <hostname>.local
and press Enter.
Replace <hostname>
with the name of your Raspberry Pi. This command checks if your Host PC can communicate with the Donkey Car.
If you see replies coming back, congrats! Your PC and the Donkey Car are on speaking terms.
In the Terminal, type mkdir <dest_dir> and press Enter. Replace <dest_dir> with any name you like for your folder.
Now, type the command rsync -rv --progress --partial <username>@<hostname>.local:<car_dir> <dest_dir> and press Enter.
Here, you'll replace <username> with the username of your Raspberry Pi, <hostname> with its hostname, and <car_dir> with the path to the Donkey Car project folder (usually it's ~/mycar).
This command may look complex, but it's essentially instructing your computer to carefully copy all the necessary files from the Donkey Car into the folder you just created.
rsync -rv --progress --partial <username>@<hostname>.local:<car_dir> <dest_dir>
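For example, using the hostname and username from the earlier SSH example (donkey and donkeypi are placeholders; ~/mycar-data is a hypothetical destination folder you created with mkdir), the command might look like:
rsync -rv --progress --partial donkey@donkeypi.local:~/mycar ~/mycar-data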
To ensure our AI models learn effectively and quickly, preparing our training data is crucial. Properly prepared data improves the model's learning efficiency and its eventual performance and accuracy. Preparing the data involves removing irrelevant or problematic elements such as images and metadata that don't contribute to the learning objective, are captured during non-active moments (like breaks), or are simply errors.
Methodologies for Data Preparation: There are two main ways to prepare your data:
"},{"location":"umn-curriculum/training/#option-1-graphic-interface-gui","title":"Option 1: Graphic Interface (GUI)","text":"For those preferring a visual approach, utilizing the GUI is recommended. This method is particularly effective when working directly on a Ubuntu system. Working on Windows via the Windows Subsystem for Linux (WSL) might not offer the same smooth experience.
For individuals comfortable with command line operations, the Donkey Car Software provides tools for data cleaning through a command line and web interface process. Here's how to do it:
Step 1: Open the terminal on your computer and execute the command to clean your data. Type: donkey tubclean <Donkey Car folder>
followed by the path to your Donkey Car project folder. Remember to replace <Donkey Car folder>
with the actual path to your project folder.
donkey tubclean <Donkey Car folder>\n
Step 2: This command sets up a local web server accessible through your browser. Open your preferred web browser and visit http://localhost:8886
.
Step 3: Navigate to the data folder by clicking on data/
. You'll see an interface where you can review your images.
Step 4: In this interface, you can play through your images, organize them into different categories or tracks, and delete any that aren't needed.
Step 5: After you've selected and deleted unnecessary images, return to the terminal and press Ctrl+C to stop the web server and conclude the data cleaning process.
Remember, taking the time to prepare your data properly can significantly impact the success of your AI project. Choose the method that best suits your setup and preferences.
"},{"location":"umn-curriculum/training/#training-the-model","title":"Training the Model","text":"After preparing our data by cleaning it, we're ready to take the next exciting step: training our artificial intelligence (AI) model. This process teaches the AI how to make decisions based on the data we've given it. We can use two methods to train our model, depending on what's more comfortable for you: using a graphical user interface (GUI) or typing commands into the command line.
"},{"location":"umn-curriculum/training/#option-1-using-the-gui-graphical-user-interface","title":"Option 1: Using the GUI (Graphical User Interface)","text":"The GUI is a visual tool that lets you interact with the computer more intuitively. Here\u2019s how you can use it to train your model:
Start the Tool:
Prepare Your Data:
In the tool, find and click on the "Tub Manager" menu.
Choose "Load car directory" and navigate to your car's data directory.
Click "Load Tub" and select the "data" folder within that directory.
Train Your Model:
Go to the "Trainer" tab.
Click on "Train" and wait patiently for the training to complete.
Customizing Your Training:
If you wish to change the type of AI model you're training, you can select a different model type from the "Model Type" dropdown menu.
To tweak the training process, such as how fast the AI learns, modify the "Training" section in the myconfig.py file.
You can label your model with a comment for easy identification later on.
If you're more comfortable with typing commands, this method is for you:
Open the Terminal:
Type the Training Command:
Enter the following command, replacing placeholders with your specific information:
donkey train --tub=<path_to_your_data> [--config=<configuration_file>] [--model=<path_to_save_model>] [--type=<model_type>] [--transfer=<path_to_pretrained_model>]
--tub
specifies the location of your data. You can list multiple locations by separating them with commas or spaces.
--config
(optional) allows you to use a specific configuration file.
--model
(optional) sets where to save the trained model. If you don't specify a name, one will be automatically generated.
--type
lets you choose the type of model to train.
--transfer
(optional) continues training an already partially trained model.
You can also filter which data records to use for training by adjusting settings in the myconfig.py
file.
donkey train --tub=<path_to_your_data> [--config=<configuration_file>] [--model=<path_to_save_model>] [--type=<model_type>] [--transfer=<path_to_pretrained_model>]\n
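For example, a possible invocation with placeholder paths (adjust them to wherever your copied data and models actually live):
donkey train --tub=./data --model=./models/mypilot.h5 --type=linear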
Completion:
After training your model, you're ready to upload it to the Donkey Car for autonomous driving tests. This step will allow you to see how well your AI has learned to make decisions on its own. Remember, the goal here is to teach the AI to navigate a course independently, improving over time as it learns from more data.
"},{"location":"umn-curriculum/appendix/developer-install/","title":"Developer Install","text":"For those interested in contributing to the Donkey Car project or needing a development setup:
git clone https://github.com/autorope/donkeycar.git
Use git checkout dev for the latest features.
Run pip install -e .[dev] to install all necessary dependencies for development.
Configuring your training environment and parameters can significantly impact the performance of your AI model:
Choose a model type such as linear, categorical, or rnn.
Adjust parameters in the myconfig.py file to optimize training.
Proper data preparation is essential for training a reliable model. Data tubs contain the collected driving data, but not all data points are useful for training. Here's how to trim data tubs to enhance model training:
Run donkey tubclean --tub <path_to_tub> to launch a web interface for easier data selection and trimming.
Encountering and resolving common issues during setup, training, or driving:
Visual Studio Code is a free text editor with useful features and extensions for nearly any kind of software development. One such extension is Remote - SSH.
As mentioned on the Networking page, SSH allows you to access your Raspberry Pi's command line remotely (from your own laptop/PC). The Remote - SSH extension pushes this further by allowing you to open folders, edit files, use the integrated terminal, and take full advantage of VS Code.
"},{"location":"umn-curriculum/appendix/vsc-remote/#installation","title":"Installation","text":"ssh <username>@<hostname>.local\n
Welcome to the fascinating world of Artificial Intelligence (AI)! AI has become a buzzword across various industries, impacting everything from the way we shop to how we interact with technology daily. But what exactly is AI? Let's dive in and explore the realm of artificial intelligence together.
"},{"location":"umn-curriculum/background/foundations-of-ai/#what-is-artificial-intelligence","title":"What is Artificial Intelligence?","text":"Artificial Intelligence, or AI, refers to the capability of a machine or computer program to think, learn, and make decisions similarly to a human. It's a broad field of computer science dedicated to creating systems capable of performing tasks that typically require human intelligence. These tasks include problem-solving, recognizing speech, understanding languages, and more.
"},{"location":"umn-curriculum/background/foundations-of-ai/#history-of-ai","title":"History of AI","text":"The concept of AI is not new. It dates back to ancient myths and stories of artificial beings endowed with intelligence or consciousness by master craftsmen. However, the academic field of AI was only born in the mid-20th century, during a workshop at Dartmouth College in 1956. This workshop is often considered the founding event of AI as a field. Since then, AI has evolved from simple algorithms to complex machine learning and neural networks, becoming an integral part of the modern world.
"},{"location":"umn-curriculum/background/foundations-of-ai/#applications-of-ai-in-daily-life","title":"Applications of AI in Daily Life","text":"AI is no longer just a futuristic concept; it's a part of our everyday lives. Here are a few examples of how AI impacts our daily activities:
Personal Assistants: Siri, Alexa, and Google Assistant use AI to understand natural language and perform tasks for us.
Navigation: GPS and mapping services like Google Maps use AI to analyze the best routes and traffic conditions.
Recommendation Systems: Netflix, Spotify, and Amazon use AI to recommend movies, music, and products based on our previous choices.
Healthcare: AI is used in predictive analysis for patient care, medical imaging, and drug discovery.
At its core, AI works by processing large amounts of data and learning from it, much like a human learns from experience. Through various algorithms and models, AI systems can identify patterns, make decisions, and improve over time. The ability to learn and adapt makes AI incredibly powerful and versatile.
"},{"location":"umn-curriculum/background/introduction-to-linux/","title":"Introduction to Linux","text":"Linux is an open-source, Unix-like operating system that powers a significant portion of the internet, including most web servers and is the foundation for many other devices and technologies, including Android smartphones. It's known for its stability, security, and flexibility, making it a preferred choice for developers and system administrators. Linux comes in various distributions (distros), such as Ubuntu, Fedora, and Debian, each tailored for different needs and preferences.
"},{"location":"umn-curriculum/background/introduction-to-linux/#why-linux-for-education","title":"Why Linux for Education?","text":"Accessibility: Linux can be installed on a wide range of hardware, from old desktops to the most modern laptops, making it accessible to students without the need for high-end equipment.
Cost-Effective: Being open-source, it's free to use, which significantly reduces the costs associated with software licensing.
Learning Environment: It offers a command-line interface (CLI) that encourages the understanding of the fundamentals of operating systems, file systems, and programming.
Community Support: There's a vast community of Linux users and developers ready to offer assistance, making it an excellent resource for learners.
Affordability: Raspberry Pi devices are very affordable, making them accessible for classroom settings and individual students.
Versatility: Raspberry Pi can be used for a wide range of projects, from simple programming exercises to complex hardware/software integration projects.
Educational Resources: The Raspberry Pi Foundation provides extensive educational materials and a supportive community for educators and learners.
Practical Learning: Through projects that involve both software and hardware, students gain hands-on experience that is invaluable for understanding key concepts in computer science and electronics.
pwd
: Print Working Directory. Shows the path of the current directory you are in.
ls
: Lists files and directories in the current directory. Can be combined with flags like -l for detailed listing and -a to show hidden files.
cd
: Change Directory. Used to move between directories.
mkdir
: Make Directory. Creates a new directory.
rmdir
: Remove Directory. Deletes an empty directory.
rm
: Remove. Deletes files or directories. Use -r for recursive deletion.
touch
: Creates a new empty file or updates the timestamp on an existing file.
cat
: Concatenate and display files.
echo
: Displays a line of text/string that is passed as an argument.
grep
: Global Regular Expression Print. Searches for patterns within files.
find
: Searches for files and directories based on conditions provided.
sudo
: SuperUser DO. Executes commands with the security privileges of the superuser (root).
chmod
: Change Mode. Modifies the file permissions.
chown
: Change Owner. Changes the owner and group of a file or directory.
wget
: Retrieves content from web servers.
curl
: Transfers data from or to a server, similar to wget
but with more capabilities.
tar
: Tape Archive. Used to create, maintain, modify, and extract files that are archived in the tar format.
zip/unzip
: Compresses files into a zip format and extracts them.
Variables: Storing and using values.
Control Structures: if statements, loops (for, while).
Input/Output: Reading from and writing to files, reading user input.
Functions: Writing reusable code blocks.
Pipelines and Redirection: Using | to direct the output of one command as input to another, > and >> to write output to files.
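To make these ideas concrete, here is a small illustrative shell script; the file name, variable values, and log file are invented for the example:
#!/bin/bash
# count_laps.sh -- a tiny example: variables, a function, a loop,
# an if statement, user input, and a pipeline with redirection.

laps=3                          # variable

greet() {                       # function
    echo "Starting $1 laps"
}

greet "$laps"

for i in $(seq 1 "$laps"); do   # for loop
    echo "Lap $i complete"
done

read -p "Driver name: " driver  # read user input
if [ -n "$driver" ]; then       # if statement
    echo "Good driving, $driver!" | tee -a driving_log.txt   # pipeline + append redirection
else
    echo "No driver name given."
fi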
Networking Basics:
Explanation of how devices communicate over a network.
Introduction to IP addresses, DNS, routers, and modems.
What is a Wi-Fi Hotspot?
Definition and how it works.
Use cases for creating a Wi-Fi hotspot.
For Windows:
Step 1: Open 'Settings' > 'Network & Internet' > 'Mobile hotspot'.
Step 2: Turn on 'Share my Internet connection with other devices'.
Step 3: Configure your network by selecting 'Edit' to set a new network name (SSID) and password.
Step 4: Connect your device to the new Wi-Fi network using the credentials you have set.
For macOS:
Step 1: Click the Apple menu > 'System Preferences' > 'Sharing'.
Step 2: Select 'Internet Sharing' from the list on the left.
Step 3: Choose the way you want to share your connection (e.g., share your Ethernet connection via Wi-Fi).
Step 4: Click 'Wi-Fi Options...' to configure your network name (SSID) and password.
Step 5: Check the 'Internet Sharing' box to turn on your hotspot.
Step 6: Connect your device to the new Wi-Fi network.
For Linux (Ubuntu):
Step 1: Click the system menu > 'Wi-Fi Not Connected' > 'Wi-Fi Settings'.
Step 2: Click the menu button in the Wi-Fi settings window, then select 'Turn On Wi-Fi Hotspot...'.
Step 3: Adjust your network settings including SSID and password as prompted.
Step 4: Activate the hotspot and connect your device using the newly created network credentials.
Machine learning is a method of teaching computers to learn from data, identify patterns, and make decisions with minimal human intervention. One of the most exciting tools in machine learning is the neural network.
"},{"location":"umn-curriculum/background/introduction-to-nn/#what-is-a-neural-network","title":"What is a Neural Network?","text":"Imagine your brain. It's made up of billions of cells called neurons, connected by synapses. Each neuron receives signals, processes them, and passes signals to other neurons. A neural network in machine learning is inspired by this concept. It's a collection of artificial neurons or \"nodes\" that work together to understand data, learn patterns, and make decisions.
"},{"location":"umn-curriculum/background/introduction-to-nn/#the-basics-of-neural-networks","title":"The Basics of Neural Networks:","text":"Artificial Neurons: Just like neurons in the brain, artificial neurons receive input, process it, and generate an output. Each neuron's output is determined by a function of its input.
Layers: Neural networks have layers of neurons:
Input Layer: Receives the data.
Hidden Layers: The layers in between that process the data. There can be one or many hidden layers.
Output Layer: Produces the final result or prediction.
Learning: Neural networks learn by adjusting the importance (weights) of connections between neurons. This process is called training. During training, the network makes predictions, compares them to the correct answers, and adjusts the weights to improve future predictions.
Neural networks can learn and adapt. They're capable of handling complex problems like recognizing speech, identifying images, and predicting trends. Their ability to learn from data makes them incredibly powerful in today\u2019s data-driven world.
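As a tiny illustration of the layer structure described above, here is a minimal sketch using TensorFlow's Keras API (layer sizes and settings are arbitrary and chosen only for demonstration):
import tensorflow as tf

# Input layer (4 features) -> one hidden layer (8 neurons) -> output layer (1 prediction)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# "Learning" means adjusting the connection weights to reduce the error on known examples,
# which is what model.fit() does once you give it training data.
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()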
"},{"location":"umn-curriculum/background/introduction-to-os/","title":"Introduction to Operating Systems","text":"Welcome to the module on Operating Systems (OS) - an essential cornerstone for understanding the broader fields of Computer Science, Artificial Intelligence (AI), and Machine Learning (ML). This module is designed to introduce you to the basics of operating systems in a straightforward and accessible manner. It aims to provide a foundation for learners from various backgrounds, including high school students and professionals transitioning into tech. No prior expertise in computer science is required to grasp these concepts.
"},{"location":"umn-curriculum/background/introduction-to-os/#what-is-an-operating-system","title":"What is an Operating System?","text":"An Operating System is the software that manages all the computer hardware and software resources. It acts as an intermediary between the computer's hardware and the applications running on it. Think of the OS as a manager in an office environment, ensuring that resources such as printers, files, and applications are allocated efficiently, so everything runs smoothly.
"},{"location":"umn-curriculum/background/introduction-to-os/#key-functions-of-an-operating-system","title":"Key Functions of an Operating System","text":"Resource Management: Manages and allocates computer resources like CPU, memory, disk space, and peripheral devices.
File Management: Controls the reading and writing of data to disk and manages how files are stored, organized, and retrieved.
Process Management: Handles the creation, execution, and termination of processes. A process is an executing program.
Security and Access Control: Protects against unauthorized access to the system, ensuring data integrity and user privacy.
User Interface: Provides interfaces, such as graphical user interfaces (GUIs) or command-line interfaces (CLI), for users to interact with the computer.
Microsoft Windows: Known for its user-friendly GUI, it\u2019s widely used in homes, schools, and offices.
macOS: Developed by Apple Inc., it\u2019s known for its sleek design and integration with other Apple products.
Linux: An open-source OS that's highly customizable, making it popular among developers and in server environments.
Android: Based on the Linux kernel, it\u2019s designed primarily for touchscreen mobile devices like smartphones and tablets.
This module has introduced you to the basic concepts of operating systems, their key functions, types, examples, and their importance in the fields of AI and ML. Understanding operating systems is fundamental not only for aspiring computer scientists but also for anyone looking to delve into technology-driven fields. As we progress, we'll explore more technical aspects and real-world applications of operating systems in greater detail.
Stay tuned for the next modules where we'll dive deeper into the inner workings of operating systems and how they underpin the technological world around us.
"},{"location":"umn-curriculum/background/introduction-to-python/","title":"Introduction to Python","text":""},{"location":"umn-curriculum/background/introduction-to-python/#what-is-python","title":"What is Python?","text":"Python is a high-level, interpreted programming language known for its readability and versatility. It allows you to write clear programs on both a small and large scale. Because of its simplicity and elegance, Python is a great language for beginners, yet powerful enough for writing advanced applications.
"},{"location":"umn-curriculum/background/introduction-to-python/#why-learn-python","title":"Why Learn Python?","text":"Versatility: From web development to data analysis, machine learning, and even cybersecurity, Python can be used in various domains.
Community and Resources: Python has a large, supportive community. This means plenty of tutorials, documentation, and forums are available to help beginners.
Career Opportunities: Knowledge of Python opens up numerous career paths in tech, especially in growing fields like data science and artificial intelligence.
Installation: Installing Python is straightforward. You can download the latest version from the official Python website. Ensure to check the box that says \"Add Python to PATH\" during installation to use Python from the command line.
Your First Python Program: Once installed, you can write your first simple program. Open your text editor, type print(\"Hello, World!\")
, and save the file with a .py extension. Run it from your command line by typing python filename.py
.
Software versions indicate the state of the software at a particular point in time. They help in tracking changes, improvements, or fixes made over time.
Major Versions of Python
Python 2: Introduced in 2000, it was widely used for many years. However, Python 2 reached the end of its life on January 1, 2020, meaning it no longer receives updates or support.
Python 3: Released in 2008, Python 3 is the current version, actively developed and supported. It introduced many changes and improvements over Python 2, making programs more efficient and easier to read.
Using the latest version ensures access to new features, security patches, and performance improvements. However, when working on existing projects, you must use the version compatible with that project to avoid compatibility issues.
"},{"location":"umn-curriculum/background/introduction-to-python/#version-management","title":"Version Management","text":"Managing Multiple Versions: It's common for developers to work on projects that require different Python versions. Tools like pyenv for Unix-based systems and pyenv-win for Windows can help manage multiple versions on a single machine.
Virtual Environments: Virtual environments are a critical part of Python development, allowing you to maintain separate environments for different projects. This means you can have different versions of Python and various packages installed for each project without conflicts. You can create a virtual environment using python -m venv envname.
"},{"location":"umn-curriculum/background/introduction-to-python/#understanding-programming-through-python","title":"Understanding Programming Through Python","text":""},{"location":"umn-curriculum/background/introduction-to-python/#why-python","title":"Why Python?","text":"Python is a versatile and widely used programming language, favored for its easy-to-read syntax. It's used in web development, data analysis, artificial intelligence, scientific computing, and more. Python's syntax closely resembles English, which helps beginners understand the concepts of programming without getting bogged down by complex syntax rules.
"},{"location":"umn-curriculum/background/introduction-to-python/#first-steps-in-python-programming","title":"First Steps in Python Programming","text":"When you start programming in Python, you're learning to communicate with your computer through code. Let's explore some foundational concepts that will help you begin this journey.
Python Syntax: The rules that define how a Python program is written. Python was designed to be easy to understand and fun to use. The simplicity of Python's syntax allows beginners to focus on learning programming concepts rather than the intricacies of the language.
Print Statement: One of the basic functions in Python is print()
, which outputs data to the screen. It's a helpful way to see what your program is doing. For example, print(\"Hello, world!\")
displays the message \"Hello, world!\".
Variables and Data Types: Variables are used to store information that can be referenced and manipulated in a program. Each variable in Python has a data type, which dictates the kind of data it can hold. Common data types include:
int
(integer): Represents whole numbers, e.g., 5
float
: Represents decimal numbers, e.g., 5.0
str
(string): Represents text, e.g., \"Hello, world!\"
bool
(boolean): Represents True or False
Comments: Comments are parts of the code ignored by the Python interpreter. They're used to explain what the code does, making it easier to understand. In Python, you can create a comment by starting the line with the hash (#) symbol.
Basic Operations: Python supports basic arithmetic operations like addition (+), subtraction (-), multiplication (*), division (/), and more. These operations can be performed on numbers and, in some cases, on other types of data.
Control Flow: Control flow is the order in which individual statements, instructions, or function calls are executed or evaluated. The primary types of control flow are conditional statements (if, elif, else) and loops (for, while).
Conditional Statements allow you to execute different blocks of code based on certain conditions.
Loops allow you to execute a block of code multiple times, typically with some variation each time.
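Here is a small sketch that pulls several of these ideas together: variables and data types, a comment, the print function, arithmetic, a conditional, and a loop (the values are arbitrary):
# A few laps around the track, purely for illustration
name = "world"   # str
laps = 3         # int
speed = 2.5      # float

print("Hello, " + name + "!")

if laps > 0:
    for lap in range(laps):              # runs the block 'laps' times
        print("Lap", lap + 1, "of", laps, "at speed", speed)
else:
    print("No laps to drive.")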
"},{"location":"about/#coder-dojo","title":"Coder Dojo","text":"A community of over 2,300 free, open and local programming clubs for young people 58,000 young people are being creative with technology with the help of 12,000 volunteers in 94 countries
Code Savvy strives to make kids and teens more code-savvy through creative educational programs and services. We incubate and support community-based programs that bring technology and know-how to local kids and educators, all the while championing gender and ethnic diversity. Code Savvy is dedicated to ensuring the next generation of computer science professionals represents the billions of users of tomorrow\u2019s innovative technologies. Visit https://codesavvy.org
"},{"location":"about/#licensing","title":"Licensing","text":"Like all CoderDojo-created content, you are free to use this content in K-12 noncommercial educational settings for teaching without paying license fees. We also encourage our community to create variations and help us enlarge the curriculum. We always appreciate attribution!
Details of the license terms are here:
Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
"},{"location":"ai-assisted-learning/","title":"AI Assisted Learning Call To Action","text":"ChatGPT Prompt: How Minnesota Can Enhance the K-12 Curriculum to Create More Tech Workers
"},{"location":"ai-assisted-learning/#1-early-introduction-to-computer-science","title":"1. Early Introduction to Computer Science","text":"By implementing these changes and continually evaluating the curriculum's effectiveness, Minnesota can foster a tech-savvy generation ready to take on roles in the ever-evolving tech industry.
"},{"location":"business-plan-2020/","title":"AI Racing League Business Plan for 2020","text":""},{"location":"business-plan-2020/#summary","title":"Summary","text":"This business plan governs the AI Racing League for the calendar year 2020. We are operating as a project within CodeSavvy. CodeSavvy is a register 501C organization. CodeSavvy's mission is to promote coding skills in youth with a focus on promoting mentoring and training for girls and disavantaged youth.
"},{"location":"business-plan-2020/#brief-history","title":"Brief History","text":"The AI Racing League was founded in the summary of 2019. We were inspired by the viral global DonkeyCar project. We wondered if the DonkeyCar racing events could be used to teach AI in the classroom. The goals of the founders were to promote fun events that taught AI to girls and disavantaged youth. We recieved an initial round of $9K in funding from the Optum Technology Social Responsibility and lauched our first event in August of 2019. This event was done at the International School of Minnesota and attracted members from the AI Research community, schools, educators and students.
Since August we have participated in over a dozen events promoting AI instruction. We have trained and initial set of approximatly 50 mentors that are familair with the use of the DonkeyCar
"},{"location":"business-plan-2020/#status-as-of-january-2020","title":"Status as of January 2020","text":""},{"location":"business-plan-2020/#inventory-of-assets","title":"Inventory of Assets","text":"Many colleges have \"Senior Seminar\" courses that allow students to spend time on projects that they are interested in pursuing. This is an outline of a sample college-level project that a team of six students will spend about 60 hours each on.
"},{"location":"college-project/#equipment","title":"Equipment","text":"We can currently supply both the cars and GPU. The school will need to purchase its own track or borrow one from a local school such as Washburn. Optum also has a track that is not being used. The price for a track is around $175.
"},{"location":"college-project/#key-learning-objectives","title":"Key Learning Objectives","text":"Each lesson is designed to provide both theoretical knowledge and practical experience, ensuring a comprehensive understanding of the topics covered.
Title: AI Racing League - 60 Hours
"},{"location":"college-project/#lesson-1-introduction-to-donkeycars-and-project-overview-6-hours","title":"Lesson 1: Introduction to DonkeyCars and Project Overview (6 hours)","text":""},{"location":"college-project/#theory","title":"Theory","text":"At the end of the course, students will be asked to give a presentation and demonstration to the other students in the course.
"},{"location":"college-project/#assessments","title":"Assessments","text":"The student will be asked to create a GitHub repository and keep their documentation, notebooks and code on that site. The use of mkdocs is strongly encouraged. Evaluators will look for signs of high-quality documentation that could be reused by future students in their learning and problem-solving
Students will be asked to describe the problems they had and how they overcame these challenges. They will also reflect on if they effectively used generative AI to solve problems.
Students will each be asked if they would recommend working with their teammates in the future. Higher ratings will be reflected in the student's course evaluations.
"},{"location":"command-line-tips/","title":"Command Line Tips","text":""},{"location":"command-line-tips/#get-setup","title":"Get Setup","text":"git config --global user.name \"Joe Smith\"\ngit config --global user.email \"Joe.Smith123@gmail.com\"\ngit config --global credential.helper store\n
"},{"location":"command-line-tips/#raspberry-pi-command-line-tips","title":"Raspberry Pi Command Line Tips","text":"From Terminal, you can open the current directory in the File Manager using the xdg-open
command. This is similar to the Mac open command.
$ xdg-open .\n
"},{"location":"command-line-tips/#see-if-the-pwm-board-is-working","title":"See If the PWM Board Is Working","text":"i2cdetect -l\n
This should return
i2c-1 i2c bcm2835 (i2c@7e804000) I2C adapter\n
i2cdetect -y 1\n
0 1 2 3 4 5 6 7 8 9 a b c d e f\n00: -- -- -- -- -- -- -- -- -- -- -- -- -- \n10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- \n20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- \n30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- \n40: 40 -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- \n50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- \n60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- \n70: 70 -- -- -- -- -- -- -- \n
Note that lines 40 and 70 have values under column 0 (I2C bus 1). If you unplug the data cable you should get:
0 1 2 3 4 5 6 7 8 9 a b c d e f\n00: -- -- -- -- -- -- -- -- -- -- -- -- -- \n10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- \n20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- \n30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- \n40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- \n50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- \n60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- \n70: -- -- -- -- -- -- -- -- \n
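If you would rather check from Python, here is a minimal sketch using the smbus2 package. It assumes smbus2 has been installed (pip install smbus2) and that the PWM board sits at its usual address 0x40 on I2C bus 1:
from smbus2 import SMBus
try:
    with SMBus(1) as bus:        # open I2C bus 1
        bus.read_byte(0x40)      # try to read one byte from the PCA9685
    print(\"PWM board found at 0x40\")
except OSError:
    print(\"No device at 0x40 - check the wiring\")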
"},{"location":"command-line-tips/#sd-card-speed-test","title":"SD Card Speed Test","text":"Home -> Accessories -> Raspberry Pi Diagnostics
Raspberry Pi Diagnostics - version 0.9\nSat Jul 3 14:25:23 2021\n\nTest : SD Card Speed Test\nRun 1\nprepare-file;0;0;20628;40\nseq-write;0;0;21999;42\nrand-4k-write;0;0;4498;1124\nrand-4k-read;8695;2173;0;0\nSequential write speed 21999 KB/sec (target 10000) - PASS\nRandom write speed 1124 IOPS (target 500) - PASS\nRandom read speed 2173 IOPS (target 1500) - PASS\nTest PASS\n
"},{"location":"contacts/","title":"AI Racing League Contacts","text":"We have a long term vision of using an intelligent agent that will recommend the right content for each of our students based on their current knowledge and their learning goals.
"},{"location":"curriculum/#beginning-green-concepts","title":"Beginning (Green) Concepts","text":""},{"location":"curriculum/#batteries","title":"Batteries","text":""},{"location":"curriculum/#motors","title":"Motors","text":""},{"location":"curriculum/#donkey-car","title":"Donkey Car","text":""},{"location":"curriculum/#activity","title":"Activity","text":"Go to the donkey car station and look at the sample Donkey Car. Ask a mentor to show you the parts.
"},{"location":"curriculum/#questions","title":"Questions","text":"1) What are the key parts of the Donkey Car?The key parts are: * RC Car chassis * Nvidia Jetson Nano * Servo controller * Camera * Battery for the Nano
2) How do the front wheels turn?A 180 degree servo is used to steer the car
3) Can you find an electric motor?There is only a single motor in the RC chassis
4) Can you find a battery? Are their multiple batteries?There are two batteries - one for the motor and one for the Jetson Nano
5) Where is the Jetson Nano (computer)?It is right on top!
6) Where is the camera? Is it on the front or back of the car?The camera is on the top facing forward
7) What happens to the opposite wheel when you turn it?The transmission makes the wheels turn in opposite direction when one wheel is turned. - Is this correct?
8) How much does a Donkey Car cost?The car costs around $250 each. The RC chassis cost about $110.
9) Why do you think they call it a \u201cDonkey Car\u201d?They call it a \"Donkey Car\" because, like a Donkey, it is functional but not very sleek.
"},{"location":"curriculum/#intermediate-concepts","title":"Intermediate Concepts","text":""},{"location":"curriculum/#machine-learning","title":"Machine Learning","text":""},{"location":"curriculum/#activity_1","title":"Activity","text":"Go to the machine learning station and watch the demos. Ask about the difference between if-else statements and machine learning.
"},{"location":"curriculum/#questions_1","title":"Questions","text":"1) What is Machine Learning? How does it differ from traditional rule-based programming?Machine learning is a method of data analysis that automates analytical model building, based on the idea that systems can learn from data. Rule-based programming is built off of if-else statements in code, and therefore every possible situation has to be thought of in advance by the programmer. Therefore, machine learning is well suited for situations where all possible inputs may not be defined.
2) How does a computer learn?The computer learns through a process called training. Training is the process of adjusting a mathematical formula by feeding it data and adjusting the formula until it produces the desired output.
3) What are the major groups of machine learning?There are 5 major groups of algorithms within machine learning. They are: * The connectionists (Neural Networks) * The analogizers (Support Vector Machines) * The Bayesians (Bayes\u2019 Theorem) * The evolutionaries (Genetic Algorithms) * The symbolists (Inverse Deduction)
4) Applications of machine learning are everywhere, what are some examples?Some applications of machine learning are: * Voice Assistants (Siri, Alexa, etc.) * Translation * Self-Driving Cars
"},{"location":"curriculum/#blue-concepts","title":"Blue Concepts","text":""},{"location":"curriculum/#black-concepts","title":"Black Concepts","text":""},{"location":"demo/","title":"Demos","text":"Although our students love hands-on learning with our DonkeyCars, there are other aspects of Artificial Intelligence we like to discuss in our classes. Here are some demos we use in our classrooms.
"},{"location":"demo/#the-teachable-machine-by-google","title":"The Teachable Machine by Google","text":"This demo works with almost any PC that has a built-in video camera. You give it a set of images or pictures, or sounds and you build a model that predicts what a new images our sounds might be. This is called a \"classification\" model.
Much of our classroom work is centered around the hot topic of Deep Learning. But AI is much more than just Deep Learning. Here are a few other areas to consider. (Taken from the book The Master Algorithm)
"},{"location":"demo/#5-camps-of-machine-learning-demos","title":"5 Camps of Machine Learning Demos","text":""},{"location":"demo/#connectionists-neural-networks","title":"Connectionists (Neural Networks)","text":"Check Out TensorFlow Playground
"},{"location":"demo/#analogizers-support-vector-machines","title":"Analogizers (Support Vector Machines)","text":"TODO: FIND GOOD DEMO
"},{"location":"demo/#bayesians-bayes-theorem","title":"Bayesians (Bayes\u2019 Theorem)","text":"Check Out A Bayes' Theorem Example
"},{"location":"demo/#evolutionaries-genetic-algorithms","title":"Evolutionaries (Genetic Algorithms)","text":"Watch This Animation Learn to Walk
"},{"location":"demo/#symbolists-inverse-deduction","title":"Symbolists (Inverse Deduction)","text":"Look at this Decision Tree Demo
"},{"location":"faqs/","title":"AI Racing League Frequently Asked Questions","text":""},{"location":"faqs/#how-much-does-it-cost","title":"How much does it cost?","text":"All our events are free.
However, if you want to build your own car, you are welcome to bring it to the events. Parts for a DonkeyCar typically run about $250 US. See our Car Parts Lists for details.
"},{"location":"faqs/#what-do-i-have-to-know-before-i-come","title":"What do I have to know before I come?","text":"Nothing! We have material for beginners without any prior knowledge of AI.
"},{"location":"faqs/#what-car-part-hardware-do-you-use","title":"What car part hardware do you use?","text":"We use mostly NVIDIA Nano and Raspberry Pi 4 for our single board computers. We use a wide variety of RC-car engines but the [Exceed Magnet] 1/16 scale RC car is a low-cost standard. See our Car Parts Lists for details. Typical car parts cost around $250 US.
"},{"location":"faqs/#what-gpus-do-you-use-and-how-much-do-they-cost","title":"What GPUs do you use and how much do they cost?","text":"We use a standard PC chassis running Lunix with a NVIDIA GPU such as a GTX 2080. These PCs can be purchased for around $1,500. Se our GPU Parts List for details.
"},{"location":"faqs/#how-do-i-sign-up-as-a-student","title":"How do I sign up as a student?","text":"The best way to get involved is by signing up as a student at the CoderDojo Twin Cities web site:
Coderdojotc.org
"},{"location":"faqs/#how-do-i-become-a-mentor","title":"How do I become a mentor?","text":"The best way to get involved is by signing up as a mentor at the CoderDojo Twin Cities web site:
https://www.coderdojotc.org/mentor_signup/
"},{"location":"faqs/#how-do-i-start-my-own-chapter-of-the-ai-racing-league","title":"How do I start my own chapter of the AI Racing League","text":"Please connect with Dan McCreary on LinkedIn and indicate in the note you would like to start your own chapter. Be sure to include information about your leadership and technical background and any related experience working with STEM programs.
"},{"location":"faqs/#can-i-get-a-grant-to-purchase-hardware-for-our-school-or-club","title":"Can I get a grant to purchase hardware for our school or club?","text":"We are working on arranging a grant application process. The best way to start this process is to gather a small group of volunteers that can create a sustainable club. Include people that have a combination of fundraising, technology, education and marketing skills. Reach out to local school administration officials to build a community of science/math and STEM educators. Network with local companies that are trying to build local talent in AI and machine learning.
Please contact Dan McCreary on LinkedIn for details.
"},{"location":"glossary/","title":"AI Racing League Glossary of Term","text":""},{"location":"glossary/#calibration","title":"Calibration","text":"A step in setting up a DonkeyCar where around five values configuration file is created that reflect the physical aspects of the RC car. There are three parameters for the throttle and two parameters for the steering. It is important to get these five parameters correct so you can precisely drive your DonkeyCar.
"},{"location":"glossary/#catalog-file","title":"Catalog File","text":"A format of storing our image-related throttle and steering data in line-oriented file where each line contains the serialized JSON information when the image was captured.
Note that the catalog files are not pure JSON files. Only the data within each line is a valid JSON object.
The catalog file formats have changed between DonkeyCar releases. The current version is called V2 format.
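Because each line of a catalog file is its own JSON object, a few lines of Python are enough to read one. This is only a sketch: the file name catalog_0.catalog is hypothetical and the exact field names vary between DonkeyCar releases:
import json
records = []
with open(\"catalog_0.catalog\") as f:        # hypothetical catalog file name
    for line in f:
        line = line.strip()
        if line:                             # skip blank lines
            records.append(json.loads(line)) # each line is a valid JSON object
print(\"Loaded\", len(records), \"records\")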
"},{"location":"glossary/#coderdojo","title":"CoderDojo","text":"An international program of over 2,300 coding clubs that uses data-driven practices to get students interested in coding. Many of the aspects of the AI Racing League uses these same principals.
Key aspects of CoderDojo are:
Their main web site is: http://coderdojo.com/
"},{"location":"glossary/#coderdojo-twin-cities-python-labs","title":"CoderDojo Twin Cities Python Labs","text":"These labs are a set of free-courses to learn Python using fun turtle graphics. There is no experience needed.
These labs are a set of free-courses to learn MicroPython. You should have a background in Python to use these labs. There are lessons in sensors, motors and robots.
This is a $25 robot that you build and program with MicroPython. If you are working on a project that will lead up to a full DonkeyCar, this is an ideal project to get you started. The robot will get you familiar with concepts like PWM, motor controllers and sensors.
Code Savvy is a not-for-profit organization with 501(c)3 status from the IRS, under which the AI Racing League operates as a sub-project. All the AI Racing League financials are organized under a Code Savvy program. Donations to the AI Racing League should be made through Code Savvy donations. Questions about Code Savvy can be sent to kidscode@codesavvy.org
These are small laminated cards, each carrying the information for one concept that students can learn. The idea is one concept per card. See the CoderDojo TC Guide for Authoring Concept Cards
"},{"location":"glossary/#donkey-car","title":"Donkey Car","text":"This is a trademarked name of a car that is used at our events. The name implies \"ugly\" so you know that they are not designed to look pretty, just functional cars with a camera on the front.
Each of the AI Racing League events usually has at least one GPU server for training our models. These are typically small portable PCs with a GPU card in them. The entire GPU server costs around $1,200 and can train a 20,000-image data set in under five minutes.
We typically suggest that clubs writing grants use an NVIDIA GeForce RTX 2070 8GB or similar card since it is both fast enough for 10-team events and cost-effective enough that schools can afford it. These cards are often available used on eBay for a few hundred dollars.
Note that we have tried to use cloud-based services at some of our events, but we can't guarantee that there is enough WiFi bandwidth to move large datasets and models to and from the cloud. We also feel that the tasks involved in setting up the GPU server are a valuable skill for our students.
An electronic circuit that controls and regulates the speed of an electric motor. It also can reverse the direction of the motor. Our ESC
The angle our cameras need to point down to have a good view of the road ahead.
"},{"location":"glossary/#ollama","title":"Ollama","text":"A large-language model trained to answer questions in a chatbot-style interface. Ollama is unique in that is has versions that are highly compressed so they can run on a schools GPU with 12GB RAM. This can be purchased for under $2,000.
"},{"location":"glossary/#normalized","title":"Normalized","text":"Values that have been converted into a standard that can be used across many situations.
For example, we don't store the exact PWM ratios of the throttle and steering values in our catalog files. We convert these values into ranges from 0.0 to 1.0 so that all our machine learning models can share them. This is why we also need the configuration values when the drive commands are used to convert the normalized values back to the appropriate PWM ranges unique to each car.
"},{"location":"glossary/#pulse-width-modulation","title":"Pulse Width Modulation","text":"The way that we control the [Electronic Speed Controller] (ESC) and the servo by sending digital square waves with a variable ratio of the width of the positive part of the square wave.
We want our clubs to all have affordable but high-quality tracks that are easy to roll up and store. Our suggestion is to find used billboard vinyl in a dark color (black or dark blue) and then use white and yellow tape to place down the lines.
The step in DonkeyCar setup where we take approximately 20,000 small image files and the throttle and steering information with each image to build a deep neural network. The training step requires us to move the data off the DonkeyCar's SD card and transfer the data to a more powerful GPU server. Using a typical $1,200 GPU server we can build a model file in around five minutes. This file is then transferred back to the DonkeyCar for autonomous driving.
"},{"location":"glossary/#tubs","title":"Tubs","text":"This is the term that the DonkeyCar software uses to store training data. Each tub consists of a catalog of information about the drive and the images associated with that drive.
Note that the format of the tubs changes over time, so old tub formats may need to be converted to newer formats.
Each tub is a directory (folder) that has two components:
One of the two single board computers we use in our DonkeyCars. The current Nanos have 4GB RAM and a GPU for accelerating real-time inference.
The full product name is the NVIDIA Jetson Nano.
The price for a 4GB Nano is around $99 but they occasionally go on sale for $79. The Nano became available for sale in the US in April of 2019. A 2GB version has also been sold for $59 but the lack of RAM memory makes it difficult to use for many of our AI Racing League events and we don't recommend it.
Note that we do not use the Nano for training. We transfer the data to a GPU server that has more parallel cores for training.
"},{"location":"google-analytics/","title":"Google Analytics","text":"Our Google Tracking ID is: G-RL4MZ0MHZ4
You can see the activity here:
Google Dashboard
"},{"location":"google-analytics/#how-we-enabled-google-analytics","title":"How We Enabled Google Analytics","text":"mkdocs material supports Google Analysis. We only need to add four lines to our mkdocs.yml configuration file.
extra:\n analytics:\n provider: google\n property: G-RL4MZ0MHZ4\n
See our mkdocs.yml on GitHub here:
mkdocs.yml file in GitHub
The following line is placed into each HTML web page in the site branch:
<script async=\"\" src=\"https://www.googletagmanager.com/gtag/js?id=G-RL4MZ0MHZ4\"></script>\n
"},{"location":"hackathon/","title":"Hackathon Ideas","text":"Here are a set of ideas that can be used to plan a Hackathon around the DonkeyCar. Of course, if people are not familiar with the DonkeyCar, just getting it to work is a good project! These are more for teams that are extending the DonkeyCar software.
"},{"location":"hackathon/#beginner-projects-done-in-under-one-day","title":"Beginner Projects - done in under one day","text":"Make it easy to transfer DonkeyCar test data to our GPU server. Start with a UNIX shell script that compresses the tub file and puts the data on a jump drive. Then work on using SSH to copy the files to the GPU server. Then add configuration of the Avahi application and the mDNS protocols to autodiscover the ARL GPU servers and prompte the user.
"},{"location":"hackathon/#mobile-app-to-drive-the-car","title":"Mobile App to Drive The Car","text":"Most robot systems like the MIP have a simple mobile application for driving your robot around. There are two modes: A tilt mode (where you steer by tilting the phone) and a pressure mode where you can control the speed and direction by pressing on a virtual joystick. The problem we have with the current DonkeyCar 3.X system is that the web-based application is difficult to use. The tilt mode does not work on web browsers. We suggest you use a program like AppInventor for Android or Google Flutter and Dash building mobile apps.
"},{"location":"hackathon/#leaderboard-web-page","title":"Leaderboard Web Page","text":"Create a web application that tracks what teams are in the lead. The app should be a single-page application that allows team scores to be updated on a web form. The leaderboard can also be \"smart\" a look for the team config files on each DonkeyCar on the local-area network.
"},{"location":"hackathon/#oled-extension","title":"OLED Extension","text":"Add a low-cost OLED screen to each car using the SPI bus. Have the OLED screen show key parameters such as hostname, static IP address, disk space free, training data size etc. Bonus points for a mode button to cycle through screens. See Dan McCreary for the hardware.
"},{"location":"hackathon/#led-strips-for-training-server-status","title":"LED Strips for Training Server Status","text":"Add an low-cost WS-2811-B LED strip to the GPU server. Make the strip blue when idle, red when you start training an new model, and have it fade to green as the model converges. See Dan McCreary for the hardware.
"},{"location":"hackathon/#training-graph","title":"Training Graph","text":"As students walk in, give them a tablet to register. It will also ask them basic questions. It will then ask them how long they will be there. It will then suggest a set of activities and some concepts to master. The graph is a dependacy graph of all the concepts we teach at the event. Also suggest a probability they will have fun at the event.
"},{"location":"hackathon/#single-source-publishing-for-concept-cards","title":"Single Source Publishing for Concept Cards","text":"Our cards need to be authored in MarkDown but we want to disply on the web, in PPT and with PDF. To do this we want to adopt a single-source publishing pipeline.
"},{"location":"learning-strategy/","title":"AI Racing League Educational Philosophy","text":"The AI Racing League educational philosophy is founded on the following values:
Our Curriculum is based around building a series of concept cards that adhere to the \"one concept per card\" rule. Each card is a 5.5in X 8in laminated card with questions or challenges on the front and answers on the back. Concept cards have three difficulty levels with different colored borders.
Our goal is to keep the concepts as \"flat\" as possible without a deep level of dependency. We try to keep at least half of our cards as green beginner cards.
Students will walk into the AI Racing League and see a stack of cards. They will pick up one card or a set of cards and work on these. When they are done they return the cards and select another set of cards.
Because of our
Concept Cards in Google Docs
"},{"location":"learning-strategy/#engineering-challenges","title":"Engineering Challenges","text":"To develop a world class curriculum, we need to partner with senior engineers and curriculum developers. Here are some of the challenges we need to address.
"},{"location":"learning-strategy/#challenge-1-make-it-easy-for-short-term-learning","title":"Challenge #1: Make it easy for short term learning","text":"Engineers with experience in both hardware and software can build their own DonkeyCar from parts in a few weeks, our goal is to allow students from a wide variety of backgrounds to be able to participate in events in a flexible way. A typical CoderDojo event typically only lasts two hours and students may not have the appropriate background in hardware, Python programming or UNIX.
"},{"location":"learning-strategy/#challenge-2-on-site-traning-hardware","title":"Challenge #2: On site traning hardware","text":"Many people that are building DonkeyCars use a standard Mac or PC laptop. These systems take up to two hours to train a typical model - too long for many events. One solution would be to leverage clound-based GPUs to accelerate learning. This option typically requires transferring around 1/2 GB of images up to the clound for training the models. Models, which can typically be 10MB, then need to be transferred back from the clound to the local car. Our challenge here is that many locations may not have high-bandwith uploading and downloading services that could handle this traffic.
One solution is to acquire some robust GPUs that students can use to quickly train complex models - typically in 15 to 20 minutes. This hardware needs to be easy to use - for example we need to do folder-based drag and drops and press a single button to begin training.
"},{"location":"media/","title":"Media","text":"Ready, Set, Algorithms! Teams Learn AI by Racing Cars Morningstar, Liberty Mutual workers are coming up with business ideas after exploring machine learning via mini self-driving vehicles
"},{"location":"presentations/","title":"AI Racing League Presentations","text":"These presentations are all licensed under our creative commons share alike non-commercial with attribution licenses.
Here are some sites that are of interest:
Raspberry Pi 3, 4, the Nvidia Nano, the Nvidia DX2, and the Intel Movidius Neural Compute Stick. The base DonkeyCar today uses the Raspberry Pi 3+, which has a list price of $35. This hardware is just barely able to process images in real time. Small changes in lighting will throw the car off the track. The new Raspberry Pi 4 with 4GB RAM is a new option.
The Nvidia Nano on the other hand has 128 CUDA core processors and has more than enough power to drive around a track in real time with varied lighting conditions. This is the hardware we have used for our first generation cars in the AI Racing League.
There are also college-level autonomous driving teams that use the more expensive Nvidia DX2 hardware.
"},{"location":"resources/#nvidia-nano","title":"Nvidia Nano","text":"Jetson Nano References
Brennan Neuser's PiCar Site
This is a much smaller car that has a small turning radius built around the SunFounder AI PiCar kit. The software was modified by Brennan Neuser and the hardware was purchased by Sharat Bhatra
"},{"location":"six-week-curriculum/","title":"Sample Six-Week Curriculum","text":"This is a sample suggested curriculum for a six week AI Racing League summer school project. The students would all meet together for two hours, once a week. There are then homework assignments. The students don't need any prior experience.
"},{"location":"six-week-curriculum/#week-1-overview-and-unboxing","title":"Week 1: Overview and Unboxing","text":"See the suggested parts list for week 1
"},{"location":"six-week-curriculum/#week-2-booting-a-raspberry-pi-unix-calibration-intro-to-python-and-raspberry-pi","title":"Week 2: Booting a Raspberry Pi, UNIX, Calibration, Intro to Python and Raspberry Pi","text":"This section show how leage administrators can create their own SD image files.
"},{"location":"admin/02-sd-image/","title":"Creating a League SD Image","text":"Many times teams will not have the time to build their own image during the time allocated for an event. It typically takes 2-4 hours to create a DonkeyCar image that is ready to drive. To get around this problem, leagues frequently create their own \"reference image\" that are given to teams.
"},{"location":"admin/02-sd-image/#checklist-for-the-league-image","title":"Checklist for the League Image","text":"python --version
## Things to Remove from your Image
Note
The AI Racing League ONLY uses this for training our models. We don't need elaborate CPU overclocking or a water-cooled CPU, and we don't need a powerful CPU with lots of RAM. We just need to be able to train a 20K-image model within around 5-10 minutes. Most GPUs can do this.
"},{"location":"admin/03-purchasing-guide/#portable-case","title":"Portable Case","text":"We wanted a small lightweight case with a handle and tempered glass sides so our teams can see what is inside. The price is around $110.00.
Lian Li TU150 Mini ITX Desktop Case
"},{"location":"admin/03-purchasing-guide/#motherboard","title":"Motherboard","text":""},{"location":"admin/03-purchasing-guide/#ram","title":"RAM","text":""},{"location":"admin/03-purchasing-guide/#gpu","title":"GPU","text":""},{"location":"admin/03-purchasing-guide/#solid-state-drive","title":"Solid State Drive","text":""},{"location":"admin/04-tools/","title":"AI Racing League Tools","text":"Here are some handy tools to help you with your cars.
"},{"location":"admin/04-tools/#usb-voltage-current-meter","title":"USB Voltage Current Meter","text":"Many USB battery packs don't maintain a voltage of 5.1v when under load. To monitor this, you need to watch the voltage level that arrives at the single-board computer input voltage. This needs to include any voltage drop over the USB power cable.
Sample item on e-Bay
This item uses four semi-double-loop integrating converter ICs with a precision bandgap reference. The vendor claims the device has:
Specifications:
Note the voltage drop when current is passing through the device. In practice, the exact voltage drop varies with the current. For a typical project drawing 300 milliamps, the voltage drop is around 100 mV.
"},{"location":"admin/car-box-checklist/","title":"AI Racing League Car Box Checklist","text":""},{"location":"admin/car-box-checklist/#donkey-car-nvidia-kit","title":"Donkey Car NVIDIA Kit","text":"Car Name: ____ Mac Address: ____ Static IP Address: ___
[ ] RC Car Battery Charger (7.2v NiMh)
[ ] Pi Camera Module V2 with 3D printed chassis
[ ] Anker 5V 6800mAh battery with charging cable - note: it draws 900mA when charging, so use a 1ft high-current USB cable.
[ ] 2.5 amp 5v barrel connector for desktop use of Nvidia
Optional Accessories (not in the box) 1. [ ] Keyboard 1. [ ] External Monitor
Nvidia Nano Serial Number: ________ Nvidia Nano Purchase Date: December 12, 2019
"},{"location":"admin/car-box-checklist/#raspberry-pi-donkeycar-kit","title":"Raspberry Pi DonkeyCar Kit","text":"We have looked at many variations of parts and decided to go with the Nvidia Nano, a TP-Link WiFi dongle and the Logitech F710 Joystick. Here are our recomended parts.
We are also looking into getting the wide-angle (160 degree) cameras but we have not tested these enough.
Part Name Description Price Link Note 128GB microSD card Samsung 128GB 100MB/s (U3) MicroSDXC Evo Select Memory Card with Adapter (MB-ME128GA/AM) $20 https://www.amazon.com/Samsung-MicroSD-Adapter-MB-ME128GA-AM/dp/B06XWZWYVP MicroCenter in St. Louis Park has these for about 1/2 the prices Camera Raspberry Pi Camera Module V2-8 Megapixel,1080p $30 https://www.amazon.com/Raspberry-Pi-Camera-Module-Megapixel/dp/B01ER2SKFS MUST be Module V2. The V1 will NOT work with the Nano. Dupont Connectors (F-F) EDGELEC 120pcs 20cm Dupont Wire Female to Female Breadboard Jumper Wires 3.9 inch 1pin-1pin 2.54mm Connector Multicolored Ribbon Cables DIY Arduino Wires 10 15 20 30 40 50 100cm Optional $8 for 120 https://www.amazon.com/EDGELEC-Breadboard-1pin-1pin-Connector-Multicolored/dp/B07GCY6CH7 Only need one of these Nvidia Nano Single Board Computer NVIDIA Jetson Nano Developer Kit $99 https://www.amazon.com/NVIDIA-Jetson-Nano-Developer-Kit/dp/B07PZHBDKT Ships in two days Power for Pi - 6700mAh Anker [Upgraded to 6700mAh] Astro E1 Candy-Bar Sized Ultra Compact Portable Charger, External Battery Power Bank, with High-Speed Charging PowerIQ Technology $24 https://www.amazon.com/Anker-Upgraded-Candy-Bar-High-Speed-Technology/dp/B06XS9RMWS I like this one but there are other variations. Some are rated at 10,000 mAh Power Supply for Nano SMAKN DC 5V/4A 20W Switching Power Supply Adapter 100-240 Ac(US) $10 https://www.amazon.com/SMAKN-Switching-Supply-Adapter-100-240/dp/B01N4HYWAM Note that this is a 4A 12V power supply. RC Car 1/16 2.4Ghz Exceed RC Magnet Electric Powered RTR Off Road Truck Stripe Blue NEW $119 https://www.ebay.com/itm/1-16-2-4Ghz-Exceed-RC-Magnet-Electric-Powered-RTR-Off-Road-Truck-Stripe-Blue-NEW/223337258165 E-Bay Wifi USB Dongle N150 USB wireless WiFi network Adapter for PC with SoftAP Mode - Nano Size, Compatible with Linux Kernal 2.6.18~4.4.3 (TL-WN725N) $7 https://www.amazon.com/TP-Link-TL-WN725N-wireless-network-Adapter/dp/B008IFXQFU/ I purchased one at Microcenter and it worked out-of-the-box on the Nano. The Ubuntu drivers are pre-loaded! Servo Module HiLetgo 2pcs PCA9685 16 Channel 12-Bit PWM Servo Motor Driver IIC Module for Arduino Robot $10 for 2 https://www.amazon.com/gp/product/B07BRS249H/ref=ppx_yo_dt_b_asin_title_o00_s00?ie=UTF8&psc=1 Note the quantity is 2 USB Power Cable Anker [4-Pack] Powerline Micro USB (1ft) - Charging Cable $10 for 4 https://www.amazon.com/gp/product/B015XR60MQ/ref=ppx_yo_dt_b_asin_title_o02_s00 Note the quantity is 4 but you only need one Replacement Battery 7.2V 1100mAh 6x 2/3A Rechargeable Ni-MH RC Battery Pack w/Small Tamiya Connector 10cmX3cmX1.5cm $9.88 + $2.39 Shipping https://www.ebay.com/i/183877810537 Takes several weeks to ship from China. We are looking for a local supplier. Some replacements (Airsoft guns) have reverse polarity."},{"location":"admin/gpu-parts/","title":"AI Racing League GPU Components","text":""},{"location":"admin/gpu-parts/#design-goals","title":"Design Goals","text":"We wanted to create a local training system that had fast training times but was portable so that we can easily carry it in a car and ship it to remote events. We can't assume any connectivity to the Internet for our events since some of them might be held in parking lots with no network access. Here are our design objectives.
We also drive to remote events and the equipment needs to be outside overnight in freezing weather. This rules out using any water-cooled hardware which gets easily damaged in freezing weather.
"},{"location":"admin/gpu-parts/#fast-training-times","title":"Fast Training Times","text":"We want students to be able to drive around a track 20 times (10 times clockwise and 10 times counterclockwise) and generate a reasonable sized data set of 20 frames per second and 224X224 images. This ends up being about 10,000 images. The sizes are a bit larger for larger tracks and slower drivers.
"},{"location":"admin/gpu-parts/#why-we-like-the-nvidia-rtx-2070","title":"Why We Like the NVIDIA RTX 2070","text":"We want to train with this data set in under five minutes. This means that we want to use a GPU card that has about 2000 CUDA cores. An example of this is the Nvidia GeForce GTX graphic cards. The RTX 2070 which currently has a list price of around $500. There are many people that are upgrading their video game systems and are selling these GPUs used on eBay and Craigslist.com for a few hundred dollars.
A higher cost option is the NVIDIA RTX 2080 which has a retail list price of around $1,200 USD. The benchmarks for image training for these two boards were done by Dr Donald Kinghorn in March of 2019. [His analysis] (https://www.pugetsystems.com/labs/hpc/TensorFlow-Performance-with-1-4-GPUs----RTX-Titan-2080Ti-2080-2070-GTX-1660Ti-1070-1080Ti-and-Titan-V-1386/) shows that a single GTX 2080 Ti can process about 293 images per second. The GTX 2070 only does about 191 images per second. But for about 1/3 of the price it is still a good value.
"},{"location":"admin/gpu-parts/#small-and-lightweight","title":"Small and Lightweight","text":"We originally were \"gifted\" a somewhat old GPU server used in a data center for training deep learning models. Although the sever was \"free\", it was over 70 pounds and had far more capability for RAM and power then we needed at events. Based in this experience we opted to build a much smaller system using a mini enclosure with a handle. We selected the Mini ITX Desktop Case and determined that we could still fit the GPU in this case.
"},{"location":"admin/gpu-parts/#rugged","title":"Rugged","text":"Must be able to take the bumps of shipping and be able to be left out in a car overnight in freezing temperatures. This was a requirement for remote events in rural Minnesota communities. We opted for a full SSD drive to keep the moving parts to a minimum.
"},{"location":"admin/gpu-parts/#easy-to-ship-to-remote-sites","title":"Easy to ship to remote sites","text":"We had to be able to put the unit is a remote shipping case. We are still looking for low-cost cases that are lightweight but protective.
"},{"location":"admin/gpu-parts/#visibility","title":"Visibility","text":"We wanted students to be able to look into the case and see the parts. There is a trend to also purchase RGB LED versions of components which we thought we could program to change from RED to Green during the training process as the model converges. We have not found a good API for the parts so a simple $5 LED strip on a Arduino Nano might be a better idea. See the Moving Rainbow project for sample designs. We create these at the IoT hackthons each year.
"},{"location":"admin/gpu-parts/#sample-parts-list","title":"Sample Parts List","text":""},{"location":"admin/gpu-parts/#2023-update","title":"2023 Update","text":"PCPartPicker Part List $769 with Monitor by Neal Kelly
Part Name Description Price Link CPU AMD Ryzen 5 3600 3.6 GHz 6-Core Processor $95.00 Amazon Motherboard MSI A520M-A PRO Micro ATX AM4 Motherboard $101.11 Amazon Memory Silicon Power SP016GBLFU320X02 16 GB (1 x 16 GB) DDR4-3200 CL22 Memory $23.99 Amazon Storage TEAMGROUP MP33 512 GB M.2-2280 PCIe 3.0 X4 NVME Solid State Drive $22.49 Amazon Video Card Asus Dual GeForce RTX 3060 V2 OC Edition GeForce RTX 3060 12GB 12 GB Video Card $299.99 Amazon Case Thermaltake Versa H18 MicroATX Mini Tower Case $49.99 Amazon Power Supply be quiet! Pure Power 11 400 W 80+ Gold Certified ATX Power Supply $89.69 Amazon Monitor Acer V227Q Abmix 21.5\" 1920 x 1080 75 Hz Monitor $87.29 Amazon Total $769.55 Part Name Description Price Link Note CPU AMD Ryzen 5 3600 3.6 GHz 6-Core Processor $189.99 Motherboard Gigabyte X570 I AORUS PRO WIFI Mini ITX AM4 $219.99 RAM Corsair Vengeance RGB Pro 32 GB (2 x 16 GB) DDR4-3200 Memory $162.99 Link Notes Storage Gigabyte AORUS NVMe Gen4 1 TB M.2-2280 NVME Solid State Drive $209.99 Link Notes Cooling Be quiet! Dark Rock Pro 4, BK022, 250W TDP $89.90 https://www.amazon.com/dp/B07BY6F8D9/ref=cm_sw_r_cp_api_i_PYp-DbFCY51CH Avoid liquid cooler GPU Card NVIDIA GeForce RTX 2070 Ti 8 GB $499.99 https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2070-super/ $500 price is a lower cost alternative Case Lian Li TU150 Mini ITX Desktop Case $109.99 Link We love the handle on this small case and the glass side panel. Power Supply Corsair SF 600W 80+ Gold SFX Power Supply $114.99 Link 600W is an overkillNote that this motherboard does come with builtin WiFi. The external antenna must be connected but it is easy to get lost in transport. You might want to get a few additional WiFi antennas like these RP-SMA Male Antenna We also think we could get buy with a smaller and lighter power supply, but the 600W model gives the system the opportunity to add external devices that might draw more power.
"},{"location":"admin/gpu-parts/#assembly","title":"Assembly","text":"There are several good videos on YouTube that show how to assemble custom systems. You can also use a search engine to find videos for each of the parts. The Liquid coolers can be tricky to install correctly if you don't have experience. We also recommend reading the user manauals for each of the parts. They are usually on line.
Jon Herke's Tiny Monster
"},{"location":"admin/gpu-parts/#installing-nvidia-drivers-on-ubuntu","title":"Installing NVIDIA Drivers on Ubuntu","text":"Installing NVIDIA drivers on Ubuntu is notoriously painful and difficult. One mis-step and you can't get to the monitor and have to ssh in to fix things. Make sure to setup ssh before you install the NVIDIA drivers.
We used the UNIX command line to install the NVIDIA drivers. The GUI tool on Ubuntu did not work for us in some settings. See NVIDIA Driver Install.
A guide to do this is here: Installation of Nvidia Drivers on Ubuntu 18
"},{"location":"admin/gpu-shell/","title":"Shell Commands for the GPU Server","text":"The following is a list of shell commands for the AI Racing League GPU Server. We have moved all the commands for setting up the NVIDIA GPU to the file NVIDIA Driver Install.
The samples below are run if you are on the GPU running the Terminal shell or you have logged on using the secure shell program.
"},{"location":"admin/gpu-shell/#secure-shell-login","title":"Secure Shell Login","text":"$ ssh arl@arl1.local\n
"},{"location":"admin/gpu-shell/#check-the-version-of-ubuntu","title":"Check the Version of Ubuntu","text":"$ lsb_release -a\n
Response:
No LSB modules are available.\nDistributor ID: Ubuntu\nDescription: Ubuntu 18.04.3 LTS\nRelease: 18.04\nCodename: bionic\n
"},{"location":"admin/gpu-shell/#list-the-cpu-information","title":"List the CPU Information","text":"lscpu\n
Response:
Architecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nByte Order: Little Endian\nCPU(s): 12\nOn-line CPU(s) list: 0-11\nThread(s) per core: 2\nCore(s) per socket: 6\nSocket(s): 1\nNUMA node(s): 1\nVendor ID: AuthenticAMD\nCPU family: 23\nModel: 113\nModel name: AMD Ryzen 5 3600 6-Core Processor\nStepping: 0\nCPU MHz: 2195.902\nCPU max MHz: 3600.0000\nCPU min MHz: 2200.0000\nBogoMIPS: 7187.07\nVirtualization: AMD-V\nL1d cache: 32K\nL1i cache: 32K\nL2 cache: 512K\nL3 cache: 16384K\nNUMA node0 CPU(s): 0-11\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd mba sev ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca\n
The key is that we have 12 CPUs and each CPU has two threads. That means that we have 24 threads that run concurrent operations on this server. This is plenty of capacity for our GPU server.
"},{"location":"admin/gpu-shell/#ram","title":"RAM","text":"free -m\n
Response:
total used free shared buff/cache available\nMem: 32124 1627 28879 75 1618 30019\nSwap: 2047 0 2047\n
This indicates we have 32 GB RAM. The GPU server should have a minimum of 8 GB of RAM.
"},{"location":"admin/gpu-shell/#disk-space","title":"Disk Space","text":"df -h /\n
Response:
Filesystem Size Used Avail Use% Mounted on\n/dev/nvme0n1p3 229G 178G 40G 82% /\n
This shows we have a total of 229 gigabytes of disk space and 40 gigabytes available. We will need about 4 GB for each training set we store.
"},{"location":"admin/gpu-shell/#per-user-disk-usage","title":"Per User Disk Usage","text":"du -hs /home/* 2>/dev/null\n
Response:
8.5G /home/arl\n1.4G /home/dan\n16K /home/dan2\n155G /home/donkey\n
"},{"location":"admin/gpu-shell/#add-a-new-gpu-server-user","title":"Add A New GPU Server User","text":"adduser donkey\n
You can also allow the user to have \"sudo\" rights by using the following command: sudo usermod -aG sudo donkey\n
"},{"location":"admin/gpu-shell/#change-the-hostname","title":"Change the Hostname","text":"sudo vi /etc/hostname\n
Change the name to \"gpu-server2\" or a similar name.
"},{"location":"admin/gpu-shell/#gpu-commands","title":"GPU Commands","text":""},{"location":"admin/gpu-shell/#display-type","title":"Display Type","text":"sudo lshw -class display\n
Response:
description: VGA compatible controller\n product: TU102 [GeForce RTX 2080 Ti Rev. A]\n vendor: NVIDIA Corporation\n physical id: 0\n bus info: pci@0000:09:00.0\n version: a1\n width: 64 bits\n clock: 33MHz\n capabilities: pm msi pciexpress vga_controller bus_master cap_list rom\n configuration: driver=nvidia latency=0\n resources: irq:101 memory:f6000000-f6ffffff memory:e0000000-efffffff memory:f0000000-f1ffffff ioport:e000(size=128) memory:c0000-dffff\n
Note that line 2 lists the product as TU102 [GeForce RTX 2080 Ti Rev. A], but it does not indicate how much RAM we have on the device.
"},{"location":"admin/gpu-shell/#nvidia-gpu-configuration","title":"NVIDIA GPU Configuration","text":"nvidia-smi\n
Result:
Mon Sep 25 14:17:46 2023 \n+-----------------------------------------------------------------------------+\n| NVIDIA-SMI 460.91.03 Driver Version: 460.91.03 CUDA Version: 11.2 |\n|-------------------------------+----------------------+----------------------+\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n| | | MIG M. |\n|===============================+======================+======================|\n| 0 GeForce RTX 208... Off | 00000000:09:00.0 Off | N/A |\n| 40% 32C P8 18W / 260W | 184MiB / 11018MiB | 0% Default |\n| | | N/A |\n+-------------------------------+----------------------+----------------------+\n\n+-----------------------------------------------------------------------------+\n| Processes: |\n| GPU GI CI PID Type Process name GPU Memory |\n| ID ID Usage |\n|=============================================================================|\n| 0 N/A N/A 1026 G /usr/lib/xorg/Xorg 35MiB |\n| 0 N/A N/A 2315 G /usr/lib/xorg/Xorg 90MiB |\n| 0 N/A N/A 2509 G /usr/bin/gnome-shell 10MiB |\n| 0 N/A N/A 4118 G ...eLanguageDetectionEnabled 34MiB |\n+-----------------------------------------------------------------------------+\n
This display is difficult to read. But the key item is in the second column under the Memory-Usage
area:
184MiB / 11018MiB
This says that we are using 184 megabytes and have 11.018 Gigabyte to use to run our machine learning tasks.
"},{"location":"admin/gpu-shell/#nvidia-gpu-monitor","title":"NVIDIA GPU Monitor","text":"The runs similar to the UNIX top command, but for the GPU.
watch -d -n 0.5 nvidia-smi\n
"},{"location":"admin/gpu-shell/#nvidia-gpu-utilization","title":"NVIDIA GPU Utilization","text":"This shows the GPU running at 42% utilization during the training process.
nvidia-smi\nMon Jul 26 20:24:16 2021 \n+-----------------------------------------------------------------------------+\n| NVIDIA-SMI 470.57.02 Driver Version: 470.57.02 CUDA Version: 11.4 |\n|-------------------------------+----------------------+----------------------+\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n| | | MIG M. |\n|===============================+======================+======================|\n| 0 NVIDIA GeForce ... Off | 00000000:09:00.0 On | N/A |\n| 41% 49C P2 136W / 260W | 10892MiB / 11016MiB | 42% Default |\n| | | N/A |\n+-------------------------------+----------------------+----------------------+\n\n+-----------------------------------------------------------------------------+\n| Processes: |\n| GPU GI CI PID Type Process name GPU Memory |\n| ID ID Usage |\n|=============================================================================|\n| 0 N/A N/A 1327 G /usr/lib/xorg/Xorg 18MiB |\n| 0 N/A N/A 1398 G /usr/bin/gnome-shell 71MiB |\n| 0 N/A N/A 1574 G /usr/lib/xorg/Xorg 155MiB |\n| 0 N/A N/A 1705 G /usr/bin/gnome-shell 32MiB |\n| 0 N/A N/A 23722 G ...AAAAAAAAA= --shared-files 25MiB |\n| 0 N/A N/A 27071 G ...AAAAAAAAA= --shared-files 9MiB |\n| 0 N/A N/A 32486 C ...a3/envs/donkey/bin/python 10571MiB |\n+-----------------------------------------------------------------------------+\n
"},{"location":"admin/joystick/","title":"Joystick","text":"Logitec F710 Game Controller for DonkeyCar
https://docs.donkeycar.com/parts/controllers/
Testing to see if the Nano Recognizes the F710 USB Dongle You can use the \"lsusb\" UNIX shell command to list all the USB devices:
$ lsusb Bus 002 Device 002: ID 0bda:0411 Realtek Semiconductor Corp. Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 001 Device 004: ID 0bda:8179 Realtek Semiconductor Corp. RTL8188EUS 802.11n Wireless Network Adapter Bus 001 Device 005: ID 046d:c21f Logitech, Inc. F710 Wireless Gamepad [XInput Mode] Bus 001 Device 002: ID 0bda:5411 Realtek Semiconductor Corp. Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Note that the USB device with an ID of 046d:c21f has been found in the 4th line above. The first ID before the colon is the device manufacturer (Logiteh) and the second is the id of their device (c21f). Linux looks this number up in their system and then loads the driver for this type of device.
The driver will create a device file in the /dev/input directory called js0
$ ls -l /dev/input/js0 crw-rw-r--+ 1 root input 13, 0 Aug 16 19:30 /dev/input/js0
The \"c\" in the first letter says that this is a character I/O device.
$ sudo apt-get install evtest [sudo] password for dan: Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: apt-clone archdetect-deb bogl-bterm busybox-static cryptsetup-bin dpkg-repack gir1.2-timezonemap-1.0 gir1.2-xkl-1.0 grub-common kde-window-manager kinit kio kpackagetool5 kwayland-data kwin-common kwin-data kwin-x11 libdebian-installer4 libkdecorations2-5v5 libkdecorations2private5v5 libkf5activities5 libkf5attica5 libkf5completion-data libkf5completion5 libkf5declarative-data libkf5declarative5 libkf5doctools5 libkf5globalaccel-data libkf5globalaccel5 libkf5globalaccelprivate5 libkf5idletime5 libkf5jobwidgets-data libkf5jobwidgets5 libkf5kcmutils-data libkf5kcmutils5 libkf5kiocore5 libkf5kiontlm5 libkf5kiowidgets5 libkf5newstuff-data libkf5newstuff5 libkf5newstuffcore5 libkf5package-data libkf5package5 libkf5plasma5 libkf5quickaddons5 libkf5solid5 libkf5solid5-data libkf5sonnet5-data libkf5sonnetcore5 libkf5sonnetui5 libkf5textwidgets-data libkf5textwidgets5 libkf5waylandclient5 libkf5waylandserver5 libkf5xmlgui-bin libkf5xmlgui-data libkf5xmlgui5 libkscreenlocker5 libkwin4-effect-builtins1 libkwineffects11 libkwinglutils11 libkwinxrenderutils11 libqgsttools-p1 libqt5designer5 libqt5help5 libqt5multimedia5 libqt5multimedia5-plugins libqt5multimediaquick-p5 libqt5multimediawidgets5 libqt5opengl5 libqt5positioning5 libqt5printsupport5 libqt5qml5 libqt5quick5 libqt5quickwidgets5 libqt5sensors5 libqt5sql5 libqt5test5 libqt5webchannel5 libqt5webkit5 libxcb-composite0 libxcb-cursor0 libxcb-damage0 os-prober python3-dbus.mainloop.pyqt5 python3-icu python3-pam python3-pyqt5 python3-pyqt5.qtsvg python3-pyqt5.qtwebkit python3-sip qml-module-org-kde-kquickcontrolsaddons qml-module-qtmultimedia qml-module-qtquick2 rdate tasksel tasksel-data Use 'sudo apt autoremove' to remove them. The following additional packages will be installed: evemu-tools libevemu3 The following NEW packages will be installed: evemu-tools evtest libevemu3 0 upgraded, 3 newly installed, 0 to remove and 7 not upgraded. Need to get 38.2 kB of archives. After this operation, 191 kB of additional disk space will be used. Do you want to continue? [Y/n] y Get:1 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 libevemu3 arm64 2.6.0-0.1 [11.0 kB] Get:2 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 evemu-tools arm64 2.6.0-0.1 [12.3 kB] Get:3 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 evtest arm64 1:1.33-1build1 [14.9 kB] Fetched 38.2 kB in 1s (56.1 kB/s) debconf: delaying package configuration, since apt-utils is not installed Selecting previously unselected package libevemu3:arm64. (Reading database ... 140149 files and directories currently installed.) Preparing to unpack .../libevemu3_2.6.0-0.1_arm64.deb ... Unpacking libevemu3:arm64 (2.6.0-0.1) ... Selecting previously unselected package evemu-tools. Preparing to unpack .../evemu-tools_2.6.0-0.1_arm64.deb ... Unpacking evemu-tools (2.6.0-0.1) ... Selecting previously unselected package evtest. Preparing to unpack .../evtest_1%3a1.33-1build1_arm64.deb ... Unpacking evtest (1:1.33-1build1) ... Setting up evtest (1:1.33-1build1) ... Processing triggers for libc-bin (2.27-3ubuntu1) ... Processing triggers for man-db (2.8.3-2ubuntu0.1) ... Setting up libevemu3:arm64 (2.6.0-0.1) ... Setting up evemu-tools (2.6.0-0.1) ... Processing triggers for libc-bin (2.27-3ubuntu1) ... dan@danm-nano:~$
Now run it:
$ evtest
No device specified, trying to scan all of /dev/input/event* Not running as root, no devices may be available. Available devices: /dev/input/event2: Logitech Gamepad F710
Select the device event number [0-2]: 2
Logitech Gamepad F710 Input driver version is 1.0.1 Input device ID: bus 0x3 vendor 0x46d product 0xc21f version 0x305 Input device name: \"Logitech Gamepad F710\" Supported events: Event type 0 (EV_SYN) Event type 1 (EV_KEY) Event code 304 (BTN_SOUTH) Event code 305 (BTN_EAST) Event code 307 (BTN_NORTH) Event code 308 (BTN_WEST) Event code 310 (BTN_TL) Event code 311 (BTN_TR) Event code 314 (BTN_SELECT) Event code 315 (BTN_START) Event code 316 (BTN_MODE) Event code 317 (BTN_THUMBL) Event code 318 (BTN_THUMBR) Event type 3 (EV_ABS) Event code 0 (ABS_X) Value 128 Min -32768 Max 32767 Fuzz 16 Flat 128 Event code 1 (ABS_Y) Value -129 Min -32768 Max 32767 Fuzz 16 Flat 128 Event code 2 (ABS_Z) Value 0 Min 0 Max 255 Event code 3 (ABS_RX) Value 128 Min -32768 Max 32767 Fuzz 16 Flat 128 Event code 4 (ABS_RY) Value -129 Min -32768 Max 32767 Fuzz 16 Flat 128 Event code 5 (ABS_RZ) Value 0 Min 0 Max 255 Event code 16 (ABS_HAT0X) Value 0 Min -1 Max 1 Event code 17 (ABS_HAT0Y) Value 0 Min -1 Max 1 Properties: Testing ... (interrupt to exit)
Now, as you press any button or move a joystick, you will see the corresponding events.
When I press the yellow Y we see: Event: time 1566006064.962158, type 1 (EV_KEY), code 308 (BTN_WEST), value 1 Event: time 1566006064.962158, -------------- SYN_REPORT ------------ Event: time 1566006065.129981, type 1 (EV_KEY), code 308 (BTN_WEST), value 0 Event: time 1566006065.129981, -------------- SYN_REPORT ------------
Blue X Event: time 1566006110.047015, type 1 (EV_KEY), code 307 (BTN_NORTH), value 1 Event: time 1566006110.047015, -------------- SYN_REPORT ------------ Event: time 1566006110.182606, type 1 (EV_KEY), code 307 (BTN_NORTH), value 0 Event: time 1566006110.182606, -------------- SYN_REPORT ------------
Red B Event: time 1566006143.423217, type 1 (EV_KEY), code 305 (BTN_EAST), value 1 Event: time 1566006143.423217, -------------- SYN_REPORT ------------ Event: time 1566006143.499642, type 1 (EV_KEY), code 305 (BTN_EAST), value 0 Event: time 1566006143.499642, -------------- SYN_REPORT ------------
Green A Event: time 1566006184.060282, type 1 (EV_KEY), code 304 (BTN_SOUTH), value 1 Event: time 1566006184.060282, -------------- SYN_REPORT ------------ Event: time 1566006184.128408, type 1 (EV_KEY), code 304 (BTN_SOUTH), value 0 Event: time 1566006184.128408, -------------- SYN_REPORT ------------
Moving the joystick generates: Event: time 1566006255.549652, -------------- SYN_REPORT ------------ Event: time 1566006255.553650, type 3 (EV_ABS), code 1 (ABS_Y), value -10923 Event: time 1566006255.553650, -------------- SYN_REPORT ------------ Event: time 1566006255.557650, type 3 (EV_ABS), code 1 (ABS_Y), value -14264 Event: time 1566006255.557650, -------------- SYN_REPORT ------------ Event: time 1566006255.561652, type 3 (EV_ABS), code 1 (ABS_Y), value -18633
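If you want to read the same events from Python instead of evtest, the python-evdev library exposes the /dev/input devices directly. This is a minimal sketch, assuming the F710 shows up as /dev/input/event2 as in the listing above and that the library has been installed with pip install evdev:

```python
# Minimal joystick event reader using python-evdev
# Assumes the F710 is /dev/input/event2 (check with evtest or ls /dev/input)
from evdev import InputDevice, ecodes

gamepad = InputDevice('/dev/input/event2')
print('Reading from:', gamepad.name)

for event in gamepad.read_loop():
    if event.type == ecodes.EV_KEY and event.value == 1:
        # A button was pressed (value 1 = press, 0 = release)
        print('Button code:', event.code)
    elif event.type == ecodes.EV_ABS:
        # A joystick axis moved; codes 0 and 1 are the left stick X/Y axes
        print('Axis', event.code, 'value', event.value)
```

Press Ctrl-C to stop the loop; the button and axis codes printed here match the ones reported by evtest above.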
"},{"location":"admin/nvidia-driver-install/","title":"Install the NVIDIA Driver","text":"Ideally you should be able to use the Ubuntu \"Software and Updates\" tool to install the NIVIDA driver. This usually works, but if you get errors, you may need to use the unix shell.
"},{"location":"admin/nvidia-driver-install/#nvidia-card-verification","title":"NVIDIA Card Verification","text":"You can first verify that the GPU card has been installed and powered up. We can use the \"list hardware\" command with the display option:
$ sudo lshw -C display\n
*-display UNCLAIMED \n description: VGA compatible controller\n product: GV102\n vendor: NVIDIA Corporation\n physical id: 0\n bus info: pci@0000:09:00.0\n version: a1\n width: 64 bits\n clock: 33MHz\n capabilities: pm msi pciexpress vga_controller bus_master cap_list\n configuration: latency=0\n resources: memory:f6000000-f6ffffff memory:e0000000-efffffff memory:f0000000-f1ffffff ioport:e000(size=128) memory:c0000-dffff\n
This shows that there is a GPU card installed but not claimed by the display.
"},{"location":"admin/nvidia-driver-install/#nvidia-devices","title":"NVIDIA Devices","text":"You can then use the ubuntu-drivers command to see the devices.
$ ubuntu-drivers devices\n== /sys/devices/pci0000:00/0000:00:03.1/0000:09:00.0 ==\nmodalias : pci:v000010DEd00001E07sv000010DEsd000012A4bc03sc00i00\nvendor : NVIDIA Corporation\ndriver : nvidia-driver-470 - distro non-free recommended\ndriver : nvidia-driver-460-server - distro non-free\ndriver : nvidia-driver-418-server - distro non-free\ndriver : nvidia-driver-460 - distro non-free\ndriver : nvidia-driver-450-server - distro non-free\ndriver : xserver-xorg-video-nouveau - distro free builtin\n\n== /sys/devices/pci0000:00/0000:00:01.2/0000:02:00.0/0000:03:04.0/0000:05:00.0 ==\nmodalias : pci:v00008086d00002723sv00008086sd00000084bc02sc80i00\nvendor : Intel Corporation\nmanual_install: True\ndriver : backport-iwlwifi-dkms - distro free\n
"},{"location":"admin/nvidia-driver-install/#ubuntu-drivers-autoinstall","title":"Ubuntu Drivers Autoinstall","text":"sudo ubuntu-drivers autoinstall\n
This tool will tell you what drivers you need to install.
sudo apt-get install nvidia-driver-470\n
This will often generate errors but it will indicate what other libraries need to be installed for the 470 driver to work.
"},{"location":"admin/nvidia-driver-install/#final-test","title":"Final Test","text":"Now we are ready to probe the full GPU and get all the statistics of what is in the GPU.
nvidia-smi\n
Thu Jul 22 22:59:36 2021 \n+-----------------------------------------------------------------------------+\n| NVIDIA-SMI 470.57.02 Driver Version: 470.57.02 CUDA Version: 11.4 |\n|-------------------------------+----------------------+----------------------+\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n| | | MIG M. |\n|===============================+======================+======================|\n| 0 NVIDIA GeForce ... Off | 00000000:09:00.0 Off | N/A |\n| 41% 36C P8 2W / 260W | 283MiB / 11016MiB | 0% Default |\n| | | N/A |\n+-------------------------------+----------------------+----------------------+\n\n+-----------------------------------------------------------------------------+\n| Processes: |\n| GPU GI CI PID Type Process name GPU Memory |\n| ID ID Usage |\n|=============================================================================|\n| 0 N/A N/A 1327 G /usr/lib/xorg/Xorg 18MiB |\n| 0 N/A N/A 1398 G /usr/bin/gnome-shell 71MiB |\n| 0 N/A N/A 1574 G /usr/lib/xorg/Xorg 98MiB |\n| 0 N/A N/A 1705 G /usr/bin/gnome-shell 91MiB |\n+-----------------------------------------------------------------------------+\n
If you don't get this or a similar display, you must continue to search for installation instructions.
After you get this screen you can reboot.
"},{"location":"admin/nvidia-driver-install/#cuda-version","title":"CUDA Version","text":"nvcc --version\n
Results:
nvcc: NVIDIA (R) Cuda compiler driver\nCopyright (c) 2005-2017 NVIDIA Corporation\nBuilt on Fri_Nov__3_21:07:56_CDT_2017\nCuda compilation tools, release 9.1, V9.1.85\n
"},{"location":"admin/nvidia-driver-install/#cuda-tookkit-install-for-pytorch","title":"CUDA Tookkit Install for PyTorch","text":"conda install cudatoolkit=<CUDA Version> -c pytorch\n
conda install cudatoolkit=11 -c pytorch\n
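After the CUDA toolkit is installed for PyTorch, a quick sanity check from Python confirms that PyTorch can actually see the GPU. This is just a sketch, assuming PyTorch is already installed in the same conda environment:

```python
# Quick check that PyTorch can see the NVIDIA GPU
import torch

print('CUDA available:', torch.cuda.is_available())
if torch.cuda.is_available():
    print('Device count:', torch.cuda.device_count())
    print('Device name: ', torch.cuda.get_device_name(0))
```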
"},{"location":"admin/tensorflow-gpu-install/","title":"Tensorflow GPU Software","text":"$ conda install tensorflow-gpu==2.2.0 Collecting package metadata (current_repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Collecting package metadata (repodata.json): done Solving environment: done
"},{"location":"admin/tensorflow-gpu-install/#package-plan","title":"Package Plan","text":"environment location: /home/arl/miniconda3/envs/donkey
added / updated specs: - tensorflow-gpu==2.2.0
The following packages will be downloaded:
package | build\n---------------------------|-----------------\ncudatoolkit-10.1.243 | h6bb024c_0 347.4 MB\ncudnn-7.6.5 | cuda10.1_0 179.9 MB\ncupti-10.1.168 | 0 1.4 MB\ntensorflow-2.2.0 |gpu_py37h1a511ff_0 4 KB\ntensorflow-base-2.2.0 |gpu_py37h8a81be8_0 181.7 MB\ntensorflow-gpu-2.2.0 | h0d30ee6_0 3 KB\n------------------------------------------------------------\n Total: 710.4 MB\n
The following NEW packages will be INSTALLED:
cudatoolkit pkgs/main/linux-64::cudatoolkit-10.1.243-h6bb024c_0 cudnn pkgs/main/linux-64::cudnn-7.6.5-cuda10.1_0 cupti pkgs/main/linux-64::cupti-10.1.168-0 tensorflow-gpu pkgs/main/linux-64::tensorflow-gpu-2.2.0-h0d30ee6_0
The following packages will be DOWNGRADED:
_tflow_select 2.3.0-mkl --> 2.1.0-gpu tensorflow 2.2.0-mkl_py37h6e9ce2d_0 --> 2.2.0-gpu_py37h1a511ff_0 tensorflow-base 2.2.0-mkl_py37hd506778_0 --> 2.2.0-gpu_py37h8a81be8_0
Proceed ([y]/n)? Y
Downloading and Extracting Packages tensorflow-base-2.2. | 181.7 MB | ################################################################################################################################################################ | 100% cudnn-7.6.5 | 179.9 MB | ################################################################################################################################################################ | 100% cupti-10.1.168 | 1.4 MB | ################################################################################################################################################################ | 100% tensorflow-2.2.0 | 4 KB | ################################################################################################################################################################ | 100% tensorflow-gpu-2.2.0 | 3 KB | ################################################################################################################################################################ | 100% cudatoolkit-10.1.243 | 347.4 MB | ################################################################################################################################################################ | 100% Preparing transaction: done Verifying transaction: done Executing transaction: done
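Once the conda install finishes, you can verify that TensorFlow sees the GPU before you start training a model. A minimal check, run inside the same donkey conda environment:

```python
# Verify that TensorFlow 2.x can see the GPU
import tensorflow as tf

print('TensorFlow version:', tf.__version__)
gpus = tf.config.list_physical_devices('GPU')
print('GPUs visible to TensorFlow:', gpus)
```

If the list is empty, the GPU driver, CUDA toolkit, or cuDNN versions are usually the first things to recheck.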
"},{"location":"events/","title":"Introduction to AI Racing League Events","text":""},{"location":"events/#selecting-a-venue","title":"Selecting a Venue","text":"The area should be at least a 24'x'36 room. A school gymnasium is ideal. Only about 1/2 of a full-size basketball court works well.
"},{"location":"events/#getting-a-track","title":"Getting a Track","text":"You should have a track that can support multiple teams driving at the same time.
See our Setup Track Options for details on how to create or purchase a track.
"},{"location":"events/#learning-stations","title":"Learning Stations","text":"Our events have 4-6 \"learning stations\" where participants can learn about the AI Racing League.
Typical tables include:
This table shows participants what hardware is used in the events.
Question: Have you ever used Jupyter Notebooks?
They are used by data scientists to manage data analysis projects.
Drive data is a collection of images, each with the speed and steering direction associated with it.
How would you use them to analyze your drive data?
"},{"location":"events/python/","title":"Python Table","text":"At this table, we usually have a single computer running a Python IDE. The users come up to the table and are invited to change a few lines of code and see what changes the cause.
A good example is to have a Raspberry Pi running Thonny that changes colors on an LED strip. We use a Raspberry Pi Pico to demonstrate this.
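Below is the kind of short MicroPython sketch we might show at this table. It assumes a WS2812 (NeoPixel) LED strip wired to GPIO 0 of the Pico and a MicroPython firmware build that includes the neopixel module; students can change the color values and immediately see the result:

```python
# MicroPython sketch for a Raspberry Pi Pico driving a WS2812 LED strip
# Assumes the strip's data line is on GPIO 0 and the strip has 8 LEDs
import time
from machine import Pin
from neopixel import NeoPixel

NUM_LEDS = 8
strip = NeoPixel(Pin(0), NUM_LEDS)

colors = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]  # red, green, blue -- try changing these!

while True:
    for color in colors:
        for i in range(NUM_LEDS):
            strip[i] = color
        strip.write()
        time.sleep(0.5)
```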
We also have some information cards that talk about why Python has become the preferred language of AI and Machine Learning.
"},{"location":"events/unix/","title":"UNIX Table","text":"Here is a set of suggested lesson plans to teach a 14-week college-level course built around the AI Racing League.
This course is designed as a multi-disciplinary course that will draw on subjects in computer science, electrical engineering and social science.
"},{"location":"lesson-plans/#background-and-motivation","title":"Background and Motivation","text":"The AI Racing League project aims to provide students with hands-on experience in the fields of machine learning, computer vision, and robotic control. The project will involve developing autonomous vehicles (Donkey Cars) capable of competing in a racing environment. This initiative is motivated by the increasing relevance of AI in various aspects of technology and the need for practical, real-world application skills in our graduates.
"},{"location":"lesson-plans/#learning-objectives","title":"Learning Objectives","text":"After taking this course, students will be able to design, build, and test autonomous racing vehicles using Donkey Cars. Students will learn to apply machine learning algorithms, computer vision techniques, and hardware control to enable these vehicles to navigate a racetrack autonomously. The objective is to prepare students for careers in AI and robotics by providing a comprehensive learning experience that includes both theory and practice.
"},{"location":"lesson-plans/#student-deliverables","title":"Student Deliverables","text":"Students will work in small teams of 3 to 6 students. Ideally, each team will have their own car. Students will be graded on their teamwork.
Each team will produce a GitHub repository of their team's work. The GitHub repositories will include their configuration parameters, Jupyter notebooks and documentation on any modifications they have made to the cars.
"},{"location":"lesson-plans/#equipment-required","title":"Equipment Required","text":"The first module is a baseline to get all students up to speed using our development tools.
Hands-On Lab: Get familiar with car hardware. Assemble a new car if needed.
"},{"location":"lesson-plans/#module-2-calibration","title":"Module 2: Calibration","text":"Calibration
"},{"location":"lesson-plans/#module-3-gathering-data","title":"Module 3: Gathering Data","text":"Gathering Data
"},{"location":"lesson-plans/#module-4-building-a-model","title":"Module 4: Building a Model","text":"Building a Model
"},{"location":"lesson-plans/#module-5-using-a-model-for-automatic-drive","title":"Module 5: Using a Model for Automatic Drive","text":"Driving with a Model
"},{"location":"lesson-plans/#module-6-analyzing-data","title":"Module 6: Analyzing Data","text":"Analyzing Data
"},{"location":"lesson-plans/#module-7-tuning-performance","title":"Module 7: Tuning Performance","text":"Tuning Performance
"},{"location":"lesson-plans/#module-8-managing-the-gpu","title":"Module 8: Managing the GPU","text":"GPU Configuration
"},{"location":"lesson-plans/#module-9-purchasing-a-gpu","title":"Module 9: Purchasing a GPU","text":"Using on-line tools to configure a low-cost GPU PC tuned for machine learning training workloads.
"},{"location":"lesson-plans/#module-10-advanced-topics","title":"Module 10: Advanced Topics","text":""},{"location":"lesson-plans/#3d-printing-a-chassis","title":"3D Printing a chassis","text":""},{"location":"lesson-plans/#adding-a-lidar","title":"Adding a LIDAR","text":""},{"location":"lesson-plans/#lowering-the-cost-of-the-donkey-car","title":"Lowering the Cost of the Donkey Car","text":"Differential drives
"},{"location":"lesson-plans/building-a-model/","title":"Building A Model","text":""},{"location":"lesson-plans/calibration/","title":"Donkey Car Calibration","text":"We will be following the same steps outlined in the Donkey Car Docs
To run calibration, connect an HDMI monitor to the Pi and then boot. You will also need a keyboard, a mouse, and a charged 7.2-volt battery for calibration.
We would like students to work in pairs to calibrate each car and record the configuration parameters in a Python configuration file. Some of the cars are in a plastic bin and these bins sometimes have a card with the right configuration parameters on them.
Our cars all use the PCA9685 I2C to PWM chip. We don't have the Donkey Car Hats.
Make sure the power LED on the PCA9685 is on. If not, check the power and ground connections to the Raspberry Pi SBC 40-pin connector.
Note
Some of the sample configuration files use an older format. Make sure you ONLY modify the configuration files that are generated by the command that creates the mycar files.
There is an on/off switch on the Electronic Speed Control. Make sure it is in the ON position and that the motor battery is charged and connected.
When you turn the switch on you MUST hear the confirmation beep. If you don't hear it you need to check the battery and connections.
donkey calibrate --channel <your_steering_channel> --bus=<your_i2c_bus>\n
By default, our cars use channel 0 for the throttle and channel 1 for the steering. If the team that assembled the car switched the connections on the PWM card these may be reversed.
The key parameters to record are:
Throttle:
Steering
These should be placed online in a file that is associated with each car name or number.
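As an illustration of where these numbers end up, here is the kind of calibration section you might record in the car's myconfig.py. The parameter names follow the flat style used by older Donkey Car releases (newer releases group them slightly differently), and the PWM numbers below are examples only; use the values you measured for your own car:

```python
# Example calibration values recorded in ~/mycar/myconfig.py
# (the numbers below are illustrative -- record your own car's measured values)
STEERING_CHANNEL = 1            # PCA9685 channel for the steering servo
STEERING_LEFT_PWM = 460         # PWM value for full left
STEERING_RIGHT_PWM = 290        # PWM value for full right

THROTTLE_CHANNEL = 0            # PCA9685 channel for the ESC
THROTTLE_FORWARD_PWM = 500      # PWM value for maximum forward throttle
THROTTLE_STOPPED_PWM = 370      # PWM value for neutral / stopped
THROTTLE_REVERSE_PWM = 220      # PWM value for maximum reverse throttle
```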
"},{"location":"lesson-plans/calibration/#challenges-setting-reverse","title":"Challenges Setting Reverse","text":"Reverse on RC cars is a little tricky because the ESC must receive the following sequence:
To calibrate a reverse PWM setting...
Use the same technique as above
Here is the basic command to edit the config file:
nano ~/mycar/myconfig.py\n
The Raspberry Pis also have Python IDEs installed, so you can use those editors as well. This lesson focuses on getting students familiar with the parts of the Donkey Car, including:
Generating architecture diagrams can be a slow and painful process, but ChatGPT can help. Here is a process for generating a block diagram using ChatGPT. We can use the Mermaid Live program to view the diagram. ChatGPT also has a diagram
plugin that will render the image within ChatGPT.
Create a mermaid block diagram for the structure of a Donkey Car. Make sure to label all the components and their connection types. Return the results in raw mermaid format. Include the following structures:\n\n1. The main computer (Raspberry Pi or NVIDIA Nano)\n2. The main computer has an I2C connection to the Servo Controller board\n3. The Servo Controller connects to the Steering Servo and the Electronic Speed Controller (ESC) via PWM connections\n4. The ESC connects to the RC Car Battery\n5. THE ESC connects to the Car Motor\n6. The 5v battery powers the main computer\n7. There are two batteries: a 5V battery and a 7V RC Car Battery.\n8. The Servo controller is a PCA9685 board.\n
"},{"location":"lesson-plans/car-structure/#chatgpt-response","title":"ChatGPT Response","text":"This code will create a block diagram where:\n\nThe \"Main Computer\" (A) connects to the \"Servo Controller\" (B) via an I2C connection.\nThe \"Servo Controller\" (B) connects to both the \"Steering Servo\" (C) and \"Electronic Speed Controller\" (D) via PWM connections.\nThe \"Electronic Speed Controller\" (D) connects to both the \"RC Car Battery\" (E) and the \"Car Motor\" (F).\nThe \"5V Battery\" (G) connects to the \"Main Computer\" (A) as well as to the \"Positive Connection\" (H) and \"Negative Connection\" (I).\nThe \"RC Car Battery\" (E) also connects to both the \"Positive Connection\" (H) and \"Negative Connection\" (I).\nPlease note that this is a simplified diagram and does not necessarily represent the physical arrangement of the components in an actual Donkey Car setup. This diagram should be taken as an abstract representation of how the components interact.\n
"},{"location":"lesson-plans/car-structure/#mermaid-diagram","title":"Mermaid Diagram","text":"graph TB\n A[\"Main Computer<br>Raspberry Pi or NVIDIA Nano\"]\n B[\"Servo Controller<br>PCA9685 board\"]\n C[\"Steering Servo\"]\n D[\"Electronic Speed Controller<br>(ESC)\"]\n E[\"RC Car Battery (7V)\"]\n F[\"Car Motor\"]\n G[\"5V Battery\"]\n\n A --I2C--> B\n B --PWM--> C\n B --PWM--> D\n D --- E\n D --- F\n G --- A\n
"},{"location":"lesson-plans/car-structure/joystick/","title":"The F710 Joystick","text":""},{"location":"lesson-plans/car-structure/joystick/#joystick-options","title":"Joystick Options","text":""},{"location":"lesson-plans/car-structure/joystick/#the-logitech-f710","title":"The Logitech F710","text":""},{"location":"lesson-plans/car-structure/joystick/#diagnostics","title":"Diagnostics","text":""},{"location":"lesson-plans/car-structure/joystick/#debugging","title":"Debugging","text":"Adding a Joystick to Your Donkey Car
"},{"location":"lesson-plans/computer-vision/","title":"AI Racing League Computer Vision Table","text":"Raspberry Pi and the NVIDIA Nano are popular systems for demonstrating various computer vision applications due to their affordability and flexibility.
"},{"location":"lesson-plans/computer-vision/#requirements","title":"Requirements","text":"For these lessons, you just need a Raspberry Pi (or Nano) and the attached Camera that we use for all our cars.
"},{"location":"lesson-plans/computer-vision/#sample-labs","title":"Sample Labs","text":"Here are several demos we show to high school students using OpenCV and Raspberry Pi:
"},{"location":"lesson-plans/computer-vision/#face-detection-and-recognition","title":"Face Detection and Recognition","text":"We can use the built-in Haar cascades in OpenCV for face and eyes detection. For the face recognition part, you can use either OpenCV's built-in algorithms or deep learning-based models such as FaceNet.
"},{"location":"lesson-plans/computer-vision/#object-detection","title":"Object Detection","text":"Use pre-trained models from OpenCV's DNN module or TensorFlow's model zoo to recognize multiple objects in real-time.
"},{"location":"lesson-plans/computer-vision/#optical-character-recognition-ocr","title":"Optical Character Recognition (OCR):","text":"Combine OpenCV for image processing and Tesseract for character recognition to demonstrate how a device can read text from images or real-time video feed.
"},{"location":"lesson-plans/computer-vision/#color-detection","title":"Color Detection","text":"Write a simple program that detects specific colors in real-time. This can be used as a stepping stone to more advanced object-tracking projects.
We can also combine this lab with our Raspberry Pi Pico color detection sensors.
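Here is a minimal color-detection sketch using OpenCV's HSV color space. It assumes a webcam on device 0 and tracks blue objects; the HSV bounds are rough starting values for students to tweak:

```python
# Detect blue objects in a live webcam feed using an HSV mask
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                # default camera
lower_blue = np.array([100, 150, 50])    # rough HSV lower bound for blue
upper_blue = np.array([130, 255, 255])   # rough HSV upper bound for blue

while True:
    ret, frame = cap.read()
    if not ret:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_blue, upper_blue)
    result = cv2.bitwise_and(frame, frame, mask=mask)

    cv2.imshow('Original', frame)
    cv2.imshow('Blue only', result)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```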
"},{"location":"lesson-plans/computer-vision/#motion-detection-and-tracking","title":"Motion Detection and Tracking","text":"Implement a simple surveillance system that detects motion and tracks moving objects. This can be a good introduction to video analysis.
"},{"location":"lesson-plans/computer-vision/#augmented-reality","title":"Augmented Reality","text":"Show how to overlay graphics on a real-time video feed based on detected features. For example, you can use OpenCV's capabilities for feature detection (like SIFT, SURF, ORB) and perspective transformation to overlay 3D objects on a marker.
"},{"location":"lesson-plans/computer-vision/#hand-gesture-recognition","title":"Hand Gesture Recognition","text":"Create a program that recognizes hand gestures and associates them with commands. You could use this to control a game or navigate a user interface.
"},{"location":"lesson-plans/computer-vision/#license-plate-recognition","title":"License Plate Recognition","text":"You can implement a simple Automatic Number Plate Recognition (ANPR) system using image processing techniques in OpenCV and OCR.
"},{"location":"lesson-plans/computer-vision/#qr-code-and-barcode-scanner","title":"QR Code and Barcode Scanner","text":"Use OpenCV for real-time detection and decoding of QR codes and bar codes.
Most of these demonstrations will require additional Python libraries beyond just OpenCV, like NumPy, Pillow, or TensorFlow.
For hardware, you will need the Raspberry Pi 3 with 4GB RAM, a camera module, and potentially additional items like a monitor, mouse, and keyboard for a fully interactive setup.
"},{"location":"lesson-plans/computer-vision/face-filters/","title":"OpenCV Lab: Fun Face Filters**","text":"Objective: Learn to detect faces using OpenCV and apply fun filters like cat whiskers and ears.
"},{"location":"lesson-plans/computer-vision/face-filters/#tools-libraries","title":"Tools & Libraries","text":"Haar Cascades are a type of machine learning object detection method used to identify objects in images or video. In OpenCV, Haar Cascades are mainly used to detect faces, but they can also identify other objects like eyes, smiles, and more. There were named after Hungarian mathematician Alfr\u00e9d Haar who make key contributions to the mathematics of transformation of matrix data in the 1920s.
Imagine you have a magic magnifying glass that you move across a photo. Whenever this magnifying glass sees a face, it lights up! That's kind of what Haar Cascades does in OpenCV.
"},{"location":"lesson-plans/computer-vision/face-filters/#how-do-harr-cascades-work","title":"How Do Harr Cascades Work?","text":"Creating a Snapchat-like face filter using Haar Cascades involves two main steps:
Example Python Code:
```python import cv2
"},{"location":"lesson-plans/computer-vision/face-filters/#load-the-haar-cascade-for-face-detection","title":"Load the Haar cascade for face detection","text":"face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
"},{"location":"lesson-plans/computer-vision/face-filters/#read-the-image","title":"Read the image","text":"img = cv2.imread('your_photo.jpg') gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
"},{"location":"lesson-plans/computer-vision/face-filters/#detect-faces","title":"Detect faces","text":"faces = face_cascade.detectMultiScale(gray, 1.1, 4)
for (x, y, w, h) in faces: # Here you'd typically overlay your filter, e.g., cat ears at coordinates (x, y) cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2) # Drawing a rectangle around the detected face for now
cv2.imshow('Face Detection', img) cv2.waitKey()
Resources: - Haarcascades for face and eye detection (provided by OpenCV).
Steps:
Setup & Installation
Make sure you have Python and OpenCV installed.
bash pip install opencv-python
Face Detection
Before adding filters, students should understand face detection.
```python\nimport cv2\n\nface_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')\n\nimg = cv2.imread('path_to_image.jpg')\ngray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n\nfaces = face_cascade.detectMultiScale(gray, 1.1, 4)\nfor (x, y, w, h) in faces:\n cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)\n\ncv2.imshow('Detected Faces', img)\ncv2.waitKey()\n```\n
Designing the Filters
Use any graphic software (like GIMP or Photoshop) to design transparent PNG images of cat whiskers and ears.
Ensure the filter graphics are adjustable in size or make several versions to fit different face sizes.
Applying the Cat Ears Filter
Given the coordinates (x, y, w, h)
of the detected face:
```python ear_image = cv2.imread('path_to_ear_image.png', -1) # The -1 reads the alpha channel
Applying the Whiskers Filter
Similar to the cat ears, you can position whiskers on the cheeks using the (x, y, w, h)
of the face.
Real-time Filters Application
Capture video from the webcam and apply the filters in real-time.
```python cap = cv2.VideoCapture(0)
while True: ret, frame = cap.read() gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.1, 4)\nfor (x, y, w, h) in faces:\n # Apply cat ears and whiskers filter here\n\ncv2.imshow('Filters in Action', frame)\nif cv2.waitKey(1) & 0xFF == ord('q'):\n break\n
cap.release() cv2.destroyAllWindows() ```
Challenge & Extension
Assessment:
Note: This is a basic version and doesn't incorporate sophisticated technologies like deep learning which Snapchat uses. It's aimed at high school students to give a foundational understanding.
"},{"location":"lesson-plans/computer-vision/face-filters/#resize-the-cat-ear-image-to-fit-the-width-of-the-face","title":"Resize the cat ear image to fit the width of the face","text":"ear_width = w aspect_ratio = ear_image.shape[1] / float(ear_image.shape[0]) ear_height = int(ear_width / aspect_ratio) ear_image = cv2.resize(ear_image, (ear_width, ear_height))
"},{"location":"lesson-plans/computer-vision/face-filters/#region-where-we-want-to-place-the-cat-ears-taking-care-not-to-exceed-image-dimensions","title":"Region where we want to place the cat ears (taking care not to exceed image dimensions)","text":"for c in range(0, 3): img[y:y+ear_image.shape[0], x:x+ear_image.shape[1], c] = img[y:y+ear_image.shape[0], x:x+ear_image.shape[1], c] * (1 - ear_image[:, :, 3] / 255.0) + ear_image[:, :, c] * (ear_image[:, :, 3] / 255.0) ```
"},{"location":"lesson-plans/computer-vision/open-cv/","title":"OpenCV Labs","text":"Lesson Plan: Introduction to OpenCV in Python
"},{"location":"lesson-plans/computer-vision/open-cv/#setup","title":"Setup","text":"Students should have access to a computer with a webcam and Python installed. We use Rasberry Pi or NIVIDA Nanos with cameras.
Objective: By the end of the lesson, students will be able to explain the purpose and basic functions of OpenCV, and implement some basic image processing tasks.
"},{"location":"lesson-plans/computer-vision/open-cv/#1-introduction-10-minutes","title":"1. Introduction (10 minutes)","text":"Demo: How to install OpenCV using pip
.
pip install opencv-python
Quick navigation of the OpenCV documentation to encourage self-learning.
Objective: The goal of this lab is to have students use OpenCV to create basic filters for a live webcam feed.
Exercise:
In these lessons, we learn how to write some basic data analysis Python programs.
In the real world, you are often given some data and people ask us \"Tell me what insights you can give me about this data.\" This forms the basis of a field of data science called \"EDA\" for \"Electronic Data Analysis\".
For example, say you are on a project to get cars to drive using machine learning. What insights can you derive from the sample images and driving data?
"},{"location":"lesson-plans/data-analysis/#numpy-profiler","title":"Numpy Profiler","text":"TBD
"},{"location":"lesson-plans/data-analysis/01-intro/","title":"AI Racing League Data Analysis","text":""},{"location":"lesson-plans/data-analysis/01-intro/#why-analysis","title":"Why Analysis?","text":"Data analysis is a core part of building accurate models that create high quality predictions. Here are some sample analytics tasks:
import os\nfrom IPython.display import Image\n\nimage_dir = \"/home/arl/mycar/data/dans-msp/data/images\"\n\nfiles = os.listdir(image_dir)\n# last basement image is 1710\nn = 1710\nfile_n = files[n]\nfile_2 = files[n+1]\nprint(n, file_n)\nfile_path1 = image_dir + '/' + file_n\nfile_path2 = image_dir + '/' + file_2\ni1 = Image(file_path1)\ni2 = Image(file_path2)\nprint(n+1, file_2)\n\ndisplay(i1, i2)\n
"},{"location":"lesson-plans/data-analysis/02-listing-files/","title":"Working with Files","text":""},{"location":"lesson-plans/data-analysis/02-listing-files/#listing-files-with-the-os-library","title":"Listing Files with the OS library","text":"Python provides a powerful library for working with Operating System resources like file systems. We will start out with the listdir()
function that lists the files in a directory.
Here is program that lists all the tub files in our mycar/data directory:
import os\ndata_dir = \"/home/arl/mycar/data/\"\n\nfiles = os.listdir(data_dir)\nfor file in files:\n print(file)\n
returns:
a-test-tub\nmy-test-tub\njunk-tub\nproduction-run\ntub-47\n
"},{"location":"lesson-plans/data-analysis/02-listing-files/#listing-files-in-a","title":"Listing Files in a","text":"import os\nfrom IPython.display import Image\n\nimage_dir = \"/home/arl/mycar/data/dans-msp/data/images\"\n\nfiles = os.listdir(image_dir)\n# last basement image is 1710\nn = 1710\nfile_n = files[n]\nfile_2 = files[n+1]\nprint(n, file_n)\nfile_path1 = image_dir + '/' + file_n\nfile_path2 = image_dir + '/' + file_2\ni1 = Image(file_path1)\ni2 = Image(file_path2)\nprint(n+1, file_2)\n\ndisplay(i1, i2)\n
"},{"location":"lesson-plans/data-analysis/02-listing-files/#list-random-files-in-images-directory","title":"List Random Files In Images Directory","text":"import os\nimport matplotlib.pyplot as plt\nfrom IPython.display import Image\n\nimage_dir = \"/home/arl/mycar/data/dans-msp/data/images\"\nimage_file_name_list = os.listdir(image_dir)\n\nfor index in range(0,10):\n file_name = image_file_name_list[index]\n print(file_name)\n
returns:
10263_cam_image_array_.jpg\n6257_cam_image_array_.jpg\n15248_cam_image_array_.jpg\n3916_cam_image_array_.jpg\n5223_cam_image_array_.jpg\n15765_cam_image_array_.jpg\n8437_cam_image_array_.jpg\n5871_cam_image_array_.jpg\n16681_cam_image_array_.jpg\n15281_cam_image_array_.jpg\n
Note that the files are not in any specific order.
"},{"location":"lesson-plans/data-analysis/02-listing-files/#show-images-for-10-random-files","title":"Show Images for 10 Random Files","text":"import glob\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\n%matplotlib inline\n\nimages = []\nfor img_path in glob.glob('/home/arl/mycar/data/dans-msp/data/images/*.jpg'):\n images.append(mpimg.imread(img_path))\n\nplt.figure(figsize=(20,10))\ncolumns = 5\nfor i, image in enumerate(images):\n plt.subplot(len(images) / columns + 1, columns, i + 1)\n plt.imshow(image)\n
"},{"location":"lesson-plans/data-analysis/02-listing-files/#sorting-images-by-file-name","title":"Sorting Images By File Name","text":"We can add an additional step if we want to sort the images by the file name:
image_file_name_list = os.listdir(image_dir)\nimage_file_name_list.sort()\n
"},{"location":"lesson-plans/data-analysis/02-listing-files/#return-images-based-on-creation-date","title":"Return Images Based On Creation Date","text":"import os\nimport matplotlib.pyplot as plt\nfrom IPython.display import Image\nfrom pathlib import Path\n\nimage_dir = \"/home/arl/mycar/data/dans-msp/data/images\"\npaths = sorted(Path(image_dir).iterdir(), key=os.path.getmtime)\n\nimages = []\n# just get the first 10 items in the list of images\nfor path in paths[:10]:\n print(path)\n images.append(mpimg.imread(path))\n\nplt.figure(figsize=(20,10))\ncolumns = 5\nfor i, image in enumerate(images):\n plt.subplot(len(images) / columns + 1, columns, i + 1)\n plt.imshow(image)\n
"},{"location":"lesson-plans/data-analysis/03-viewing-images/","title":"Viewing Images","text":""},{"location":"lesson-plans/data-analysis/03-viewing-images/#viewing-a-single-jpg-image","title":"Viewing a Single JPG Image","text":""},{"location":"lesson-plans/data-analysis/04-viewing-catalog-files/","title":"Viewing Catalog Files","text":"The data about each image, sometimes called the image \"metadata\", is stored in a file that ends with the file extension .catalog. If you open these files, you will see a simple layout that looks like the following:
{'_index': 16000, '_session_id': '21-07-20_0', '_timestamp_ms': 1626797545360, 'cam/image_array': '16000_cam_image_array_.jpg', 'user/angle': 1.0, 'user/mode': 'user', 'user/throttle': 0.5}\n{'_index': 16001, '_session_id': '21-07-20_0', '_timestamp_ms': 1626797545411, 'cam/image_array': '16001_cam_image_array_.jpg', 'user/angle': 0.37, 'user/mode': 'user', 'user/throttle': 0.7}\n{'_index': 16002, '_session_id': '21-07-20_0', '_timestamp_ms': 1626797545460, 'cam/image_array': '16002_cam_image_array_.jpg', 'user/angle': -0.23, 'user/mode': 'user', 'user/throttle': 0.25}\n
This file consists of multiple lines, each line starts and ends with curly braces \"{\" and \"}\". Within these curly braces are a set of key-value pairs where the label is a string in single quotes followed by a colon, the value and a comma. This file uses newlines to separate records and a JSON object format within each single line. Note this is NOT a full JSON file format so you can't just use a standard JSON library to read the catalog file.
Here is that format with a the key and value on separate lines to make the line easier to read.
{\n '_index': 16000,\n '_session_id': '21-07-20_0',\n '_timestamp_ms': 1626797545360,\n 'cam/image_array':\n '16000_cam_image_array_.jpg',\n 'user/angle': 1.0,\n 'user/mode': 'user',\n 'user/throttle': 0.5\n}\n
This format is very similar to a JSON file with the following exceptions:
Here is what a properly formatted JSON file would look like:
{\n \"driveData\": [\n {\n \"_index\": 16000,\n \"_session_id\": \"21-07-20_0\",\n \"_timestamp_ms\": 1626797545360,\n \"cam/image_array\": \"16000_cam_image_array_.jpg\",\n \"user/angle\": 1.0,\n \"user/mode\": \"user\",\n \"user/throttle\": 0.5\n },\n {\n \"_index\": 16001,\n \"_session_id\": \"21-07-20_0\",\n \"_timestamp_ms\": 1626797545411,\n \"cam/image_array\": \"16001_cam_image_array_.jpg\",\n \"user/angle\": 0.37,\n \"user/mode\": \"user\",\n \"user/throttle\": 0.70\n },\n {\n \"_index\": 16002,\n \"_session_id\": \"21-07-20_0\",\n \"_timestamp_ms\": 1626797545460,\n \"cam/image_array\": \"16002_cam_image_array_.jpg\",\n \"user/angle\": -0.23,\n \"user/mode\": \"user\",\n \"user/throttle\": 0.25\n }\n ]\n}\n
Here is a sample JSON file reader that would read this file:
# program to read a DonkeyCar Catalog File\nimport os,json\n\n# this program assumes that test.json is in the same directory as this script\n# get the direcotry that this script is running\nscript_dir = os.path.dirname(__file__)\n# get a relative path to the script dir\npath_to_json_file = script_dir + '/test.json'\n\n# Open the JSON test file for read only\nf = open(path_to_json_file, 'r')\n\n# returns JSON object as a dictionary\ndata = json.load(f)\n\n# Iterating through the json file for the items in the drive data dictionary\nfor i in data['driveData']:\n print(i)\n\n# Close the JSON file\nf.close()\n
Note that the open() function reads the file with the \"r\" option which indicates read-only mode.
Although this format would make reading the file simple, there are some disadvantages. The key is that individual lines in the new catalog format are atomic units of storage and the files can be easily split and joined using line-by-line tools.
"},{"location":"lesson-plans/data-analysis/04-viewing-catalog-files/#reading-catalog-lines-to-json-objects","title":"Reading Catalog Lines to JSON Objects","text":"To read in the values of the catalog file we will open using a line-oriented data structure assuming that there is a newline at the end of each record. We can then just the json library's loads()
function which will convert each line to a JSON object.
Sample Objects.json file:
{\"name\":\"Ann\",\"age\":15}\n{\"name\":\"Peggy\",\"age\":16}\n{\"name\":\"Rima\",\"age\":13}\n{\"name\":\"Sue\",\"age\":14}\n
import os, json \njson_file = \"objects.json\"\nscript_dir = os.path.dirname(__file__)\n\n# get a relative path to the script dir\npath_to_catalog_file = script_dir + '/' + json_file\n\nf = open(path_to_catalog_file)\nlines = f.readlines()\n\ncount = 0\n# Convert each line to a JSON object\nfor line in lines:\n line_in_json = json.loads(line)\n count += 1\n print(count, ' ', end='')\n print(line_in_json)\n # the result is a Python dictionary\n print(line_in_json['name'])\n print(\"Name:\", line_to_json[\"name\"] )\n print(\"Age:\", line_to_json[\"age\"] )\n
Returns
1 {'name': 'Ann', 'age': 15}\nName: Ann\nAge: 15\n2 {'name': 'Peggy', 'age': 16}\nName: Peggy\nAge: 16\n3 {'name': 'Rima', 'age': 13}\nName: Rima\nAge: 13\n4 {'name': 'Sue', 'age': 14}\nName: Sue\nAge: 14\n
"},{"location":"lesson-plans/data-analysis/04-viewing-catalog-files/#sample-catalog-reader-program","title":"Sample CataLog Reader Program","text":"# program to read a DonkeyCar Catalog File\nimport os,json\n\n# this program assumes that test.catalog is in the same directory as this script\n# get the direcotry that this script is running\nscript_dir = os.path.dirname(__file__)\n\n# get a relative path to the script dir\npath_to_catalog_file = script_dir + '/test.catalog'\n\nf = open(path_to_catalog_file)\nlines = f.readlines()\n\ncount = 0\n# Convert each line to a JSON object\nfor line in lines:\n # each line as a JSON dictionary object\n j = json.loads(line)\n count += 1\n print('\\n\\nline:', count)\n # print(j)\n print(\"Index:\", j[\"_index\"] )\n print(\"Session:\", j[\"_session_id\"] )\n print(\"Timestamp:\", j[\"_timestamp_ms\"] )\n print(\"cam/image_array:\", j[\"cam/image_array\"] )\n print(\"user/angle:\", j[\"user/angle\"] )\n print(\"user/mode:\", j[\"user/mode\"] )\n print(\"user/throttle:\", j[\"user/throttle\"] )\n
returns:
line: 1\nIndex: 16000\nSession: 21-07-20_0\nTimestamp: 1626797545360\ncam/image_array: 16000_cam_image_array_.jpg\nuser/angle: 1.0\nuser/mode: user\nuser/throttle: 0.31\n\n\nline: 2\nIndex: 16001\nSession: 21-07-20_0\nTimestamp: 1626797545411\ncam/image_array: 16001_cam_image_array_.jpg\nuser/angle: 0.3715323343607898\nuser/mode: user\nuser/throttle: 0.31\n\n\nline: 3\nIndex: 16002\nSession: 21-07-20_0\nTimestamp: 1626797545460\ncam/image_array: 16002_cam_image_array_.jpg\nuser/angle: 0.2371288186284982\nuser/mode: user\nuser/throttle: 0.31\n
"},{"location":"lesson-plans/data-analysis/04-viewing-catalog-files/#reference","title":"Reference","text":"The Python class that creates version 2 of the catalog files is here
"},{"location":"lesson-plans/data-analysis/05-catalog-statistics/","title":"Catalog Statistics","text":"Now that we know how to reach each item in the tub catalog, we can now do some simple statistics on this data. For example we can calculate the average throttle and steering angle and create some plots of the distribution of these values.
"},{"location":"lesson-plans/data-analysis/05-catalog-statistics/#calculating-average-throttle-and-angle","title":"Calculating Average Throttle and Angle","text":"When we drive around the track each image records both the throttle and steering values at the instant the image was taken by the camera. Although the values sent to the Electronic Speed Controller (ESC) and the servo are unique to every car, instead we store values that have been converted to a range between 0 and 1. Both these values are Normalized to values of between 0 and 1.
# program to read a DonkeyCar Catalog File\nimport os,json\n\n# this program assumes that test.catalog is in the same directory as this script\n# get the direcotry that this script is running\nscript_dir = os.path.dirname(__file__)\n\n# get a relative path to the script dir\npath_to_catalog_file = script_dir + '/test.catalog'\n\nf = open(path_to_catalog_file)\nlines = f.readlines()\n# create a dictionary object\ndict = {}\ncount = 0\ntotal_throttle = 0\ntotal_angle = 0\n# Add each line to our dictionary\nfor line in lines:\n # each line as a JSON dictionary object\n j = json.loads(line)\n count += 1\n dict.update(json.loads(line))\n total_throttle += j[\"user/throttle\"]\n total_angle += j[\"user/angle\"]\nprint(count, \"items in dictionary\")\nprint(\"Average throttle: \", round(total_throttle/count, 3))\nprint(\"Average angle:\", round(total_angle/count, 3))\n
returns:
100 items in dictionary\nAverage throttle: 0.31\nAverage angle: 0.53\n
These values look reasonable. Our throttle should be between 0 and 1 and our average steering should be around 0.5. If we drive in a pure circle only in a single direction the average angle will be offset from the 0.5 center value.
"},{"location":"lesson-plans/data-analysis/05-catalog-statistics/#viewing-min-and-max-values","title":"Viewing Min and Max values","text":"# program to read a DonkeyCar Catalog File\nimport os,json\n\n# this program assumes that test.catalog is in the same directory as this script\n# get the direcotry that this script is running\nscript_dir = os.path.dirname(__file__)\n\n# get a relative path to the script dir\npath_to_catalog_file = script_dir + '/test.catalog'\n\nf = open(path_to_catalog_file)\nlines = f.readlines()\n# create a dictionary object\ndict = {}\ncount = 0\n\ntotal_throttle = 0\nmin_throttle = 1\nmax_throttle = 0\n\ntotal_angle = 0\nmin_angle = 1\nmax_angle = 0\n\n# Add each line to our dictionary\nfor line in lines:\n # each line as a JSON dictionary object\n j = json.loads(line)\n count += 1\n dict.update(json.loads(line))\n total_throttle += j[\"user/throttle\"]\n total_angle += j[\"user/angle\"]\n\n # check for min and max throttle\n if j[\"user/throttle\"] < min_throttle:\n min_throttle = j[\"user/throttle\"]\n if j[\"user/throttle\"] > max_throttle:\n max_throttle = j[\"user/throttle\"]\n\n if j[\"user/angle\"] < min_angle:\n min_angle = j[\"user/angle\"]\n if j[\"user/angle\"] > max_angle:\n max_angle = j[\"user/angle\"]\n\nprint('\\n')\nprint(count, \"items in catalog\")\n\nprint(\"Min throttle:\", round(min_throttle, 3))\nprint(\"Average throttle: \", round(total_throttle/count, 3))\nprint(\"Max throttle:\", round(max_throttle, 3))\n\nprint(\"Min angle:\", round(min_throttle, 3))\nprint(\"Average angle:\", round(total_angle/count, 3))\nprint(\"Max angle:\", round(max_angle, 3))\nprint('\\n')\n
returns:
100 items in catalog\nMin throttle: -0.31\nAverage throttle: 0.308\nMax throttle: 0.5\nMin angle: -0.31\nAverage angle: 0.534\nMax angle: 1.0\n
"},{"location":"lesson-plans/data-analysis/05-catalog-statistics/#converting-the-dictionary-to-a-dataframe","title":"Converting the Dictionary to a DataFrame","text":"df = pd.DataFrame(list(dict.items()))\nprint(df)\n
returns
0 1\n0 _index 16099\n1 _session_id 21-07-20_1\n2 _timestamp_ms 1626797880229\n3 cam/image_array 16099_cam_image_array_.jpg\n4 user/angle 0.56914\n5 user/mode user\n6 user/throttle 0.0632649\n
"},{"location":"lesson-plans/data-analysis/05-catalog-statistics/#plotting-steering-distributions","title":"Plotting Steering Distributions","text":""},{"location":"lesson-plans/data-analysis/06-cleaning-datasets/","title":"Cleaning Datasets","text":"Up until now, we have only been viewing metrics and files. These are all read-only operations. Now we will write our first programs that change the tub datasets.
"},{"location":"lesson-plans/data-analysis/06-cleaning-datasets/#splitting-datasets","title":"Splitting Datasets","text":"In this lab we will assume that we want to break our data into two distinct subsets and place them in different \"tubs\", which are just directories that contain both the catalogs and images for a dataset.
You can begin by taking a single dataset in a tub and then duplicating that tub. You can then selectively remove the data from the two tubs to effectively split the tubs.
The UNIX shell command to copy an entire directly is the \"cp\" command with the \"-r\" option for recursive copy.
cp -r from-dir to-dir\n
You can also add the \"-v\" option to see what files are being copied.
"},{"location":"lesson-plans/driving-with-a-model/","title":"Driving with A Model","text":""},{"location":"lesson-plans/gathering-data/","title":"Gathering Data","text":""},{"location":"lesson-plans/gathering-data/#image-size","title":"Image Size","text":""},{"location":"lesson-plans/gathering-data/#transfer-rates-to-micro-sd-cards","title":"Transfer Rates to Micro-SD Cards","text":""},{"location":"lesson-plans/gathering-data/#tools-to-monitor-write-performance","title":"Tools to Monitor Write Performance","text":""},{"location":"lesson-plans/gathering-data/#tubs","title":"Tubs","text":""},{"location":"lesson-plans/gathering-data/#monitoring-disk-space","title":"Monitoring Disk Space","text":""},{"location":"lesson-plans/github/","title":"Learning GitHub","text":""},{"location":"lesson-plans/github/#lesson-objectives","title":"Lesson Objectives","text":"The goal of this lesson is to be able to:
In this lesson, students will understand how NIVIDA GPUs are setup and configured under Linux. We will review the process of adding a GPU to a PC chassis, power and cooling requirements and the UNIX command line tools to set up, configure and test the GPU within the PC chassis.
After the GPU is set up the students will be able to test the performance of the GPU in building a machine-learning model using the Donkey Car software.
"},{"location":"lesson-plans/gpu-config/#steps-in-gpu-config","title":"Steps in GPU Config","text":""},{"location":"lesson-plans/gpu-config/#purchasing-gpu-pc-hardware","title":"Purchasing GPU PC Hardware","text":"Using part configuraiton tools
https://pcpartpicker.com/list/mrFYPX
"},{"location":"lesson-plans/gpu-config/#motherboard","title":"Motherboard","text":""},{"location":"lesson-plans/gpu-config/#ram","title":"RAM","text":""},{"location":"lesson-plans/gpu-config/#disk-drive","title":"Disk Drive","text":""},{"location":"lesson-plans/gpu-config/#gpus","title":"GPUs","text":""},{"location":"lesson-plans/gpu-config/#powersupply","title":"PowerSupply","text":""},{"location":"lesson-plans/gpu-config/#options-wireless","title":"Options - wireless","text":""},{"location":"lesson-plans/gpu-config/#downloading-a-ubuntu-image","title":"Downloading a Ubuntu Image","text":""},{"location":"lesson-plans/gpu-config/#booting-linux","title":"Booting Linux","text":""},{"location":"lesson-plans/gpu-config/#configuring-linux","title":"Configuring Linux","text":""},{"location":"lesson-plans/gpu-config/#configuring-ssh","title":"Configuring SSH","text":""},{"location":"lesson-plans/gpu-config/#configuring-the-gpu","title":"Configuring the GPU","text":"Here are the key reference documents:
This is an optional section for students that will be assisting schools that are purchasing and configuring their own GPUs.
"},{"location":"lesson-plans/jupyter-notebooks/","title":"Introduction to Jupyter Notebooks and Basic Data Analysis","text":""},{"location":"lesson-plans/jupyter-notebooks/#learning-objectives","title":"Learning Objectives","text":"By the end of this lesson, students should be able to:
Lesson 1: Introduction to Jupyter Notebooks (2 hours)
1.1 Lecture: What is a Jupyter Notebook? (30 mins)
1.2 Hands-on Activity: Getting Started with Jupyter Notebook (90 mins)
Lesson 2: Data Loading and Inspection in Jupyter Notebooks (2 hours)
2.1 Lecture: Basics of pandas (30 mins)
2.2 Hands-on Activity: Loading and Inspecting Data (90 mins)
Lesson 3: Basic Data Analysis in Jupyter Notebooks (3 hours)
3.1 Lecture: Data Analysis with pandas (30 mins)
3.2 Hands-on Activity: Basic Data Analysis (150 mins)
Lesson 4: Data Analysis of Donkey Car Project Data (3 hours)
4.1 Recap: Overview of the Donkey Car project (30 mins)
4.2 Hands-on Activity: Donkey Car Data Analysis (150 mins)
Students' understanding will be evaluated through their participation in the hands-on activities and the insights they generate from the Donkey Car project's data analysis. An end-of-unit quiz will also be provided to assess their theoretical understanding and practical skills in Jupyter Notebooks and data analysis.
"},{"location":"lesson-plans/jupyter-notebooks/#extension-activities","title":"Extension Activities","text":"In this lesson we will learn about how to do object detection using a Raspberry Pi with a Camera.
Edje Electronics Github Repo
"},{"location":"lesson-plans/software-setup/","title":"Donkey Car Software Setup","text":""},{"location":"lesson-plans/tuning-performance/","title":"Tuning Performance","text":""},{"location":"lesson-plans/tuning-performance/#objectives","title":"Objectives","text":"At this point, your car may or may not drive around the track. Now we will learn how to adjust the configuration parameters to optimize drive performance.
"},{"location":"proposal-generator/","title":"Proposal Generator","text":"The following is a proposal generator using ChatGPT.
The prompt includes a template of the format and the content.
"},{"location":"proposal-generator/#prompt","title":"Prompt","text":"Create a document in raw markdown format.\n\nUse the following template:\n\n**Title:** Faculty Sponsored Design Project Proposal\n\n**Course Name:** EE 4951 \u2013 Senior Design\n\n**Semester:** Spring 2024\n\n**Project Name:** AI Racing League\n\n**Advisor name:** Dan McCreary\n\n**Email:** Dan.McCreary@gmail.com\n\n**Phone:** 612-625-7322\n\n## Project Motivation and Background\n\n[Single Paragraph]\n\n## Project Goal and Objective\n\n[Single Paragraph]\n\n## Project Specifications\n\n[Detailed Project Specification 1 to 2 Pages]\n\n## Expected Deliverables for Final Product\n\n[Single Paragraph and Numbered Bullet List]\n\n## Technical Skills Required\n\n[Numbered Bullet List of Technical Skills]\n\n## Expected Budget\n\n[Short Description of Costs - Bullet List]\n
"},{"location":"proposal-generator/proposal/","title":"Generative AI Project Description","text":"The following was mostly generated by ChatGPT using the content and template provided. See the dialog for details.
Title: Faculty-Sponsored Design Project Proposal
Course Name: EE 4951 \u2013 Senior Design
Semester: Spring 2024
Project Name: AI Racing League
Advisor name: Dan McCreary
Email: Dan.McCreary@gmail.com
Phone: 612-625-7322
"},{"location":"proposal-generator/proposal/#project-motivation-and-background","title":"Project Motivation and Background","text":"The AI Racing League project aims to provide students with hands-on experience in the fields of machine learning, computer vision, and robotic control. The project will involve developing autonomous vehicles (Donkey Cars) capable of competing in a racing environment. This initiative is motivated by the increasing relevance of AI in various aspects of technology and the need for practical, real-world application skills in our graduates.
"},{"location":"proposal-generator/proposal/#project-goal-and-objective","title":"Project Goal and Objective","text":"The primary goal of this project is to design, build, and test autonomous racing vehicles using Donkey Cars. Students will learn to apply machine learning algorithms, computer vision techniques, and hardware control to enable these vehicles to navigate a racetrack autonomously. The objective is to prepare students for careers in AI and robotics by providing a comprehensive learning experience that includes both theory and practice.
"},{"location":"proposal-generator/proposal/#project-specifications","title":"Project Specifications","text":"Equipment:
When you use a new GPU server at an AI Racing League event there are many questions you need to have answered about your GPU server.
Here is a handy quiz you can use to check your knowledge. The answers to the questions are listed below.
"},{"location":"quizzes/gpu-shell-commands/#questions","title":"Questions","text":"Question 1: How would you log into the GPU server using the secure shell program?
A) $ login arl@arl1.local
B) $ ssh arl@arl1.local
C) $ enter arl@arl1.local
D) $ connect arl@arl1.local
Question 2: How would you check the version of Ubuntu on the GPU server?
A) $ version -a
B) $ lsb_release -a
C) $ ubuntu_version -all
D) $ check_ubuntu -a
Question 3: What information does the lscpu
command provide?
A) It provides the CPU information. B) It lists the amount of RAM on the server. C) It checks the disk space. D) It shows per-user disk usage.
Question 4: Which command is used to check the total RAM on the GPU server?
A) $ free -m
B) $ checkram -m
C) $ listram -m
D) $ raminfo -m
Question 5: What does the command df -h /
provide?
A) It lists per user disk usage. B) It adds a new GPU server user. C) It checks the disk space. D) It monitors the NVIDIA GPU.
Question 6: How can a new GPU server user be added?
A) $ adduser <username>
B) $ newuser <username>
C) $ createuser <username>
D) $ useradd <username>
Question 7: How can you give a user \"sudo\" rights?
A) $ sudo usermod -aG sudo <username>
B) $ sudo addrights -aG sudo <username>
C) $ sudo giverights -aG sudo <username>
D) $ sudo addrules -aG sudo <username>
Question 8: How can the hostname be changed?
A) $ sudo vi hostname
B) $ sudo edit hostname
C) $ sudo change hostname
D) $ sudo alter hostname
Question 9: What does the command watch -d -n 0.5 nvidia-smi
do?
A) It adds a new GPU server user. B) It runs similar to the UNIX top command, but for the GPU. C) It checks the version of Ubuntu. D) It lists the CPU information.
Question 10: How would you check the NVIDIA GPU utilization?
A) $ checkgpu
B) $ nvidia-smi
C) $ gpu-utilization
D) $ utilization nvidia
$ ssh arl@arl1.local
$ lsb_release -a
$ free -m
$ adduser <username>
$ sudo usermod -aG sudo <username>
$ sudo vi hostname
$ nvidia-smi
A school infobot is a battery powered robot that wonders the halls of a school answering questions for students such as \"Are there any coding clubs in this school?\". They are built using the following components:
Note that the School Infobot is not designed to answer general questions, only questions about the school. The content is typically taken from the schools website with additional data in the history of the school, staff and current calendar all pulled from the school's website.
"},{"location":"school-infobot/#architecture","title":"Architecture","text":"Claud Recommended Architecture
"},{"location":"school-infobot/#streaming-tools","title":"Streaming Tools","text":"Streaming Tools
"},{"location":"school-infobot/claude-arch/","title":"Claude Architecture","text":"flowchart LR\n M[Microphone] -->|Audio Stream| W[Whisper.cpp]\n W -->|Text Stream| F[FastAPI Server]\n F -->|WebSocket| O[Ollama/Llama]\n O -->|Text Stream| F\n F -->|Text Stream| C[Coqui TTS]\n C -->|Audio Stream| S[Speaker]\n\n subgraph \"Raspberry Pi\"\n M\n W\n S\n end\n\n subgraph \"School GPU Server\"\n F\n O\n C\n end\n
"},{"location":"school-infobot/claude-arch/#tools-and-implementation","title":"Tools and Implementation","text":"Here are the recommended tools and implementation approach:
"},{"location":"school-infobot/claude-arch/#1-speech-to-text-on-raspberry-pi","title":"1. Speech-to-Text (On Raspberry Pi):","text":"whisper.cpp
for real-time speech recognition, with the whisper-cpp-python package providing Python bindings
- Use `FastAPI` with WebSocket support for bidirectional streaming\n- `websockets` library for Python WebSocket client on the Raspberry Pi\n- This enables real-time streaming of both text and audio\n
"},{"location":"school-infobot/claude-arch/#3-llm-interface-gpu-server","title":"3. LLM Interface (GPU Server):","text":"- Use Ollama's Python SDK for streaming interface\n- Connect via WebSocket to enable streaming responses\n
"},{"location":"school-infobot/claude-arch/#4-text-to-speech-gpu-server","title":"4. Text-to-Speech (GPU Server):","text":"- Use Coqui TTS (formerly Mozilla TTS) for high-quality speech synthesis\n- It can run efficiently on the GPU\n- Supports streaming audio output\n
Key features of this implementation:
"},{"location":"school-infobot/claude-arch/#1-streaming-audio-processing","title":"### 1. Streaming Audio Processing:","text":"- Uses `sounddevice` for real-time audio capture\n- Processes audio in chunks for immediate response\n- Streams both input and output audio\n
"},{"location":"school-infobot/claude-arch/#2-websocket-communication","title":"2. WebSocket Communication:","text":"- Maintains persistent connection between Pi and GPU server\n- Enables bidirectional streaming of text and audio\n- Low latency for real-time conversation\n
"},{"location":"school-infobot/claude-arch/#3-asynchronous-design","title":"3. Asynchronous Design:","text":"- Uses `asyncio` for non-blocking I/O\n- Handles multiple concurrent operations efficiently\n- Maintains responsive user experience\n
"},{"location":"school-infobot/claude-arch/#python-library-installation","title":"Python Library Installation","text":"To implement this system:
"},{"location":"school-infobot/claude-arch/#on-the-raspberry-pi","title":"On the Raspberry Pi:","text":"pip install whisper-cpp-python websockets sounddevice numpy\n
"},{"location":"school-infobot/claude-arch/#on-the-gpu-server","title":"On the GPU Server:","text":"pip install fastapi uvicorn TTS ollama websockets\n
The system is designed to be modular, so you can easily:
# raspberry_pi/main.py\nimport asyncio\nimport websockets\nimport sounddevice as sd\nimport numpy as np\nfrom whisper_cpp import Whisper\n\nclass RobotClient:\n    def __init__(self):\n        self.whisper = Whisper(\"base.en\")\n        self.sample_rate = 16000\n        self.ws = None\n\n    async def process_audio_stream(self, indata, frames, time, status):\n        if status:\n            print(status)\n\n        # Process audio chunk with Whisper\n        text = self.whisper.transcribe(indata)\n        if text.strip():\n            # Send to server\n            await self.ws.send(text)\n\n    async def start_streaming(self):\n        self.ws = await websockets.connect('ws://gpu-server:8000/chat')\n\n        # Start audio streaming\n        with sd.InputStream(callback=self.process_audio_stream,\n                            channels=1,\n                            samplerate=self.sample_rate):\n            while True:\n                # Receive audio response from server\n                audio_data = await self.ws.recv()\n                # Play audio\n                sd.play(audio_data, self.sample_rate)\n                sd.wait()\n
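For reference, here is a minimal sketch of what the matching service on the GPU server could look like, assuming the FastAPI, Ollama and Coqui TTS stack recommended above. The /chat path mirrors the client sketch, but the model name (llama3), the Coqui voice and the sentence-buffering logic are illustrative assumptions rather than the project's actual code:
# gpu_server/server.py (hypothetical file name) -- sketch only\nimport numpy as np\nimport ollama                      # pip install ollama\nimport uvicorn\nfrom fastapi import FastAPI, WebSocket\nfrom TTS.api import TTS            # Coqui TTS\n\napp = FastAPI()\n# Load one Coqui voice at startup; the model name here is just an example choice.\ntts = TTS('tts_models/en/ljspeech/tacotron2-DDC')\n\n@app.websocket('/chat')\nasync def chat(websocket: WebSocket):\n    await websocket.accept()\n    while True:\n        # 1. Receive a transcript sent by the Raspberry Pi client.\n        question = await websocket.receive_text()\n        sentence = ''\n        # 2. Stream tokens from the local Ollama model.\n        for chunk in ollama.chat(model='llama3',\n                                 messages=[{'role': 'user', 'content': question}],\n                                 stream=True):\n            sentence += chunk['message']['content']\n            # 3. Synthesize and return audio one sentence at a time.\n            #    (Blocking calls are kept for simplicity in this sketch.)\n            if sentence.strip().endswith(('.', '?', '!')):\n                wav = tts.tts(sentence)   # list of float samples\n                await websocket.send_bytes(np.array(wav, dtype=np.float32).tobytes())\n                sentence = ''\n\nif __name__ == '__main__':\n    uvicorn.run(app, host='0.0.0.0', port=8000)\n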
"},{"location":"school-infobot/streaming-tools/","title":"School Infobot Streaming Tools","text":""},{"location":"school-infobot/streaming-tools/#prompt","title":"Prompt","text":"Prompt
We are a group of software engineers that would like to get high-school students exposed to AI and LLMs. We would like to build a mobile robot that will wander the halls of schools and be able to interact with students via voice. We have a brand new Raspberry Pi with an AI Hat that can do model inference faster than before. The Raspberry Pi has a high-quality 360-degree mic and speakers. We also have a wireless connection to a school's GPU which is an NVIDIA GTX 3080Ti with 12GB of RAM. It is running Ollama now. Our team has a preference for Python.
Here are the goals of the project:
What tools should I use to connect all these components together with a streaming interface?
"},{"location":"school-infobot/streaming-tools/#response","title":"Response","text":"Below is a high-level overview of how you can stitch all the components together for a real-time (i.e., \"streaming\") pipeline, along with recommended tools for each step. The approach will let your Raspberry Pi (with its AI Hat) continuously capture audio, convert speech to text, send text to Ollama/Llama running on the remote GPU, receive the response, convert it to speech, and then play it back in near real time.
"},{"location":"school-infobot/streaming-tools/#1-audio-capture-and-streaming-from-the-raspberry-pi","title":"1. Audio Capture and Streaming from the Raspberry Pi","text":"Tools and libraries:
PyAudio (Python) or PortAudio (C/C++).
GStreamer
If you want a lower-level pipeline approach (especially for complex audio routing or if you need to handle multiple input/output devices or audio filters), GStreamer provides a powerful framework with built-in elements for streaming audio over the network.
For example, you can capture audio from the Pi's 360-degree microphone and stream it out via a network protocol like RTP or TCP/UDP.
If your robot is already running ROS2, you could publish audio data to a topic and subscribe to it elsewhere. ROS2 can handle real-time communication via DDS. This might be overkill if you are not using ROS for other parts of your robot, but it's worth considering if you're building a larger robotics stack.
In many simpler cases, PyAudio plus a small Python script is enough to capture short audio segments (or continuous chunks) and feed them to your STT engine.
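To make that concrete, here is a minimal PyAudio capture loop. The 16 kHz sample rate, the chunk size and the stub handle_chunk() function are illustrative assumptions; the handler is where you would pass audio to your STT engine:
# Minimal PyAudio capture loop (sketch); pip install pyaudio\nimport pyaudio\n\nRATE = 16000     # 16 kHz mono is a typical rate for speech recognition\nCHUNK = 1024     # frames per buffer\n\ndef handle_chunk(data):\n    # Placeholder: hand the raw PCM bytes to your STT engine here.\n    print(f'captured {len(data)} bytes')\n\np = pyaudio.PyAudio()\nstream = p.open(format=pyaudio.paInt16, channels=1, rate=RATE,\n                input=True, frames_per_buffer=CHUNK)\ntry:\n    while True:\n        handle_chunk(stream.read(CHUNK))\nexcept KeyboardInterrupt:\n    pass\nfinally:\n    stream.stop_stream()\n    stream.close()\n    p.terminate()\n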
"},{"location":"school-infobot/streaming-tools/#2-speech-to-text-stt","title":"2. Speech-to-Text (STT)","text":"Local vs. Remote STT:
Recommended approach:
Your recognized text then needs to be sent to the remote server (running Ollama / Llama 3.3 on the NVIDIA GTX 3080 Ti).
Tools and libraries to connect to Ollama/Llama:
HTTP/REST API (with streaming)
WebSocket
If you want to implement your own streaming, you could stand up a small WebSocket server around Ollama. This is a bit more custom but allows you to send partial text to Llama, get partial responses back, and maintain an ongoing conversation state.
If Ollama supports or can be wrapped with a gRPC server, you could build a streaming gRPC endpoint.
Recommended approach:
Most out-of-the-box Ollama installations expose a local HTTP API that supports streaming tokens as they're generated. You can forward your text to that API, capture the streaming response in Python, and pass it on to the next step (TTS).
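As a sketch of that approach (assuming a default Ollama install listening on port 11434 and an illustrative model name such as llama3), the /api/generate endpoint streams newline-delimited JSON objects whose response field carries each partial piece of text:
# Stream tokens from a local Ollama server (sketch); pip install requests\nimport json\nimport requests\n\ndef stream_ollama(prompt, model='llama3', host='http://localhost:11434'):\n    # /api/generate streams newline-delimited JSON objects by default.\n    resp = requests.post(f'{host}/api/generate',\n                         json={'model': model, 'prompt': prompt},\n                         stream=True, timeout=120)\n    resp.raise_for_status()\n    for line in resp.iter_lines():\n        if not line:\n            continue\n        chunk = json.loads(line)\n        if chunk.get('done'):\n            break\n        yield chunk.get('response', '')   # one partial piece of the answer\n\nif __name__ == '__main__':\n    for token in stream_ollama('Why is the sky blue?'):\n        print(token, end='', flush=True)\n    print()\n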
Once you receive text back from Llama, you need to turn it into speech. This can either happen locally on the Pi's AI Hat or on the remote server (and then streamed back). Often, TTS is relatively lightweight, so local TTS is common if you have a decent model (e.g., a smaller Coqui TTS or espeak).
Tools and libraries:
Local TTS Engines
Remote TTS
If you want a higher-fidelity, neural TTS model that needs the GPU, you can run something like NVIDIA Riva TTS, Coqui TTS, or another neural TTS on the GPU.
If the Pi cannot handle it, implement a streaming TTS server on the GPU with gRPC or WebSocket and send back a real-time audio stream to the Pi for playback.
Playback on the Raspberry Pi Speaker
Finally, the synthesized audio must be played on the Pi's speakers.
Tools and libraries for audio playback:
alsasink
element on Raspberry Pi. SDL or other audio libraries if you want more advanced features.
Recommended \"Glue\" for Streaming Architecture
Putting this all together, you essentially have two major \"pipeline\" paths:
Here are some good ways to implement the streaming \"glue\":
"},{"location":"school-infobot/streaming-tools/#a-python-async-io-websocketsgrpc","title":"A. Python + Async I/O + WebSockets/gRPC","text":"PyAudio
to capture raw audio frames. Below is a simplified, conceptual outline using Python async and websockets (assuming local STT and TTS on the Pi, plus a remote Ollama server that offers a streaming HTTP SSE endpoint):
import asyncio\nimport websockets\nimport requests\n\nasync def audio_capture_stt():\n \"\"\"\n Continuously capture audio from mic, run STT (local or remote).\n As soon as partial transcripts are ready, yield them.\n \"\"\"\n while True:\n # Capture audio frames from PyAudio\n # Send frames to local STT or use local inference (Vosk/Whisper).\n # If partial transcript is ready, yield it.\n yield partial_transcript\n\nasync def ask_ollama(question):\n \"\"\"\n Send `question` to remote Ollama (HTTP SSE).\n Yield partial responses (tokens) as they arrive.\n \"\"\"\n # SSE or chunked transfer from Ollama\n response_stream = requests.get(\"http://GPU_SERVER:PORT/generate\", stream=True)\n for chunk in response_stream.iter_content(chunk_size=None):\n yield chunk # partial text token from Llama\n\nasync def text_to_speech_playback(text_chunk):\n \"\"\"\n Convert text_chunk to speech (e.g. local Coqui TTS) and play immediately.\n \"\"\"\n # local TTS\n audio_data = coqui_generate_audio(text_chunk)\n # Play with PyAudio or GStreamer sink\n play_audio(audio_data)\n\nasync def main():\n async for partial_text in audio_capture_stt():\n # Once a partial or final transcript is ready,\n # send it to the remote Llama server.\n async for partial_llama_answer in ask_ollama(partial_text):\n # As text arrives, TTS and play\n await text_to_speech_playback(partial_llama_answer)\n\nif __name__ == \"__main__\":\n asyncio.run(main())\n
This architecture ensures minimal latency because everything is happening in a streaming manner:
For Audio Capture and Playback:
For STT:
Local: Vosk / Whisper (possibly utilizing your AI Hat for acceleration).
For Connection to Ollama/Llama 3.3:
HTTP-based streaming (SSE or chunked responses) or a custom WebSocket/gRPC wrapper if needed.
For TTS:
Local: Coqui TTS, eSpeak, Festival, or any neural TTS engine that can run on the Pi's AI Hat.
For Orchestration:
Python with asyncio + websockets / requests for streaming.
With these pieces in place, you'll have a real-time pipeline where the Raspberry Pi captures audio, converts it to text, sends the text to Llama on the GPU, receives back a streaming text response, converts that response to speech, and plays it back to the student.
"},{"location":"school-infobot/streaming-tools/#workflow","title":"Workflow","text":"ChatGPT thought about it for 17 seconds.
Creating the interaction experience
I'm designing a pipeline for a mobile robot: speech to text on Raspberry Pi, text transformation for Ollama, and text to speech. Streaming interface options include WebSocket, gRPC, GStreamer, and ffmpeg for real-time audio transmission.
Evaluating options
I'm weighing Vosk, Google's TTS, Coqui TTS, espeak, NVIDIA's Riva, and cloud solutions for streaming STT to the GPU with Ollama. The Raspberry Pi's roles in audio capture and local or GPU-based processing are also considered.
Crafting the strategy
I'm mapping out the pipeline: real-time audio capture on Raspberry Pi, streaming STT to GPU with Ollama, and streaming text to speech for playback. WebSockets or gRPC are solutions I'm considering for data exchange.
Mapping data flow
I'm integrating the Raspberry Pi with PyAudio for mic audio, using a speech recognition service, and streaming text to the GPU server for Ollama's text generation, followed by TTS on the Pi or GPU.
Integrating tools
I'm pulling together GStreamer, WebSockets, gRPC, Docker, and containerization for streaming audio from the GPU server to the Raspberry Pi, ensuring smooth playback.
Integrating solutions
I'm steering towards using PyAudio or SoX for audio capture, streaming STT via Vosk or Google Speech, and passing text to Ollama through REST or WebSocket. For TTS, exploring eSpeak, gTTS, or Coqui TTS.
"},{"location":"setup/battery-options/","title":"Battery Options","text":""},{"location":"setup/battery-options/#dual-batteries","title":"Dual Batteries","text":"In general, most of our cars use two separate batteries:
Getting batteries charged before each event requires some strong organizational skills. Although the LiPo batteries retain a charge for a long time, the RC car batteries must be fully charged the night before each event.
"},{"location":"setup/battery-options/#18650-standards","title":"18650 Standards","text":"The 18650 is a standard cell size for 3.7 volt Lithium batteries (18mmX65mm). The problem is that if these batteries go below 3.2 volts they become damaged and can't be recharged. This is why you should never use these batteries without charge protection circuits.
A typical 18650 battery produces 4.2 V when fully charged with a nominal voltage of 3.7 V.
The TP4056 is a low-cost charger chip for one (1) cell. If you plan to charge multiple cells simultaneously, each charging slot for a cell needs its own TP4056.
Warning
Never connect a 5V power supply directly to a lithium cell. Even though the cells have built-in protection circuits, those circuits are not chargers. The protection circuit is only a last line of defense that disconnects the cell under abnormal conditions to prevent a catastrophic failure, and it can fail.
"},{"location":"setup/battery-options/#boost-and-buck-converters","title":"Boost and Buck Converters","text":"These are low-cost DC-to-DC voltage converters that allow you to use standard 3.7-volt cells that drive a 5-volt system. Boost converters increase the voltage and buck converters lower the voltage. You can purchase basic devices for $2, but fancier versions have displays that show both the input and output voltages.
We initially used several Anker 2,000 milliamp-hour batteries to power the cars. The batteries would last for an entire single-day event as long as they were charged before the event and not used to power the cars when they were not running on the track. The batteries could typically be recharged over 100 times before they had to be replaced.
Many teams use larger (and heavier) 10,000 milliamp-hour Lithium batteries. These are especially useful when learning to drive around the track. Teams also loved the fact that the INIU batteries showed the percent power remaining.
The INIU Portable Charger, Slimmest 10000mAh 5V/3A Power Bank is also a popular choice since they only cost about $19.
"},{"location":"setup/battery-options/#traditional-rc-car-batteries","title":"Traditional RC Car Batteries","text":"The traditional RC car batteries are based on standard \"Safe\" battery chemistry that can be fully drained and are not a strong fire hazard. The problem is that they take a long time to charge (typically overnight) and they don't store a lot of power so they typically only last a few minutes at high speed.
"},{"location":"setup/battery-options/#camera-lipo-batteries","title":"Camera LiPo Batteries","text":"A newer option is lower-cost rechargeable camera batteries that are built around the Sony NP-F standard. These are widely regarded as safe by the camera industry and there are many low-cost smart batteries and chargers available for around $10 each. They produce a stable 7.4-volt output that is perfect for our DC motors in RC cars.
The only consideration is getting cables that convert the batteries to use the RC car standards. I have made my own cables, but the quality of the connectors on the Donkey Car still leaves some things to be desired. The ends of the connectors are easy to pull off. I think that soldering a new barrel connector on all the RC cars might be another option.
The one other challenge is trying to find a low-cost connection plate. Right now the lowest cost plate is about $15 on Adafruit. I hope we can find lower-cost options or use a 3D printer to print our own.
Adafruit Camcorder Battery Holder/Adapter for Sony NP-F750 w/ DC Jack $15
Two batteries and charger on e-Bay for $23
3D Print Thingiverse NP-F Battery Wall Mount with Soft Lock
Neewer Battery Connection Plate for $27 on Amazon
The camera industry has also developed standard 7.4-volt battery packs that enclose low-cost 18650 battery cells. These standards have electronics built in to protect from drainage. Here is a list of parts that were recommended by the NVIDIA Jetson Hacks team:
Several participants used long battery cables with a small wire gauge. These cables caused voltage drops that made the cars stop working. We told all teams to use short 8-inch battery cables and most of these problems went away. We strongly suggest you build your own cables and use 18-gauge wire.
Sample 1ft Charging Cable
"},{"location":"setup/battery-options/#connectors-and-crimping-tool","title":"Connectors and Crimping Tool","text":"You can also purchase \"raw\" 7.4-volt LiPo batteries that are designed to work with RC cars. I have not purchased these directly so I can't attest to their reliability or safety. I will say that the power connector on them appears to be incompatible with our current cars.
1500mAh 7.4V 2S 18650 Battery
"},{"location":"setup/battery-options/#references","title":"References","text":"NVIDIA Jetson on Battery Power - a nice video by
"},{"location":"setup/building-arl-image/","title":"Building an AI Racing League Image for the Car","text":"This process takes about four hours and requires close attention to detail.
"},{"location":"setup/building-arl-image/#install-the-latest-nvidia-image","title":"Install The Latest NVIDIA image","text":""},{"location":"setup/building-arl-image/#enable-ssh","title":"Enable SSH","text":"This allows you to do a remote login to the car
sudo systemctl enable ssh\nsudo systemctl start ssh\n
Test ssh:
ssh arl@arl-01\n
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 4.9.253-tegra aarch64)\n\n * Documentation: https://help.ubuntu.com\n * Management: https://landscape.canonical.com\n * Support: https://ubuntu.com/advantage\nThis system has been minimized by removing packages and content that are\nnot required on a system that users do not log into.\n\nTo restore this content, you can run the 'unminimize' command.\n\nExpanded Security Maintenance for Infrastructure is not enabled.\n\n0 updates can be applied immediately.\n\nEnable ESM Infra to receive additional future security updates.\nSee https://ubuntu.com/esm or run: sudo pro status\n
"},{"location":"setup/building-arl-image/#add-a-swap-file","title":"Add a swap file","text":"The Jetson Hacks GitHub repo contains a nice UNIX script that will add a swap file.
git clone https://github.com/JetsonHacksNano/installSwapfile\ncd installSwapfile\n./installSwapfile.sh\n
You should reboot the Nano after you add this file.
"},{"location":"setup/building-arl-image/#add-htop","title":"Add HTOP","text":"The standard UNIX top
command does not display the load across multiple cores. The much more modern htop
is preferred and has better use of color.
sudo apt-get install htop\n
"},{"location":"setup/building-arl-image/#customize-browser-add-the-bookmark-bar","title":"Customize Browser Add the Bookmark Bar","text":"Ubuntu comes with several \"Office\" applications for doing word-processing, spreadsheets and presentations. These applications take up a lot of disk space so we will remove them.
sudo apt-get remove --purge libreoffice*\nsudo apt-get clean\nsudo apt-get autoremove\n
"},{"location":"setup/building-arl-image/#upgrade-local-libraries","title":"Upgrade Local Libraries","text":""},{"location":"setup/building-arl-image/#download-donkey-car-dependencies","title":"Download Donkey Car Dependencies","text":"sudo apt-get update -y sudo apt-get upgrade -y sudo apt-get install -y libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev liblapack-dev libblas-dev gfortran sudo apt-get install -y python3-dev python3-pip sudo apt-get install -y libxslt1-dev libxml2-dev libffi-dev libcurl4-openssl-dev libssl-dev libpng-dev libopenblas-dev sudo apt-get install -y git nano sudo apt-get install -y openmpi-doc openmpi-bin libopenmpi-dev libopenblas-dev
"},{"location":"setup/building-arl-image/#monitoring-activity","title":"Monitoring Activity","text":""},{"location":"setup/building-arl-image/#version-checks","title":"Version Checks","text":"pip3 --version\n
pip 21.3.1 from /home/arl/env/lib/python3.6/site-packages/pip (python 3.6)\n
"},{"location":"setup/calibrate/","title":"Calibrate","text":"$ donkey calibrate --channel 0 --bus=1\n
Result
________ ______ _________ \n___ __ \\_______________ /___________ __ __ ____/_____ ________\n__ / / / __ \\_ __ \\_ //_/ _ \\_ / / / _ / _ __ `/_ ___/\n_ /_/ // /_/ / / / / ,< / __/ /_/ / / /___ / /_/ /_ / \n/_____/ \\____//_/ /_//_/|_| \\___/_\\__, / \\____/ \\__,_/ /_/ \n /____/ \n\nusing donkey v4.2.1 ...\nsombrero enabled\ninit PCA9685 on channel 0 address 0x40 bus 1\nUsing PWM freq: 60\nTraceback (most recent call last):\n File \"/home/pi/env/bin/donkey\", line 33, in <module>\n sys.exit(load_entry_point('donkeycar', 'console_scripts', 'donkey')())\n File \"/home/pi/projects/donkeycar/donkeycar/management/base.py\", line 500, in execute_from_command_line\n c.run(args[2:])\n File \"/home/pi/projects/donkeycar/donkeycar/management/base.py\", line 219, in run\n c = PCA9685(channel, address=address, busnum=busnum, frequency=freq)\n File \"/home/pi/projects/donkeycar/donkeycar/parts/actuator.py\", line 30, in __init__\n self.pwm = Adafruit_PCA9685.PCA9685(address=address)\n File \"/home/pi/env/lib/python3.7/site-packages/Adafruit_PCA9685/PCA9685.py\", line 75, in __init__\n self.set_all_pwm(0, 0)\n File \"/home/pi/env/lib/python3.7/site-packages/Adafruit_PCA9685/PCA9685.py\", line 111, in set_all_pwm\n self._device.write8(ALL_LED_ON_L, on & 0xFF)\n File \"/home/pi/env/lib/python3.7/site-packages/Adafruit_GPIO/I2C.py\", line 114, in write8\n self._bus.write_byte_data(self._address, register, value)\n File \"/home/pi/env/lib/python3.7/site-packages/Adafruit_PureIO/smbus.py\", line 327, in write_byte_data\n self._device.write(data)\nOSError: [Errno 121] Remote I/O error\nsombrero disabled\n
"},{"location":"setup/calibrate/#diagnostics","title":"Diagnostics","text":""},{"location":"setup/calibrate/#i2c-detect","title":"I2C Detect","text":"i2cdetect -y 1\n 0 1 2 3 4 5 6 7 8 9 a b c d e f\n00: -- -- -- -- -- -- -- -- -- -- -- -- -- \n10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- \n20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- \n30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- \n40: 40 -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- \n50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- \n60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- \n70: 70 -- -- -- -- -- -- -- \n
"},{"location":"setup/calibrate/#i2c-device-file","title":"I2C Device File","text":"ls -ld /dev/i2*\ncrw-rw---- 1 root i2c 89, 1 Jul 3 13:17 /dev/i2c-1\n
"},{"location":"setup/calibrate/#i2c-functions-enabled","title":"I2C Functions Enabled","text":" i2cdetect -F 1\n
returns:
Functionalities implemented by /dev/i2c-1:\nI2C yes\nSMBus Quick Command yes\nSMBus Send Byte yes\nSMBus Receive Byte yes\nSMBus Write Byte yes\nSMBus Read Byte yes\nSMBus Write Word yes\nSMBus Read Word yes\nSMBus Process Call yes\nSMBus Block Write yes\nSMBus Block Read no\nSMBus Block Process Call no\nSMBus PEC yes\nI2C Block Write yes\nI2C Block Read yes\n
Note that both SMBus Block Read and SMBus Block Process Call are set to no. The rest are yes.
"},{"location":"setup/calibrate/#upgrade-to-python-370","title":"Upgrade to Python 3.70","text":"python3 -m virtualenv -p python3.7 env --system-site-packages\n
created virtual environment CPython3.7.3.final.0-32 in 2535ms\n creator CPython3Posix(dest=/home/pi/env, clear=False, no_vcs_ignore=False, global=True)\n seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/pi/.local/share/virtualenv)\n added seed packages: pip==21.1.2, setuptools==57.0.0, wheel==0.36.2\n activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator\n
python --version\n
Python 3.7.3\n
"},{"location":"setup/camera-testing/","title":"Testing the Camera","text":"To test the camera and cable, we need a command that captures video from a CSI camera connected to an NVIDIA Jetson Nano, converts the video format and resolution, and then displays the video on the screen. We will use the GStreamer command first.
"},{"location":"setup/camera-testing/#gstreamer-test-on-the-nano","title":"GStreamer Test on the Nano","text":"gst-launch-1.0 nvarguscamerasrc ! \\\n'video/x-raw(memory:NVMM),width=3820, height=2464, framerate=21/1, format=NV12' ! \\\nnvvidconv flip-method=0 ! \\\n'video/x-raw,width=960, height=616' ! \nnvvidconv ! \\\nnvegltransform ! \\\nnveglglessink -e\n
This command is a GStreamer command used to test the functionality of a camera on a NVIDIA Jetson Nano device. GStreamer is a multimedia framework that provides a pipeline for media data. The gst-launch-1.0
utility is used to build and run basic GStreamer pipelines.
On newer systems with two cameras you can add: sensor-id=0
to make sure you are getting camera 0.
Here's a breakdown of the command:
nvarguscamerasrc sensor_mode=0
: This is a GStreamer plugin specific to the NVIDIA platform that provides support for the Camera Serial Interface (CSI) cameras. sensor_mode=0
indicates that the command should use the first sensor mode of the camera. The sensor mode usually defines properties such as the resolution and frame rate that the camera supports.
'video/x-raw(memory:NVMM),width=3820, height=2464, framerate=21/1, format=NV12'
: This part of the command specifies the desired output from the camera source. The properties indicate that the video should be in NV12 format, with a resolution of 3820x2464 pixels and a frame rate of 21 frames per second. NVMM refers to NVIDIA's proprietary multimedia memory.
nvvidconv flip-method=0
: This is another NVIDIA specific GStreamer plugin that converts video from one format to another. The flip-method=0
option means that no flipping operation should be performed on the frames.
'video/x-raw,width=960, height=616'
: This specifies the desired output format and resolution after the conversion. The resolution is downscaled to 960x616 pixels.
nvvidconv ! nvegltransform ! nveglglessink -e
: This part of the pipeline takes the video from the conversion, applies an EGLStream transformation (nvegltransform
) and then sends it to a EGL/GLES-based render sink (nveglglessink
). This sink displays the video on the device's screen. The -e
flag at the end of the command tells GStreamer to send an end-of-stream signal when the source stops, which will properly close down the pipeline.
In the context of a GStreamer command, the \"!\" (aka bang) character is used to connect different elements of a GStreamer pipeline together. It serves a similar role to the UNIX \"|\" (pipe) character in a regular UNIX shell command, where it's used to pipe the output from one command into another.
However, there's an important difference between the two. In a UNIX shell command, the | character sends the standard output (stdout) of one command to the standard input (stdin) of another. In a GStreamer pipeline, the ! character doesn't simply pipe data from one element to the next. Instead, it establishes a connection between two GStreamer elements, allowing them to negotiate formats, buffer management, and other details. This negotiation process can involve more complex operations like format conversion, and it happens before any data is actually transferred.
So, in summary, while | and ! might seem similar, the latter is used in GStreamer to create more complex, negotiated connections between different multimedia processing elements.
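If you want to read the same CSI camera from Python, a very similar pipeline can be handed to OpenCV by ending it in an appsink element instead of an on-screen sink. This is a sketch that assumes your OpenCV build includes GStreamer support (as on the stock Jetson image); the 960x616 output size simply matches the display size used in the test command above:
# Read the CSI camera from Python via OpenCV + GStreamer (sketch).\nimport cv2\n\n# Same idea as the gst-launch test, but the pipeline ends in an appsink\n# element so OpenCV can pull BGR frames instead of rendering them.\npipeline = (\n    'nvarguscamerasrc ! '\n    'video/x-raw(memory:NVMM), format=NV12 ! '\n    'nvvidconv flip-method=0 ! '\n    'video/x-raw, width=960, height=616, format=BGRx ! '\n    'videoconvert ! video/x-raw, format=BGR ! appsink'\n)\n\ncap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)\nif not cap.isOpened():\n    raise RuntimeError('Could not open the CSI camera')\nok, frame = cap.read()\nif ok:\n    cv2.imwrite('test_frame.jpg', frame)   # quick sanity check: save one frame\ncap.release()\n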
"},{"location":"setup/camera-testing/#flip-modes","title":"Flip Modes","text":"The flip-method
property of the nvvidconv
(NVIDIA Video Converter) plugin controls the orientation of the output video in the NVIDIA Jetson platform. This is useful for handling scenarios where the camera could be mounted in various orientations.
Here are the possible values for the flip-method
parameter:
0
(Identity) - No rotation, no vertical flip.1
(Counterclockwise) - Rotate counter-clockwise 90 degrees.2
(Rotate 180) - Rotate 180 degrees.3
(Clockwise) - Rotate clockwise 90 degrees.4
(Horizontal Flip) - Flip horizontally.5
(Upper Right Diagonal) - Flip across upper right/lower left diagonal.6
(Vertical Flip) - Flip vertically.7
(Upper Left Diagonal) - Flip across upper left/lower right diagonal.Each number corresponds to a specific operation on the video frames. The specific operation will be applied to each frame of the video before it's sent to the next element in the GStreamer pipeline.
"},{"location":"setup/camera-testing/#resources","title":"Resources","text":""},{"location":"setup/camera-testing/#dans-blog","title":"Dan's Blog","text":"NVIDIA CSI Camera GitHub Repo
"},{"location":"setup/camera-testing/#jetson-hacks-blog","title":"Jetson Hacks Blog","text":"https://jetsonhacks.com/2019/04/02/jetson-nano-raspberry-pi-camera/
"},{"location":"setup/clone-sd/","title":"Cloning a microSD Card using UNIXdd
Command","text":"This guide explains how to clone a microSD card using the dd
command on UNIX and UNIX-like systems.
dd
CommandFirst, identify the device name for your microSD card. Use either lsblk
or df
to list devices and their mount points.
lsblk\n
Or:
df -h\n
Look for the device corresponding to your microSD card. It's generally something like /dev/sdX
or /dev/mmcblkX
, where X
is a letter.
\u26a0\ufe0f Caution: Be very careful to identify the correct device, as choosing the wrong one could result in data loss.
Before copying data, unmount the partitions of the microSD card to ensure that no data is being read or written during the cloning process.
sudo umount /dev/sdX*\n
Use the dd
command to create an image file of the microSD card.
sudo dd if=/dev/sdX of=raspberrypi_backup.img bs=4M status=progress\n
if
: Input File \u2014 the device you are copying from (your microSD card).of
: Output File \u2014 the image file you are creating.bs
: Block Size \u2014 specifies how much data should be read at each iteration. 4M
is usually a good size.status=progress
: shows the progress during the copy.To clone the image onto a new microSD card, insert the new card and identify it just like you did in the first step.
sudo dd if=raspberrypi_backup.img of=/dev/sdY bs=4M status=progress\n
Replace /dev/sdY
with the device name of your new microSD card.
\u26a0\ufe0f Caution: Again, be very careful to identify the correct device to avoid data loss.
If your new microSD card is larger than the original, you might need to expand the filesystem to use the additional space. You can do this using raspi-config
on the Raspberry Pi.
sudo raspi-config\n
Navigate to Advanced Options
> Expand Filesystem
.
After following these steps, you should have successfully cloned your Raspberry Pi's microSD card.
Note: The dd
command can be very dangerous if misused. Always double-check your device names and ensure you understand the commands you're running.
When asked:
Do you wish the installer to prepend the Miniconda3 install location to PATH in your /root/.bashrc?
Answer: yes
cd /tmp\nwget http://repo.continuum.io/miniconda/Miniconda3-latest-Linux-armv7l.sh\nchmod 755 Miniconda3-latest-Linux-armv7l.sh\n./Miniconda3-latest-Linux-armv7l.sh\n
"},{"location":"setup/conda-pi-setup/#test-conda-in-your-path","title":"Test Conda In Your PATH","text":"which conda\n
Should return: /home/pi/miniconda3/bin/conda\n
"},{"location":"setup/conda-pi-setup/#add-the-raspberry-pi-channel-to-conda","title":"Add the Raspberry Pi Channel to Conda","text":"conda config --add channels rpi\nconda install python=3.6\n
"},{"location":"setup/conda-pi-setup/#test-python","title":"Test Python","text":"python --version\n
Python 3.6.6\n
"},{"location":"setup/conda-pi-setup/#create-a-donkeycar-conda-environment","title":"Create a DonkeyCar Conda Environment","text":"conda create --name donkey python=3\n
The following NEW packages will be INSTALLED:\n\n ca-certificates: 2018.8.24-0 rpi\n certifi: 2018.8.24-py36_1 rpi\n ncurses: 6.1-h4f752ac_1 rpi\n openssl: 1.0.2r-hdff2a78_0 rpi\n pip: 18.0-py36_1 rpi\n python: 3.6.6-hd0568c0_1 rpi\n readline: 7.0-hcb560eb_1 rpi\n setuptools: 40.2.0-py36_0 rpi\n sqlite: 3.24.0-hfcb1bcf_1 rpi\n tk: 8.6.8-h849d6a0_0 rpi\n wheel: 0.31.1-py36_1 rpi\n xz: 5.2.4-hdff2a78_1 rpi\n zlib: 1.2.11-hdff2a78_1003 rpi\n\nProceed ([y]/n)? y\n
"},{"location":"setup/conda-pi-setup/#add-the-conda-shell-to-the-end-of-our-bashrc-file","title":"Add the conda shell to the end of our .bashrc file","text":"echo \". /home/pi/miniconda3/etc/profile.d/conda.sh\" >> ~/.bashrc\n
conda activate\n
The shell prompt should now be \"base\""},{"location":"setup/conda-pi-setup/#activate-your-donkey-python-environment","title":"Activate Your Donkey Python Environment","text":"source activate donkey\n
You should see the prompt:
(donkey) pi@myhost:~ $\n
"},{"location":"setup/conda-pi-setup/#verify-git-is-installed","title":"Verify Git Is installed","text":"git --version\n
git version 2.20.1
"},{"location":"setup/conda-pi-setup/#clone-the-donkeycar-repository","title":"Clone the DonkeyCar repository","text":"git clone https://github.com/autorope/donkeycar\ncd donkeycar\ngit checkout master\n
sudo apt-get install build-essential python3 python3-dev python3-pip python3-virtualenv python3-numpy python3-picamera python3-pandas python3-rpi.gpio i2c-tools avahi-utils joystick libopenjp2-7-dev libtiff5-dev gfortran libatlas-base-dev libopenblas-dev libhdf5-serial-dev libgeos-dev git ntp\n
sudo apt-get install libilmbase-dev libopenexr-dev libgstreamer1.0-dev libjasper-dev libwebp-dev libatlas-base-dev libavcodec-dev libavformat-dev libswscale-dev libqtgui4 libqt4-test\n
"},{"location":"setup/conda-pi-setup/#clone-donkeycar-repo","title":"Clone DonkeyCar Repo","text":"pip freeze\n
certifi==2018.8.24
git clone https://github.com/autorope/donkeycar\ncd donkeycar\npip install -e .[pi]\n
"},{"location":"setup/donkey-cmd/","title":"The Donkey Command","text":"In normal configuration, the donkey
is a UNIX shell script that is located in the bin directory of your local environment. For example here is a sample path to the donkey
command if you are logged into the standard AI Racing account called arl
/home/arl/env/bin\n
#!/home/dan/env/bin/python3\n# EASY-INSTALL-ENTRY-SCRIPT: 'donkeycar','console_scripts','donkey'\n__requires__ = 'donkeycar'\nimport re\nimport sys\nfrom pkg_resources import load_entry_point\n\nif __name__ == '__main__':\n sys.argv[0] = re.sub(r'(-script\\.pyw?|\\.exe)?$', '', sys.argv[0])\n sys.exit(\n load_entry_point('donkeycar', 'console_scripts', 'donkey')()\n )\n
This file is automatically generated
This UNIX script is automatically generated and serves as an entry point for the donkeycar package. The console_scripts and donkey parameters provide more information about the entry point, usually used by package management tools like setuptools.
"},{"location":"setup/donkey-cmd/#imports","title":"Imports","text":"import re\nimport sys\nfrom pkg_resources import load_entry_point\n
On the installer for the 4.5.1, this file is not generated.
"},{"location":"setup/gpu-options/","title":"GPU Options","text":"PCPartPicker Part List: https://pcpartpicker.com/list/mrFYPX
CPU: AMD Ryzen 5 3600 3.6 GHz 6-Core Processor ($95.00 @ Amazon) Motherboard: MSI A520M-A PRO Micro ATX AM4 Motherboard ($101.11 @ Amazon) Memory: Silicon Power SP016GBLFU320X02 16 GB (1 x 16 GB) DDR4-3200 CL22 Memory ($23.99 @ Amazon) Storage: TEAMGROUP MP33 512 GB M.2-2280 PCIe 3.0 X4 NVME Solid State Drive ($22.49 @ Amazon) Video Card: Asus Dual GeForce RTX 3060 V2 OC Edition GeForce RTX 3060 12GB 12 GB Video Card ($299.99 @ Amazon) Case: Thermaltake Versa H18 MicroATX Mini Tower Case ($49.99 @ Amazon) Power Supply: be quiet! Pure Power 11 400 W 80+ Gold Certified ATX Power Supply ($89.69 @ Amazon) Monitor: *Acer V227Q Abmix 21.5\" 1920 x 1080 75 Hz Monitor ($87.29 @ Amazon) Total: $769.55
"},{"location":"setup/installation-tips/","title":"Installation Tips","text":"Installing the Donkey Car software is a complex process. It typically takes about six hours to add the required tools and Python libraries to a raw NVIDIA or Raspberry Pi OS image.
Here are a few tips.
"},{"location":"setup/installation-tips/#use-the-h-mode-when-installing-software","title":"Use the -H mode when installing software","text":"We should always use the HOME environment variables when using sudo:
sudo -H pip install package_name\n
In this command, sudo -H is used to ensure that the home environment variable is set to the home directory of the target user (root in this case), which can sometimes avoid permissions issues that arise when sudo retains your normal user's home directory environment variable. Essentially, the -H option makes sure that the operation is as clean as possible from an environment perspective, which can be important when installing software that might write configuration files or data into the user's home directory.
"},{"location":"setup/installation-tips/#remember-to-use-python3-and-pip3","title":"Remember to Use Python3 and Pip3","text":"Sometimes older Python2 tools get mixed up with the current Python 3 tools. By adding the \"3\" suffix to your commands you can guarantee that that your path will pick up the right version of Python tools.
"},{"location":"setup/installation-tips/#when-to-use-python3-and-pip3-over-python-and-pip-in-unix-shell","title":"When to Usepython3
and pip3
Over python
and pip
in UNIX Shell","text":""},{"location":"setup/installation-tips/#python-version","title":"Python Version","text":"python3
explicitly runs Python 3.x, whereas python
might run either Python 2.x or Python 3.x depending on the system configuration.Multiple Python Installations: On some systems, you may have both Python 2 and Python 3 installed. In such cases, python
usually refers to Python 2 and python3
to Python 3. Similarly, pip
might point to the package manager for Python 2, and pip3
will point to Python 3. Always use python3
and pip3
to ensure that you're working with Python 3.x.
Aliases: Some systems alias python
to python3
. This is common in more recent Linux distributions. On such systems, it may not matter if you use python
or python3
. However, using python3
is more explicit and can avoid ambiguity.
Version-Specific Code: If you're running or writing code that is specific to Python 3, use python3
. Similarly, if you're installing packages that are intended for use with Python 3, use pip3
.
Portability: If you're writing a script that you plan to share with others, it's safer to specify python3
if your code is not compatible with Python 2.
python
and pip
commands will point to the versions associated with the active environment, regardless of whether it's Python 2 or 3. So, within a Python 3 virtual environment, python
and pip
will be equivalent to python3
and pip3
.Explicit is Better: If you're in doubt, being explicit is usually better. Using python3
and pip3
makes it clear that you're using Python 3.
Check Version: If ever in doubt, you can always check which version you're running by using python --version
or python3 --version
and pip --version
or pip3 --version
.
In summary, if you want to make sure you are using Python 3 and its associated package manager, use python3
and pip3
.
Although you can boot the Raspberry Pi and NVIDIA Nano on a 16GB microSD card, it will quickly run out of room after you download the 36K Python libraries. Our suggestion is to use a minimum of a 64 GB card and ideally a 128 GB microSD card.
In the item above, the \"Class 10\" indicates it will write above 10 MB/sec.
"},{"location":"setup/microSD-write-speed/#bandwidth-calculation-for-224x224-color-images","title":"Bandwidth Calculation for 224x224 Color Images","text":"What speed microSD card should you purchase? Low-cost cards write at 10 MB/sec and more expensive cards can run up to 100 MB/sec. What one should we purchase?
The default image for the Donkey Car is 224x224 color images.
"},{"location":"setup/microSD-write-speed/#analysis","title":"Analysis","text":"Calculate the total microSD card write bandwidth needed for a camera that reads 224x224 color images and needs to write 20 images per second to the disk.
"},{"location":"setup/microSD-write-speed/#step-by-step-calculation","title":"Step-by-Step Calculation","text":""},{"location":"setup/microSD-write-speed/#step-1-calculate-the-size-of-a-single-image","title":"Step 1: Calculate the Size of a Single Image","text":"A 224x224 color image would have 224 * 224 = 50,176 pixels.
Assuming the image uses a standard RGB color scheme, each pixel would have three color channels: Red, Green, and Blue. Each channel typically requires 1 byte (or 8 bits) to store its intensity value. So, each pixel would need 3 bytes. The size of a single image would be:
Size of a single image = Number of pixels * Bytes per pixel\n\n= 50,176 pixels * 3 bytes/pixel\n= 150,528 bytes/image\n= 150 KB\napprox 0.147 MB\n
"},{"location":"setup/microSD-write-speed/#step-2-calculate-the-total-bandwidth-needed-per-second","title":"Step 2: Calculate the Total Bandwidth Needed Per Second","text":"We are writing 20 images a second to the disk, so the total bandwidth needed per second would be:
Total bandwidth per second = Size of a single image * Number of images per second\n\n= 0.147 MB * 20\n= approx 2.94 megabytes/s\n
"},{"location":"setup/microSD-write-speed/#conclusion","title":"Conclusion","text":"The microSD would need a total bandwidth of approximately 3 MB/s to write 20 224x224 color images to the disk each second. Since most microSD cards today write from 10 MB/s to 100 MB/s there is no worry about having enough write speed to the microSD card.
"},{"location":"setup/nano-docker/","title":"Docker on Nano","text":"Note
These instructions were copied from a message on the Discord Server that were posed by user naisy on at 08/31/2023 5:48 AM. They are not supported by the DonkeyCar project (yet).
Due to the complexities of building the correct environments on the Nano, one alternative is to use the Docker container system. Although this adds some additional overhead, it means that you can download a virtual machine with all the correct software installed.
You must have a minumum of a 64GB microSD card to run Docker on the NVIDIA Nano.
"},{"location":"setup/nano-docker/#step-1-download-docker-github-repo","title":"Step 1: Download Docker GitHub Repo","text":"In this step we will clone a repository that has some very small UNIX shell scripts.
mkdir ~/projects\ncd ~/projects\ngit clone https://github.com/naisy/Docker\ncd Docker\n
"},{"location":"setup/nano-docker/#step-2-run-the-permissions-shell-script","title":"Step 2: Run the Permissions Shell Script","text":"mkdir ~/docker\ncp run-jetson-jp461-donkeycar??.sh ~/docker\ncp run-jetson-jp461-gpio-permission.sh ~/docker\n\ncd ~/docker\n./run-jetson-jp461-gpio-permission.sh\n
"},{"location":"setup/nano-docker/#step-3-run-the-main-shell-script","title":"Step 3: Run the Main Shell Script","text":"Due to memory limitations, only one of either DonkeyCar 4.5 or DonkeyCar 5.0 should be activated.
"},{"location":"setup/nano-docker/#donkeycar-45","title":"DonkeyCar 4.5","text":"./run-jetson-jp461-donkeycar45.sh
"},{"location":"setup/nano-docker/#or-donkeycar-50","title":"or DonkeyCar 5.0","text":"./run-jetson-jp461-donkeycar50.sh
"},{"location":"setup/nano-docker/#login-into-the-virtual-machine","title":"Login into the Virtual Machine","text":"Log in to the jupyterlab terminal: PC Web browser for DonkeyCar 4.5
`` http://your_jetson_ip:8890/
for DonkeyCar 5.0 (dev3)\n
http://your_jetson_ip:8891/ Password: jupyterlab Launch JupyterLab Terminal ``` DonkeyCar: From this point on, it is exactly the same as a official donkeycar.
"},{"location":"setup/nano-docker/#create-mycar","title":"create mycar","text":"donkey createcar --path ~/data/mycar45 cd ~/data/mycar45 ls
"},{"location":"setup/nano-python-upgrade/","title":"How to Upgrade Python on the NVIDIA Nano","text":"The current Jetson Nano image is version 4.6.1. The Donkey Car software only runs on 4.5.1
There are several bugs with the Donkey Car system that do not run on Jetson Nano 4.5.2. So how do we get the latest good stuff without allowing the system to automatically update to 4.5.2?
Warning
Do not run these commands!
sudo apt-get update -y\nsudo apt-get upgrade -y\n
When the command sudo apt-get upgrade -y
is sent, the JetPack dependency tree tries to upgrade to Jetpack version 4.5.2 which will cause the JetPack to lock up when it tries to boot into the Ubuntu Desktop. Note, this is a bug in the NVIDIA code, not the Donkey Car code.
We should never run sudo apt-get update -y
or sudo apt-get upgrade -y
when running JP4.5.1.
The following is how we upgrade Ubuntu, but keep JP from attempting to upgrade to JP4.5.2:
Warning
Always use the \"Software Updater GUI\" to update your Ubuntu and never run sudo apt-get upgrade -y
.
After completing the above process, let the Desktop stay open for a while and the \"Software Updater\" will appear.
Let the \"Software Updater\" update Ubuntu whenever you use the Desktop.
"},{"location":"setup/nano-python-upgrade/#references","title":"References","text":"This is a checklist that is genralized for all our events. We can't assume any network connectivity at these events.
"},{"location":"setup/nano-sd-image-checklist/#required","title":"Required","text":"The DonkeyCar uses the low cost PCA9685 PWM board.
"},{"location":"setup/pwm-board/#pca9685-pwm-board","title":"PCA9685 PWM Board","text":""},{"location":"setup/pwm-board/#pi-40-pin-header","title":"Pi 40 Pin Header","text":""},{"location":"setup/pwm-board/#connections","title":"Connections","text":""},{"location":"setup/pwm-board/#references","title":"References","text":"We only use the 8GB version:
Raspberry Pi 4
"},{"location":"setup/raspberry-pi-parts/#power-supply","title":"Power Supply","text":"The Raspberry Pi 4 needs a 5V.1, 3A (15W) USB-C Power Supply. It is unclear to us if you can connect existing barrel connectors power directly to the 40-pin 5V/GND pins.
Offical Raspberry Pi Page
Digi-Key Part Number 2648-SC0445-ND - AC/DC WALL MNT ADAPTER 5.1V 15W
Amason Raspberry Pi 15W USB-C Power Supply US - White $15 - Official Raspberry Pi 4 type-c power supply Input:100-240V 50/60Hz 0.5A,Output:5.1-3.0A
"},{"location":"setup/raspberry-pi-parts/#40-pin-power-option","title":"40-Pin Power Option","text":"Apt-get is the software package installed on the Raspberry Pi OS that allows you to install application libraries.
"},{"location":"setup/software-install-notes/#donkeycar-libraries-required","title":"DonkeyCar Libraries (required)","text":"sudo apt-get install build-essential python3 python3-dev python3-pip python3-virtualenv python3-numpy python3-picamera python3-pandas python3-rpi.gpio i2c-tools avahi-utils joystick libopenjp2-7-dev libtiff5-dev gfortran libatlas-base-dev libopenblas-dev libhdf5-serial-dev libgeos-dev git ntp\n
The other libraries are mostly small support libraries used for supporting debugging.
"},{"location":"setup/software-install-notes/#opencv-optional","title":"OpenCV (optional)","text":"sudo apt-get install libilmbase-dev libopenexr-dev libgstreamer1.0-dev libjasper-dev libwebp-dev libatlas-base-dev libavcodec-dev libavformat-dev libswscale-dev libqtgui4 libqt4-test\n
"},{"location":"setup/track-options/","title":"Track Options","text":"Although you can just put tape down on a floor, that is time-consuming and is often a low-quality track. There are several other options and the prices vary from under $100 to $1,300.
"},{"location":"setup/track-options/#vinyl-tarps","title":"Vinyl Tarps","text":"In Minnesota, Billboard Tarps sells used vinyl sign material. For around $70 you can get a 16' X 25' used black billboard 9oz black vinyl sign that is ideal for creating your own track.
Here is a full-size track that is 25' X 35' for about $170:
This is what 1/2 of the 16'X'25 foot tarp looks like after delivery from Billboard Tarps and Vinyl.
Billboard Tarps and Vinyl - We suggest you get a dark color (black or dark blue) and then tape down white edges and a yellow dashed line in the center. You can also buy smaller scraps and use HH-66 vinyl cement as an adhesive to glue smaller tarps together. You can also glue a black track on a white background to make the road.
"},{"location":"setup/track-options/#picking-the-right-size","title":"Picking the Right Size","text":"The typical dimensions of a full-event track is 22 x 34 feet. These dimensions are based on the DIYRobocars Standard Track, which is a popular track for donkey car racing.
The smaller track is a good option for beginners, as it is easier to navigate and control. The larger track is a better option for experienced drivers, as it offers more challenges and opportunities for speed.
Of course, the dimensions of a donkey car track can vary depending on the specific design. However, the dimensions listed above are a good starting point for anyone who is planning to build or race a donkey car.
"},{"location":"setup/track-options/#keeping-a-standard-width","title":"Keeping A Standard Width","text":"The standard width of all the \"road\" tracks is two feet or 24 inches. This is the distance to the centerline of the white edges. The roads are typically black with a white edge and a dashed yellow line down the middle of the track. The key is to have a high contrast between the black road and the white edges.
Many people use 2\" (or 1 and 7/8\") inch wide duct tape or Gaffers tape. Gaffer's tape is often preferred for temporary events on carpet. Gaffer's tape doesn't harm the surface to which it adhered.
"},{"location":"setup/track-options/#minnesota-stem-partners-tracks","title":"Minnesota STEM Partners Tracks","text":"Below is a sample of a tarp purchased from Billboard Tarps. Note the actual track is twice this size since it is still folded in half in this photo.
Track setup in the driver training room:
Note that this track does not adhere to the 2-foot wide rule. This is sometimes done when you have many students doing practice driving on the same track.
"},{"location":"setup/track-options/#optum-track","title":"Optum Track","text":"Optum printed their own track on a local printer that specialized in printing large format signage. The custom printing cost was about $1,300.00
"},{"location":"setup/track-options/#best-buy-track","title":"Best Buy Track","text":"Best Buy also printed its own track for their events. This photo only shows about 1/3 of the track.
"},{"location":"setup/track-options/#dan-mccrearys-basement-track","title":"Dan McCreary's Basement Track","text":"This track is just a single piece of white electrical tape.
"},{"location":"setup/track-options/#interlocking-foam-mats","title":"Interlocking Foam Mats","text":"You can also purchase interlocking foam mats. These are typically two feet by two feet and cost about $30 for a package of 6. Since each package covers 24 square feet and a full track is about 24x36 feet (758 square feet) we can see the cost of 32 packages is around $960.00.
Interlocking Foam Mats From WalMart
Amazon Foam Mats
"},{"location":"setup/track-options/#references","title":"References","text":"DIYRobocars Standard Track
"},{"location":"training-logs/dans-basement/","title":"Dans Basement Training Log","text":"I have a very small track in my basement. I put down a single white line about 3/4 inch wide using white electrical tape. The background was a marble blue expoy floor with a lot of color variation. The surface was very reflective and there were lights in the ceiling with lots of glare. I drove the car around 10 times in each direction and collected around 4,500 images.
"},{"location":"training-logs/dans-basement/#catalogs","title":"Catalogs","text":"I manually edited the catlog files and then edited the manifest.json file to modify the paths:
{\"paths\": [\"catalog_3.catalog\", \"catalog_4.catalog\", \"catalog_5.catalog\", \"catalog_6.catalog\", \"catalog_7.catalog\"]\n
wc -l data/dans-basement/*.catalog\n
781 data/dans-basement/catalog_3.catalog\n 1000 data/dans-basement/catalog_4.catalog\n 1000 data/dans-basement/catalog_5.catalog\n 1000 data/dans-basement/catalog_6.catalog\n 750 data/dans-basement/catalog_7.catalog\n 4531 total\n
This matched the ls -1 ~/mycar/data/dans-basement/images | wc -l
command that counted the number of images.
I time the training time on the NIVID RTX 2080 and got the model trained in about 1.5 minutes.
$ time donkey train --tub=./data/dans-basement --model=./models/dans-basement.h5\n________ ______ _________ \n___ __ \\_______________ /___________ __ __ ____/_____ ________\n__ / / / __ \\_ __ \\_ //_/ _ \\_ / / / _ / _ __ `/_ ___/\n_ /_/ // /_/ / / / / ,< / __/ /_/ / / /___ / /_/ /_ / \n/_____/ \\____//_/ /_//_/|_| \\___/_\\__, / \\____/ \\__,_/ /_/ \n /____/ \n\nusing donkey v4.2.1 ...\nloading config file: ./config.py\nloading personal config over-rides from myconfig.py\n\"get_model_by_type\" model Type is: linear\nCreated KerasLinear\n2021-07-26 21:05:34.259364: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1\n2021-07-26 21:05:34.278301: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:05:34.278898: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: \npciBusID: 0000:09:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5\ncoreClock: 1.635GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s\n2021-07-26 21:05:34.279098: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n2021-07-26 21:05:34.280320: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n2021-07-26 21:05:34.281822: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n2021-07-26 21:05:34.282037: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n2021-07-26 21:05:34.283140: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n2021-07-26 21:05:34.283726: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\n2021-07-26 21:05:34.285524: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n2021-07-26 21:05:34.285676: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:05:34.286176: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:05:34.286568: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0\n2021-07-26 21:05:34.286793: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA\n2021-07-26 21:05:34.290920: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 3592950000 Hz\n2021-07-26 21:05:34.291228: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x557d8a05bbb0 initialized for platform Host (this does not guarantee that XLA will be used). 
Devices:\n2021-07-26 21:05:34.291241: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version\n2021-07-26 21:05:34.291374: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:05:34.291795: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: \npciBusID: 0000:09:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5\ncoreClock: 1.635GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s\n2021-07-26 21:05:34.291830: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n2021-07-26 21:05:34.291842: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n2021-07-26 21:05:34.291852: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n2021-07-26 21:05:34.291862: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n2021-07-26 21:05:34.291872: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n2021-07-26 21:05:34.291881: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\n2021-07-26 21:05:34.291891: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n2021-07-26 21:05:34.291955: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:05:34.292398: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:05:34.292782: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0\n2021-07-26 21:05:34.292805: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n2021-07-26 21:05:34.366898: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:\n2021-07-26 21:05:34.366930: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108] 0 \n2021-07-26 21:05:34.366937: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0: N \n2021-07-26 21:05:34.367194: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:05:34.367855: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:05:34.368446: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:05:34.368971: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9911 MB memory) -> 
physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:09:00.0, compute capability: 7.5)\n2021-07-26 21:05:34.370680: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x557d8bec8fa0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:\n2021-07-26 21:05:34.370693: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): NVIDIA GeForce RTX 2080 Ti, Compute Capability 7.5\nModel: \"model\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\nimg_in (InputLayer) [(None, 224, 224, 3) 0 \n__________________________________________________________________________________________________\nconv2d_1 (Conv2D) (None, 110, 110, 24) 1824 img_in[0][0] \n__________________________________________________________________________________________________\ndropout (Dropout) (None, 110, 110, 24) 0 conv2d_1[0][0] \n__________________________________________________________________________________________________\nconv2d_2 (Conv2D) (None, 53, 53, 32) 19232 dropout[0][0] \n__________________________________________________________________________________________________\ndropout_1 (Dropout) (None, 53, 53, 32) 0 conv2d_2[0][0] \n__________________________________________________________________________________________________\nconv2d_3 (Conv2D) (None, 25, 25, 64) 51264 dropout_1[0][0] \n__________________________________________________________________________________________________\ndropout_2 (Dropout) (None, 25, 25, 64) 0 conv2d_3[0][0] \n__________________________________________________________________________________________________\nconv2d_4 (Conv2D) (None, 23, 23, 64) 36928 dropout_2[0][0] \n__________________________________________________________________________________________________\ndropout_3 (Dropout) (None, 23, 23, 64) 0 conv2d_4[0][0] \n__________________________________________________________________________________________________\nconv2d_5 (Conv2D) (None, 21, 21, 64) 36928 dropout_3[0][0] \n__________________________________________________________________________________________________\ndropout_4 (Dropout) (None, 21, 21, 64) 0 conv2d_5[0][0] \n__________________________________________________________________________________________________\nflattened (Flatten) (None, 28224) 0 dropout_4[0][0] \n__________________________________________________________________________________________________\ndense_1 (Dense) (None, 100) 2822500 flattened[0][0] \n__________________________________________________________________________________________________\ndropout_5 (Dropout) (None, 100) 0 dense_1[0][0] \n__________________________________________________________________________________________________\ndense_2 (Dense) (None, 50) 5050 dropout_5[0][0] \n__________________________________________________________________________________________________\ndropout_6 (Dropout) (None, 50) 0 dense_2[0][0] \n__________________________________________________________________________________________________\nn_outputs0 (Dense) (None, 1) 51 dropout_6[0][0] \n__________________________________________________________________________________________________\nn_outputs1 (Dense) (None, 1) 51 dropout_6[0][0] \n==================================================================================================\nTotal params: 
2,973,828\nTrainable params: 2,973,828\nNon-trainable params: 0\n__________________________________________________________________________________________________\nNone\nUsing catalog /home/arl/mycar/data/dans-basement/catalog_7.catalog\n\nRecords # Training 3364\nRecords # Validation 842\nEpoch 1/100\n2021-07-26 21:05:35.291438: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n2021-07-26 21:05:35.613762: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n2021-07-26 21:05:36.322576: W tensorflow/stream_executor/gpu/asm_compiler.cc:116] *** WARNING *** You are using ptxas 9.1.108, which is older than 9.2.88. ptxas 9.x before 9.2.88 is known to miscompile XLA code, leading to incorrect results or invalid-address errors.\n\nYou do not need to update to CUDA 9.2.88; cherry-picking the ptxas binary is sufficient.\n2021-07-26 21:05:36.376195: W tensorflow/stream_executor/gpu/redzone_allocator.cc:314] Internal: ptxas exited with non-zero error code 65280, output: ptxas fatal : Value 'sm_75' is not defined for option 'gpu-name'\n\nRelying on driver to perform ptx compilation. \nModify $PATH to customize ptxas location.\nThis message will be only logged once.\n27/27 [==============================] - ETA: 0s - loss: 0.2495 - n_outputs0_loss: 0.1717 - n_outputs1_loss: 0.0778\nEpoch 00001: val_loss improved from inf to 0.14744, saving model to ./models/dans-basement.h5\n27/27 [==============================] - 8s 301ms/step - loss: 0.2495 - n_outputs0_loss: 0.1717 - n_outputs1_loss: 0.0778 - val_loss: 0.1474 - val_n_outputs0_loss: 0.1291 - val_n_outputs1_loss: 0.0183\nEpoch 2/100\n27/27 [==============================] - ETA: 0s - loss: 0.1487 - n_outputs0_loss: 0.1265 - n_outputs1_loss: 0.0223\nEpoch 00002: val_loss improved from 0.14744 to 0.09815, saving model to ./models/dans-basement.h5\n27/27 [==============================] - 3s 120ms/step - loss: 0.1487 - n_outputs0_loss: 0.1265 - n_outputs1_loss: 0.0223 - val_loss: 0.0981 - val_n_outputs0_loss: 0.0777 - val_n_outputs1_loss: 0.0205\nEpoch 3/100\n27/27 [==============================] - ETA: 0s - loss: 0.1075 - n_outputs0_loss: 0.0893 - n_outputs1_loss: 0.0182\nEpoch 00003: val_loss improved from 0.09815 to 0.07897, saving model to ./models/dans-basement.h5\n27/27 [==============================] - 3s 117ms/step - loss: 0.1075 - n_outputs0_loss: 0.0893 - n_outputs1_loss: 0.0182 - val_loss: 0.0790 - val_n_outputs0_loss: 0.0687 - val_n_outputs1_loss: 0.0102\nEpoch 4/100\n27/27 [==============================] - ETA: 0s - loss: 0.0917 - n_outputs0_loss: 0.0759 - n_outputs1_loss: 0.0158\nEpoch 00004: val_loss improved from 0.07897 to 0.07055, saving model to ./models/dans-basement.h5\n27/27 [==============================] - 3s 110ms/step - loss: 0.0917 - n_outputs0_loss: 0.0759 - n_outputs1_loss: 0.0158 - val_loss: 0.0705 - val_n_outputs0_loss: 0.0610 - val_n_outputs1_loss: 0.0096\nEpoch 5/100\n27/27 [==============================] - ETA: 0s - loss: 0.0880 - n_outputs0_loss: 0.0734 - n_outputs1_loss: 0.0146\nEpoch 00005: val_loss did not improve from 0.07055\n27/27 [==============================] - 3s 105ms/step - loss: 0.0880 - n_outputs0_loss: 0.0734 - n_outputs1_loss: 0.0146 - val_loss: 0.0751 - val_n_outputs0_loss: 0.0553 - val_n_outputs1_loss: 0.0198\nEpoch 6/100\n27/27 [==============================] - ETA: 0s - loss: 0.0757 - n_outputs0_loss: 0.0629 - n_outputs1_loss: 0.0127\nEpoch 00006: val_loss improved 
from 0.07055 to 0.05840, saving model to ./models/dans-basement.h5\n27/27 [==============================] - 3s 111ms/step - loss: 0.0757 - n_outputs0_loss: 0.0629 - n_outputs1_loss: 0.0127 - val_loss: 0.0584 - val_n_outputs0_loss: 0.0485 - val_n_outputs1_loss: 0.0099\nEpoch 7/100\n27/27 [==============================] - ETA: 0s - loss: 0.0672 - n_outputs0_loss: 0.0551 - n_outputs1_loss: 0.0120\nEpoch 00007: val_loss improved from 0.05840 to 0.05028, saving model to ./models/dans-basement.h5\n27/27 [==============================] - 3s 110ms/step - loss: 0.0672 - n_outputs0_loss: 0.0551 - n_outputs1_loss: 0.0120 - val_loss: 0.0503 - val_n_outputs0_loss: 0.0450 - val_n_outputs1_loss: 0.0053\nEpoch 8/100\n27/27 [==============================] - ETA: 0s - loss: 0.0621 - n_outputs0_loss: 0.0510 - n_outputs1_loss: 0.0111\nEpoch 00008: val_loss improved from 0.05028 to 0.04540, saving model to ./models/dans-basement.h5\n27/27 [==============================] - 3s 110ms/step - loss: 0.0621 - n_outputs0_loss: 0.0510 - n_outputs1_loss: 0.0111 - val_loss: 0.0454 - val_n_outputs0_loss: 0.0385 - val_n_outputs1_loss: 0.0069\nEpoch 9/100\n27/27 [==============================] - ETA: 0s - loss: 0.0545 - n_outputs0_loss: 0.0441 - n_outputs1_loss: 0.0104\nEpoch 00009: val_loss improved from 0.04540 to 0.04351, saving model to ./models/dans-basement.h5\n27/27 [==============================] - 3s 107ms/step - loss: 0.0545 - n_outputs0_loss: 0.0441 - n_outputs1_loss: 0.0104 - val_loss: 0.0435 - val_n_outputs0_loss: 0.0358 - val_n_outputs1_loss: 0.0077\nEpoch 10/100\n27/27 [==============================] - ETA: 0s - loss: 0.0558 - n_outputs0_loss: 0.0458 - n_outputs1_loss: 0.0099\nEpoch 00010: val_loss improved from 0.04351 to 0.04070, saving model to ./models/dans-basement.h5\n27/27 [==============================] - 3s 110ms/step - loss: 0.0558 - n_outputs0_loss: 0.0458 - n_outputs1_loss: 0.0099 - val_loss: 0.0407 - val_n_outputs0_loss: 0.0357 - val_n_outputs1_loss: 0.0050\nEpoch 11/100\n27/27 [==============================] - ETA: 0s - loss: 0.0505 - n_outputs0_loss: 0.0415 - n_outputs1_loss: 0.0090\nEpoch 00011: val_loss improved from 0.04070 to 0.03935, saving model to ./models/dans-basement.h5\n27/27 [==============================] - 3s 109ms/step - loss: 0.0505 - n_outputs0_loss: 0.0415 - n_outputs1_loss: 0.0090 - val_loss: 0.0393 - val_n_outputs0_loss: 0.0340 - val_n_outputs1_loss: 0.0054\nEpoch 12/100\n27/27 [==============================] - ETA: 0s - loss: 0.0476 - n_outputs0_loss: 0.0388 - n_outputs1_loss: 0.0088\nEpoch 00012: val_loss improved from 0.03935 to 0.03624, saving model to ./models/dans-basement.h5\n27/27 [==============================] - 3s 110ms/step - loss: 0.0476 - n_outputs0_loss: 0.0388 - n_outputs1_loss: 0.0088 - val_loss: 0.0362 - val_n_outputs0_loss: 0.0298 - val_n_outputs1_loss: 0.0065\nEpoch 13/100\n27/27 [==============================] - ETA: 0s - loss: 0.0453 - n_outputs0_loss: 0.0373 - n_outputs1_loss: 0.0080\nEpoch 00013: val_loss improved from 0.03624 to 0.03507, saving model to ./models/dans-basement.h5\n27/27 [==============================] - 3s 108ms/step - loss: 0.0453 - n_outputs0_loss: 0.0373 - n_outputs1_loss: 0.0080 - val_loss: 0.0351 - val_n_outputs0_loss: 0.0294 - val_n_outputs1_loss: 0.0057\nEpoch 14/100\n27/27 [==============================] - ETA: 0s - loss: 0.0430 - n_outputs0_loss: 0.0352 - n_outputs1_loss: 0.0079\nEpoch 00014: val_loss improved from 0.03507 to 0.03211, saving model to ./models/dans-basement.h5\n27/27 
[==============================] - 3s 111ms/step - loss: 0.0430 - n_outputs0_loss: 0.0352 - n_outputs1_loss: 0.0079 - val_loss: 0.0321 - val_n_outputs0_loss: 0.0265 - val_n_outputs1_loss: 0.0056\nEpoch 15/100\n27/27 [==============================] - ETA: 0s - loss: 0.0397 - n_outputs0_loss: 0.0327 - n_outputs1_loss: 0.0070\nEpoch 00015: val_loss improved from 0.03211 to 0.03208, saving model to ./models/dans-basement.h5\n27/27 [==============================] - 3s 110ms/step - loss: 0.0397 - n_outputs0_loss: 0.0327 - n_outputs1_loss: 0.0070 - val_loss: 0.0321 - val_n_outputs0_loss: 0.0279 - val_n_outputs1_loss: 0.0042\nEpoch 16/100\n27/27 [==============================] - ETA: 0s - loss: 0.0382 - n_outputs0_loss: 0.0316 - n_outputs1_loss: 0.0065\nEpoch 00016: val_loss improved from 0.03208 to 0.02880, saving model to ./models/dans-basement.h5\n27/27 [==============================] - 3s 108ms/step - loss: 0.0382 - n_outputs0_loss: 0.0316 - n_outputs1_loss: 0.0065 - val_loss: 0.0288 - val_n_outputs0_loss: 0.0243 - val_n_outputs1_loss: 0.0046\nEpoch 17/100\n27/27 [==============================] - ETA: 0s - loss: 0.0381 - n_outputs0_loss: 0.0313 - n_outputs1_loss: 0.0069\nEpoch 00017: val_loss did not improve from 0.02880\n27/27 [==============================] - 3s 104ms/step - loss: 0.0381 - n_outputs0_loss: 0.0313 - n_outputs1_loss: 0.0069 - val_loss: 0.0322 - val_n_outputs0_loss: 0.0281 - val_n_outputs1_loss: 0.0041\nEpoch 18/100\n27/27 [==============================] - ETA: 0s - loss: 0.0375 - n_outputs0_loss: 0.0310 - n_outputs1_loss: 0.0065\nEpoch 00018: val_loss did not improve from 0.02880\n27/27 [==============================] - 3s 107ms/step - loss: 0.0375 - n_outputs0_loss: 0.0310 - n_outputs1_loss: 0.0065 - val_loss: 0.0293 - val_n_outputs0_loss: 0.0257 - val_n_outputs1_loss: 0.0036\nEpoch 19/100\n27/27 [==============================] - ETA: 0s - loss: 0.0372 - n_outputs0_loss: 0.0308 - n_outputs1_loss: 0.0064\nEpoch 00019: val_loss did not improve from 0.02880\n27/27 [==============================] - 3s 108ms/step - loss: 0.0372 - n_outputs0_loss: 0.0308 - n_outputs1_loss: 0.0064 - val_loss: 0.0307 - val_n_outputs0_loss: 0.0275 - val_n_outputs1_loss: 0.0032\nEpoch 20/100\n27/27 [==============================] - ETA: 0s - loss: 0.0347 - n_outputs0_loss: 0.0285 - n_outputs1_loss: 0.0062\nEpoch 00020: val_loss did not improve from 0.02880\n27/27 [==============================] - 3s 104ms/step - loss: 0.0347 - n_outputs0_loss: 0.0285 - n_outputs1_loss: 0.0062 - val_loss: 0.0325 - val_n_outputs0_loss: 0.0283 - val_n_outputs1_loss: 0.0042\nEpoch 21/100\n27/27 [==============================] - ETA: 0s - loss: 0.0349 - n_outputs0_loss: 0.0290 - n_outputs1_loss: 0.0058\nEpoch 00021: val_loss did not improve from 0.02880\n27/27 [==============================] - 3s 107ms/step - loss: 0.0349 - n_outputs0_loss: 0.0290 - n_outputs1_loss: 0.0058 - val_loss: 0.0293 - val_n_outputs0_loss: 0.0258 - val_n_outputs1_loss: 0.0035\nWARNING: CPU random generator seem to be failing, disable hardware random number generation\nWARNING: RDRND generated: 0xffffffff 0xffffffff 0xffffffff 0xffffffff\n\nreal 1m26.930s\nuser 1m30.911s\nsys 0m42.818s\n
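The GPU log above shows the validation loss falling from about 0.147 to 0.029 before training stops at epoch 21, once val_loss has not improved for several epochs. To compare runs (for example this GPU run against the CPU run below), a short script can pull the per-epoch val_loss values out of a saved copy of the log. This is a minimal sketch of my own, not part of the DonkeyCar tooling, and the log file name is a placeholder.

```python
# Sketch: extract the end-of-epoch validation loss from a saved Keras/DonkeyCar
# training log like the one above. The file name is hypothetical.
import re

# Matches the "- val_loss: 0.1474" field on each epoch's summary line.
VAL_LOSS = re.compile(r"- val_loss: ([0-9.]+)")

def val_loss_curve(log_path: str) -> list[float]:
    """Return the validation loss reported at the end of each epoch."""
    with open(log_path) as f:
        return [float(m.group(1)) for m in VAL_LOSS.finditer(f.read())]

if __name__ == "__main__":
    for epoch, loss in enumerate(val_loss_curve("dans-basement-train.log"), start=1):
        print(f"epoch {epoch:3d}  val_loss {loss:.4f}")
```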
"},{"location":"training-logs/msp-1-cpu/","title":"Training Log for MSP Car #1","text":"After cleanup we only got about 1,500 records. But here is a log of the training. It took about 1.5 minutes.
$ donkey train --tub=./data/msp-car-1 --model=./models/msp-car-1.f5\n
[DonkeyCar ASCII art banner] using donkey v4.2.1 ... loading config file: ./config.py loading personal config over-rides from myconfig.py \"get_model_by_type\" model Type is: linear Created KerasLinear 2021-07-26 19:50:45.562205: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA 2021-07-26 19:50:45.565106: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 3592950000 Hz 2021-07-26 19:50:45.565470: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55d85e9d19f0 initialized for platform Host (this does not guarantee that XLA will be used). Devices: 2021-07-26 19:50:45.565492: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version 2021-07-26 19:50:45.565578: I tensorflow/core/common_runtime/process_util.cc:147] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance. Model: \"model\"
"},{"location":"training-logs/msp-1-cpu/#layer-type-output-shape-param-connected-to","title":"Layer (type) Output Shape Param # Connected to","text":"img_in (InputLayer) [(None, 224, 224, 3) 0
conv2d_1 (Conv2D) (None, 110, 110, 24) 1824 img_in[0][0]
dropout (Dropout) (None, 110, 110, 24) 0 conv2d_1[0][0]
conv2d_2 (Conv2D) (None, 53, 53, 32) 19232 dropout[0][0]
dropout_1 (Dropout) (None, 53, 53, 32) 0 conv2d_2[0][0]
conv2d_3 (Conv2D) (None, 25, 25, 64) 51264 dropout_1[0][0]
dropout_2 (Dropout) (None, 25, 25, 64) 0 conv2d_3[0][0]
conv2d_4 (Conv2D) (None, 23, 23, 64) 36928 dropout_2[0][0]
dropout_3 (Dropout) (None, 23, 23, 64) 0 conv2d_4[0][0]
conv2d_5 (Conv2D) (None, 21, 21, 64) 36928 dropout_3[0][0]
dropout_4 (Dropout) (None, 21, 21, 64) 0 conv2d_5[0][0]
flattened (Flatten) (None, 28224) 0 dropout_4[0][0]
dense_1 (Dense) (None, 100) 2822500 flattened[0][0]
dropout_5 (Dropout) (None, 100) 0 dense_1[0][0]
dense_2 (Dense) (None, 50) 5050 dropout_5[0][0]
dropout_6 (Dropout) (None, 50) 0 dense_2[0][0]
n_outputs0 (Dense) (None, 1) 51 dropout_6[0][0]
"},{"location":"training-logs/msp-1-cpu/#n_outputs1-dense-none-1-51-dropout_600","title":"n_outputs1 (Dense) (None, 1) 51 dropout_6[0][0]","text":"Total params: 2,973,828 Trainable params: 2,973,828 Non-trainable params: 0
None Using catalog /home/arl/mycar/data/msp-car-1/catalog_17.catalog
Records # Training 1265 Records # Validation 317 Epoch 1/100 10/10 [==============================] - ETA: 0s - loss: 1.0885 - n_outputs0_loss: 0.5975 - n_outputs1_loss: 0.4909 Epoch 00001: val_loss improved from inf to 0.54341, saving model to ./models/msp-car-1.f5 2021-07-26 19:50:57.881390: W tensorflow/python/util/util.cc:329] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them. WARNING:tensorflow:From /home/arl/miniconda3/envs/donkey/lib/python3.7/site-packages/tensorflow/python/ops/resource_variable_ops.py:1817: calling BaseResourceVariable.init (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version. Instructions for updating: If using Keras pass *_constraint arguments to layers. 10/10 [==============================] - 11s 1s/step - loss: 1.0885 - n_outputs0_loss: 0.5975 - n_outputs1_loss: 0.4909 - val_loss: 0.5434 - val_n_outputs0_loss: 0.4668 - val_n_outputs1_loss: 0.0767 Epoch 2/100 10/10 [==============================] - ETA: 0s - loss: 0.5522 - n_outputs0_loss: 0.4640 - n_outputs1_loss: 0.0882 Epoch 00002: val_loss improved from 0.54341 to 0.53272, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 999ms/step - loss: 0.5522 - n_outputs0_loss: 0.4640 - n_outputs1_loss: 0.0882 - val_loss: 0.5327 - val_n_outputs0_loss: 0.4605 - val_n_outputs1_loss: 0.0722 Epoch 3/100 10/10 [==============================] - ETA: 0s - loss: 0.5392 - n_outputs0_loss: 0.4638 - n_outputs1_loss: 0.0754 Epoch 00003: val_loss improved from 0.53272 to 0.50775, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.5392 - n_outputs0_loss: 0.4638 - n_outputs1_loss: 0.0754 - val_loss: 0.5077 - val_n_outputs0_loss: 0.4551 - val_n_outputs1_loss: 0.0527 Epoch 4/100 10/10 [==============================] - ETA: 0s - loss: 0.5318 - n_outputs0_loss: 0.4605 - n_outputs1_loss: 0.0713 Epoch 00004: val_loss improved from 0.50775 to 0.49783, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 999ms/step - loss: 0.5318 - n_outputs0_loss: 0.4605 - n_outputs1_loss: 0.0713 - val_loss: 0.4978 - val_n_outputs0_loss: 0.4455 - val_n_outputs1_loss: 0.0523 Epoch 5/100 10/10 [==============================] - ETA: 0s - loss: 0.5333 - n_outputs0_loss: 0.4608 - n_outputs1_loss: 0.0725 Epoch 00005: val_loss improved from 0.49783 to 0.49721, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.5333 - n_outputs0_loss: 0.4608 - n_outputs1_loss: 0.0725 - val_loss: 0.4972 - val_n_outputs0_loss: 0.4451 - val_n_outputs1_loss: 0.0521 Epoch 6/100 10/10 [==============================] - ETA: 0s - loss: 0.5277 - n_outputs0_loss: 0.4619 - n_outputs1_loss: 0.0658 Epoch 00006: val_loss did not improve from 0.49721 10/10 [==============================] - 9s 934ms/step - loss: 0.5277 - n_outputs0_loss: 0.4619 - n_outputs1_loss: 0.0658 - val_loss: 0.4981 - val_n_outputs0_loss: 0.4461 - val_n_outputs1_loss: 0.0520 Epoch 7/100 10/10 [==============================] - ETA: 0s - loss: 0.5265 - n_outputs0_loss: 0.4577 - n_outputs1_loss: 0.0688 Epoch 00007: val_loss improved from 0.49721 to 0.49668, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.5265 - n_outputs0_loss: 0.4577 - n_outputs1_loss: 0.0688 - val_loss: 0.4967 - val_n_outputs0_loss: 0.4442 - val_n_outputs1_loss: 0.0525 Epoch 8/100 10/10 
[==============================] - ETA: 0s - loss: 0.5138 - n_outputs0_loss: 0.4467 - n_outputs1_loss: 0.0671 Epoch 00008: val_loss improved from 0.49668 to 0.49536, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.5138 - n_outputs0_loss: 0.4467 - n_outputs1_loss: 0.0671 - val_loss: 0.4954 - val_n_outputs0_loss: 0.4408 - val_n_outputs1_loss: 0.0546 Epoch 9/100 10/10 [==============================] - ETA: 0s - loss: 0.5109 - n_outputs0_loss: 0.4468 - n_outputs1_loss: 0.0642 Epoch 00009: val_loss improved from 0.49536 to 0.48741, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.5109 - n_outputs0_loss: 0.4468 - n_outputs1_loss: 0.0642 - val_loss: 0.4874 - val_n_outputs0_loss: 0.4353 - val_n_outputs1_loss: 0.0521 Epoch 10/100 10/10 [==============================] - ETA: 0s - loss: 0.5030 - n_outputs0_loss: 0.4405 - n_outputs1_loss: 0.0625 Epoch 00010: val_loss did not improve from 0.48741 10/10 [==============================] - 9s 930ms/step - loss: 0.5030 - n_outputs0_loss: 0.4405 - n_outputs1_loss: 0.0625 - val_loss: 0.4936 - val_n_outputs0_loss: 0.4351 - val_n_outputs1_loss: 0.0585 Epoch 11/100 10/10 [==============================] - ETA: 0s - loss: 0.4974 - n_outputs0_loss: 0.4310 - n_outputs1_loss: 0.0664 Epoch 00011: val_loss improved from 0.48741 to 0.47748, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 11s 1s/step - loss: 0.4974 - n_outputs0_loss: 0.4310 - n_outputs1_loss: 0.0664 - val_loss: 0.4775 - val_n_outputs0_loss: 0.4238 - val_n_outputs1_loss: 0.0536 Epoch 12/100 10/10 [==============================] - ETA: 0s - loss: 0.4887 - n_outputs0_loss: 0.4208 - n_outputs1_loss: 0.0679 Epoch 00012: val_loss did not improve from 0.47748 10/10 [==============================] - 9s 925ms/step - loss: 0.4887 - n_outputs0_loss: 0.4208 - n_outputs1_loss: 0.0679 - val_loss: 0.4836 - val_n_outputs0_loss: 0.4148 - val_n_outputs1_loss: 0.0687 Epoch 13/100 10/10 [==============================] - ETA: 0s - loss: 0.4591 - n_outputs0_loss: 0.3927 - n_outputs1_loss: 0.0664 Epoch 00013: val_loss improved from 0.47748 to 0.40567, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.4591 - n_outputs0_loss: 0.3927 - n_outputs1_loss: 0.0664 - val_loss: 0.4057 - val_n_outputs0_loss: 0.3540 - val_n_outputs1_loss: 0.0516 Epoch 14/100 10/10 [==============================] - ETA: 0s - loss: 0.4323 - n_outputs0_loss: 0.3665 - n_outputs1_loss: 0.0658 Epoch 00014: val_loss improved from 0.40567 to 0.37099, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.4323 - n_outputs0_loss: 0.3665 - n_outputs1_loss: 0.0658 - val_loss: 0.3710 - val_n_outputs0_loss: 0.3153 - val_n_outputs1_loss: 0.0556 Epoch 15/100 10/10 [==============================] - ETA: 0s - loss: 0.3754 - n_outputs0_loss: 0.3063 - n_outputs1_loss: 0.0691 Epoch 00015: val_loss improved from 0.37099 to 0.33956, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.3754 - n_outputs0_loss: 0.3063 - n_outputs1_loss: 0.0691 - val_loss: 0.3396 - val_n_outputs0_loss: 0.2853 - val_n_outputs1_loss: 0.0542 Epoch 16/100 10/10 [==============================] - ETA: 0s - loss: 0.3314 - n_outputs0_loss: 0.2723 - n_outputs1_loss: 0.0591 Epoch 00016: val_loss improved from 0.33956 to 0.30289, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - 
loss: 0.3314 - n_outputs0_loss: 0.2723 - n_outputs1_loss: 0.0591 - val_loss: 0.3029 - val_n_outputs0_loss: 0.2524 - val_n_outputs1_loss: 0.0505 Epoch 17/100 10/10 [==============================] - ETA: 0s - loss: 0.3168 - n_outputs0_loss: 0.2591 - n_outputs1_loss: 0.0576 Epoch 00017: val_loss improved from 0.30289 to 0.28694, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.3168 - n_outputs0_loss: 0.2591 - n_outputs1_loss: 0.0576 - val_loss: 0.2869 - val_n_outputs0_loss: 0.2390 - val_n_outputs1_loss: 0.0479 Epoch 18/100 10/10 [==============================] - ETA: 0s - loss: 0.2990 - n_outputs0_loss: 0.2446 - n_outputs1_loss: 0.0544 Epoch 00018: val_loss improved from 0.28694 to 0.27270, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.2990 - n_outputs0_loss: 0.2446 - n_outputs1_loss: 0.0544 - val_loss: 0.2727 - val_n_outputs0_loss: 0.2257 - val_n_outputs1_loss: 0.0470 Epoch 19/100 10/10 [==============================] - ETA: 0s - loss: 0.2706 - n_outputs0_loss: 0.2185 - n_outputs1_loss: 0.0521 Epoch 00019: val_loss improved from 0.27270 to 0.25193, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.2706 - n_outputs0_loss: 0.2185 - n_outputs1_loss: 0.0521 - val_loss: 0.2519 - val_n_outputs0_loss: 0.2099 - val_n_outputs1_loss: 0.0421 Epoch 20/100 10/10 [==============================] - ETA: 0s - loss: 0.2602 - n_outputs0_loss: 0.2112 - n_outputs1_loss: 0.0490 Epoch 00020: val_loss improved from 0.25193 to 0.23899, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.2602 - n_outputs0_loss: 0.2112 - n_outputs1_loss: 0.0490 - val_loss: 0.2390 - val_n_outputs0_loss: 0.1974 - val_n_outputs1_loss: 0.0416 Epoch 21/100 10/10 [==============================] - ETA: 0s - loss: 0.2345 - n_outputs0_loss: 0.1866 - n_outputs1_loss: 0.0479 Epoch 00021: val_loss improved from 0.23899 to 0.23396, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.2345 - n_outputs0_loss: 0.1866 - n_outputs1_loss: 0.0479 - val_loss: 0.2340 - val_n_outputs0_loss: 0.1911 - val_n_outputs1_loss: 0.0428 Epoch 22/100 10/10 [==============================] - ETA: 0s - loss: 0.2229 - n_outputs0_loss: 0.1758 - n_outputs1_loss: 0.0471 Epoch 00022: val_loss improved from 0.23396 to 0.22651, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.2229 - n_outputs0_loss: 0.1758 - n_outputs1_loss: 0.0471 - val_loss: 0.2265 - val_n_outputs0_loss: 0.1858 - val_n_outputs1_loss: 0.0407 Epoch 23/100 10/10 [==============================] - ETA: 0s - loss: 0.2175 - n_outputs0_loss: 0.1730 - n_outputs1_loss: 0.0445 Epoch 00023: val_loss improved from 0.22651 to 0.22245, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.2175 - n_outputs0_loss: 0.1730 - n_outputs1_loss: 0.0445 - val_loss: 0.2225 - val_n_outputs0_loss: 0.1806 - val_n_outputs1_loss: 0.0419 Epoch 24/100 10/10 [==============================] - ETA: 0s - loss: 0.2084 - n_outputs0_loss: 0.1624 - n_outputs1_loss: 0.0460 Epoch 00024: val_loss improved from 0.22245 to 0.20674, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.2084 - n_outputs0_loss: 0.1624 - n_outputs1_loss: 0.0460 - val_loss: 0.2067 - val_n_outputs0_loss: 0.1694 - val_n_outputs1_loss: 0.0374 Epoch 25/100 10/10 
[==============================] - ETA: 0s - loss: 0.1889 - n_outputs0_loss: 0.1457 - n_outputs1_loss: 0.0432 Epoch 00025: val_loss improved from 0.20674 to 0.20416, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.1889 - n_outputs0_loss: 0.1457 - n_outputs1_loss: 0.0432 - val_loss: 0.2042 - val_n_outputs0_loss: 0.1638 - val_n_outputs1_loss: 0.0403 Epoch 26/100 10/10 [==============================] - ETA: 0s - loss: 0.1882 - n_outputs0_loss: 0.1467 - n_outputs1_loss: 0.0414 Epoch 00026: val_loss improved from 0.20416 to 0.19422, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.1882 - n_outputs0_loss: 0.1467 - n_outputs1_loss: 0.0414 - val_loss: 0.1942 - val_n_outputs0_loss: 0.1557 - val_n_outputs1_loss: 0.0385 Epoch 27/100 10/10 [==============================] - ETA: 0s - loss: 0.1706 - n_outputs0_loss: 0.1328 - n_outputs1_loss: 0.0378 Epoch 00027: val_loss did not improve from 0.19422 10/10 [==============================] - 9s 930ms/step - loss: 0.1706 - n_outputs0_loss: 0.1328 - n_outputs1_loss: 0.0378 - val_loss: 0.2016 - val_n_outputs0_loss: 0.1615 - val_n_outputs1_loss: 0.0401 Epoch 28/100 10/10 [==============================] - ETA: 0s - loss: 0.1630 - n_outputs0_loss: 0.1248 - n_outputs1_loss: 0.0382 Epoch 00028: val_loss improved from 0.19422 to 0.18035, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.1630 - n_outputs0_loss: 0.1248 - n_outputs1_loss: 0.0382 - val_loss: 0.1803 - val_n_outputs0_loss: 0.1445 - val_n_outputs1_loss: 0.0358 Epoch 29/100 10/10 [==============================] - ETA: 0s - loss: 0.1601 - n_outputs0_loss: 0.1219 - n_outputs1_loss: 0.0382 Epoch 00029: val_loss improved from 0.18035 to 0.17528, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.1601 - n_outputs0_loss: 0.1219 - n_outputs1_loss: 0.0382 - val_loss: 0.1753 - val_n_outputs0_loss: 0.1410 - val_n_outputs1_loss: 0.0343 Epoch 30/100 10/10 [==============================] - ETA: 0s - loss: 0.1483 - n_outputs0_loss: 0.1117 - n_outputs1_loss: 0.0366 Epoch 00030: val_loss improved from 0.17528 to 0.17039, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.1483 - n_outputs0_loss: 0.1117 - n_outputs1_loss: 0.0366 - val_loss: 0.1704 - val_n_outputs0_loss: 0.1372 - val_n_outputs1_loss: 0.0332 Epoch 31/100 10/10 [==============================] - ETA: 0s - loss: 0.1481 - n_outputs0_loss: 0.1114 - n_outputs1_loss: 0.0368 Epoch 00031: val_loss did not improve from 0.17039 10/10 [==============================] - 9s 915ms/step - loss: 0.1481 - n_outputs0_loss: 0.1114 - n_outputs1_loss: 0.0368 - val_loss: 0.1783 - val_n_outputs0_loss: 0.1436 - val_n_outputs1_loss: 0.0347 Epoch 32/100 10/10 [==============================] - ETA: 0s - loss: 0.1470 - n_outputs0_loss: 0.1111 - n_outputs1_loss: 0.0358 Epoch 00032: val_loss improved from 0.17039 to 0.16278, saving model to ./models/msp-car-1.f5 10/10 [==============================] - 10s 1s/step - loss: 0.1470 - n_outputs0_loss: 0.1111 - n_outputs1_loss: 0.0358 - val_loss: 0.1628 - val_n_outputs0_loss: 0.1301 - val_n_outputs1_loss: 0.0327 Epoch 33/100 10/10 [==============================] - ETA: 0s - loss: 0.1368 - n_outputs0_loss: 0.1027 - n_outputs1_loss: 0.0341 Epoch 00033: val_loss did not improve from 0.16278 10/10 [==============================] - 9s 928ms/step - loss: 0.1368 - n_outputs0_loss: 0.1027 - 
n_outputs1_loss: 0.0341 - val_loss: 0.1666 - val_n_outputs0_loss: 0.1345 - val_n_outputs1_loss: 0.0320 Epoch 34/100 10/10 [==============================] - ETA: 0s - loss: 0.1305 - n_outputs0_loss: 0.0971 - n_outputs1_loss: 0.0334 Epoch 00034: val_loss did not improve from 0.16278 10/10 [==============================] - 9s 929ms/step - loss: 0.1305 - n_outputs0_loss: 0.0971 - n_outputs1_loss: 0.0334 - val_loss: 0.1728 - val_n_outputs0_loss: 0.1413 - val_n_outputs1_loss: 0.0315 Epoch 35/100 10/10 [==============================] - ETA: 0s - loss: 0.1353 - n_outputs0_loss: 0.1027 - n_outputs1_loss: 0.0326 Epoch 00035: val_loss did not improve from 0.16278 10/10 [==============================] - 9s 933ms/step - loss: 0.1353 - n_outputs0_loss: 0.1027 - n_outputs1_loss: 0.0326 - val_loss: 0.1706 - val_n_outputs0_loss: 0.1391 - val_n_outputs1_loss: 0.0315 Epoch 36/100 10/10 [==============================] - ETA: 0s - loss: 0.1319 - n_outputs0_loss: 0.0989 - n_outputs1_loss: 0.0331 Epoch 00036: val_loss did not improve from 0.16278 10/10 [==============================] - 9s 936ms/step - loss: 0.1319 - n_outputs0_loss: 0.0989 - n_outputs1_loss: 0.0331 - val_loss: 0.1729 - val_n_outputs0_loss: 0.1401 - val_n_outputs1_loss: 0.0328 Epoch 37/100 10/10 [==============================] - ETA: 0s - loss: 0.1290 - n_outputs0_loss: 0.0952 - n_outputs1_loss: 0.0338 Epoch 00037: val_loss did not improve from 0.16278 10/10 [==============================] - 9s 929ms/step - loss: 0.1290 - n_outputs0_loss: 0.0952 - n_outputs1_loss: 0.0338 - val_loss: 0.1709 - val_n_outputs0_loss: 0.1381 - val_n_outputs1_loss: 0.0327 WARNING: CPU random generator seem to be failing, disable hardware random number generation WARNING: RDRND generated: 0xffffffff 0xffffffff 0xffffffff 0xffffffff (donkey) arl@arl1: ```
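Both logs print the same KerasLinear layer summary. For readers who want to see that architecture as code rather than a table, here is a rough Keras reconstruction inferred from the printed output shapes and parameter counts; the kernel sizes, strides and dropout rate are my deductions, not copied from the DonkeyCar source.

```python
# Rough reconstruction of the network whose summary appears in the logs above.
# Layer sizes are inferred from the printed shapes and parameter counts; treat
# this as an approximation of DonkeyCar's KerasLinear, not its actual source.
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Conv2D, Dense, Dropout, Flatten

def build_linear_model(input_shape=(224, 224, 3), drop=0.2):
    img_in = Input(shape=input_shape, name="img_in")
    x = img_in
    # Three 5x5 stride-2 convolutions, then two 3x3 stride-1 convolutions,
    # each followed by dropout -- this reproduces the printed output shapes.
    for filters, kernel, stride in [(24, 5, 2), (32, 5, 2), (64, 5, 2), (64, 3, 1), (64, 3, 1)]:
        x = Conv2D(filters, kernel, strides=stride, activation="relu")(x)
        x = Dropout(drop)(x)
    x = Flatten(name="flattened")(x)
    x = Dense(100, activation="relu")(x)
    x = Dropout(drop)(x)
    x = Dense(50, activation="relu")(x)
    x = Dropout(drop)(x)
    # Two scalar outputs: steering angle and throttle.
    outputs = [Dense(1, name="n_outputs0")(x), Dense(1, name="n_outputs1")(x)]
    return Model(inputs=img_in, outputs=outputs, name="model")

build_linear_model().summary()  # should report about 2,973,828 parameters
```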
"},{"location":"training-logs/msp-car-1/","title":"Minnesota STEM Partners Car 1 Training Log","text":"time donkey train --tub=./data/msp-car-1 --model=./models/msp-car-1.h5\n________ ______ _________ \n___ __ \\_______________ /___________ __ __ ____/_____ ________\n__ / / / __ \\_ __ \\_ //_/ _ \\_ / / / _ / _ __ `/_ ___/\n_ /_/ // /_/ / / / / ,< / __/ /_/ / / /___ / /_/ /_ / \n/_____/ \\____//_/ /_//_/|_| \\___/_\\__, / \\____/ \\__,_/ /_/ \n /____/ \n\nusing donkey v4.2.1 ...\nloading config file: ./config.py\nloading personal config over-rides from myconfig.py\n\"get_model_by_type\" model Type is: linear\nCreated KerasLinear\n2021-07-26 21:18:57.390998: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1\n2021-07-26 21:18:57.409838: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:18:57.410285: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: \npciBusID: 0000:09:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5\ncoreClock: 1.635GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s\n2021-07-26 21:18:57.410424: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n2021-07-26 21:18:57.411314: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n2021-07-26 21:18:57.412358: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n2021-07-26 21:18:57.412506: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n2021-07-26 21:18:57.413323: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n2021-07-26 21:18:57.413712: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\n2021-07-26 21:18:57.415437: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n2021-07-26 21:18:57.415619: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:18:57.416133: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:18:57.416523: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0\n2021-07-26 21:18:57.416750: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA\n2021-07-26 21:18:57.420820: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 3592950000 Hz\n2021-07-26 21:18:57.421125: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5629cbee7970 initialized for platform Host (this does not guarantee that XLA will be used). 
Devices:\n2021-07-26 21:18:57.421136: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version\n2021-07-26 21:18:57.421270: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:18:57.421679: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: \npciBusID: 0000:09:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5\ncoreClock: 1.635GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s\n2021-07-26 21:18:57.421712: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n2021-07-26 21:18:57.421724: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n2021-07-26 21:18:57.421735: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n2021-07-26 21:18:57.421746: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n2021-07-26 21:18:57.421756: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n2021-07-26 21:18:57.421766: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\n2021-07-26 21:18:57.421776: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n2021-07-26 21:18:57.421840: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:18:57.422285: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:18:57.422675: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0\n2021-07-26 21:18:57.422700: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n2021-07-26 21:18:57.504507: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:\n2021-07-26 21:18:57.504534: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108] 0 \n2021-07-26 21:18:57.504541: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0: N \n2021-07-26 21:18:57.504754: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:18:57.505207: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:18:57.505632: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 21:18:57.506019: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9892 MB memory) -> 
physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:09:00.0, compute capability: 7.5)\n2021-07-26 21:18:57.507379: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5629cdd66f30 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:\n2021-07-26 21:18:57.507389: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): NVIDIA GeForce RTX 2080 Ti, Compute Capability 7.5\nModel: \"model\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\nimg_in (InputLayer) [(None, 224, 224, 3) 0 \n__________________________________________________________________________________________________\nconv2d_1 (Conv2D) (None, 110, 110, 24) 1824 img_in[0][0] \n__________________________________________________________________________________________________\ndropout (Dropout) (None, 110, 110, 24) 0 conv2d_1[0][0] \n__________________________________________________________________________________________________\nconv2d_2 (Conv2D) (None, 53, 53, 32) 19232 dropout[0][0] \n__________________________________________________________________________________________________\ndropout_1 (Dropout) (None, 53, 53, 32) 0 conv2d_2[0][0] \n__________________________________________________________________________________________________\nconv2d_3 (Conv2D) (None, 25, 25, 64) 51264 dropout_1[0][0] \n__________________________________________________________________________________________________\ndropout_2 (Dropout) (None, 25, 25, 64) 0 conv2d_3[0][0] \n__________________________________________________________________________________________________\nconv2d_4 (Conv2D) (None, 23, 23, 64) 36928 dropout_2[0][0] \n__________________________________________________________________________________________________\ndropout_3 (Dropout) (None, 23, 23, 64) 0 conv2d_4[0][0] \n__________________________________________________________________________________________________\nconv2d_5 (Conv2D) (None, 21, 21, 64) 36928 dropout_3[0][0] \n__________________________________________________________________________________________________\ndropout_4 (Dropout) (None, 21, 21, 64) 0 conv2d_5[0][0] \n__________________________________________________________________________________________________\nflattened (Flatten) (None, 28224) 0 dropout_4[0][0] \n__________________________________________________________________________________________________\ndense_1 (Dense) (None, 100) 2822500 flattened[0][0] \n__________________________________________________________________________________________________\ndropout_5 (Dropout) (None, 100) 0 dense_1[0][0] \n__________________________________________________________________________________________________\ndense_2 (Dense) (None, 50) 5050 dropout_5[0][0] \n__________________________________________________________________________________________________\ndropout_6 (Dropout) (None, 50) 0 dense_2[0][0] \n__________________________________________________________________________________________________\nn_outputs0 (Dense) (None, 1) 51 dropout_6[0][0] \n__________________________________________________________________________________________________\nn_outputs1 (Dense) (None, 1) 51 dropout_6[0][0] \n==================================================================================================\nTotal params: 
2,973,828\nTrainable params: 2,973,828\nNon-trainable params: 0\n__________________________________________________________________________________________________\nNone\nUsing catalog /home/arl/mycar/data/msp-car-1/catalog_17.catalog\n\nRecords # Training 1265\nRecords # Validation 317\nEpoch 1/100\n2021-07-26 21:18:58.397797: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n2021-07-26 21:18:58.705078: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n2021-07-26 21:18:59.429125: W tensorflow/stream_executor/gpu/asm_compiler.cc:116] *** WARNING *** You are using ptxas 9.1.108, which is older than 9.2.88. ptxas 9.x before 9.2.88 is known to miscompile XLA code, leading to incorrect results or invalid-address errors.\n\nYou do not need to update to CUDA 9.2.88; cherry-picking the ptxas binary is sufficient.\n2021-07-26 21:18:59.481809: W tensorflow/stream_executor/gpu/redzone_allocator.cc:314] Internal: ptxas exited with non-zero error code 65280, output: ptxas fatal : Value 'sm_75' is not defined for option 'gpu-name'\n\nRelying on driver to perform ptx compilation. \nModify $PATH to customize ptxas location.\nThis message will be only logged once.\n10/10 [==============================] - ETA: 0s - loss: 0.6674 - n_outputs0_loss: 0.5162 - n_outputs1_loss: 0.1512\nEpoch 00001: val_loss improved from inf to 0.60297, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 3s 288ms/step - loss: 0.6674 - n_outputs0_loss: 0.5162 - n_outputs1_loss: 0.1512 - val_loss: 0.6030 - val_n_outputs0_loss: 0.4514 - val_n_outputs1_loss: 0.1516\nEpoch 2/100\n10/10 [==============================] - ETA: 0s - loss: 0.6050 - n_outputs0_loss: 0.5074 - n_outputs1_loss: 0.0976\nEpoch 00002: val_loss improved from 0.60297 to 0.51595, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 124ms/step - loss: 0.6050 - n_outputs0_loss: 0.5074 - n_outputs1_loss: 0.0976 - val_loss: 0.5160 - val_n_outputs0_loss: 0.4188 - val_n_outputs1_loss: 0.0972\nEpoch 3/100\n10/10 [==============================] - ETA: 0s - loss: 0.5707 - n_outputs0_loss: 0.4923 - n_outputs1_loss: 0.0784\nEpoch 00003: val_loss improved from 0.51595 to 0.50280, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 110ms/step - loss: 0.5707 - n_outputs0_loss: 0.4923 - n_outputs1_loss: 0.0784 - val_loss: 0.5028 - val_n_outputs0_loss: 0.4224 - val_n_outputs1_loss: 0.0804\nEpoch 4/100\n10/10 [==============================] - ETA: 0s - loss: 0.5615 - n_outputs0_loss: 0.4917 - n_outputs1_loss: 0.0698\nEpoch 00004: val_loss improved from 0.50280 to 0.49159, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 110ms/step - loss: 0.5615 - n_outputs0_loss: 0.4917 - n_outputs1_loss: 0.0698 - val_loss: 0.4916 - val_n_outputs0_loss: 0.4203 - val_n_outputs1_loss: 0.0713\nEpoch 5/100\n10/10 [==============================] - ETA: 0s - loss: 0.5541 - n_outputs0_loss: 0.4854 - n_outputs1_loss: 0.0687\nEpoch 00005: val_loss improved from 0.49159 to 0.48784, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 104ms/step - loss: 0.5541 - n_outputs0_loss: 0.4854 - n_outputs1_loss: 0.0687 - val_loss: 0.4878 - val_n_outputs0_loss: 0.4107 - val_n_outputs1_loss: 0.0772\nEpoch 6/100\n10/10 [==============================] - ETA: 0s - loss: 0.5527 - n_outputs0_loss: 0.4827 - n_outputs1_loss: 0.0701\nEpoch 
00006: val_loss improved from 0.48784 to 0.48521, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 109ms/step - loss: 0.5527 - n_outputs0_loss: 0.4827 - n_outputs1_loss: 0.0701 - val_loss: 0.4852 - val_n_outputs0_loss: 0.4127 - val_n_outputs1_loss: 0.0725\nEpoch 7/100\n10/10 [==============================] - ETA: 0s - loss: 0.5405 - n_outputs0_loss: 0.4764 - n_outputs1_loss: 0.0641\nEpoch 00007: val_loss improved from 0.48521 to 0.48270, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 110ms/step - loss: 0.5405 - n_outputs0_loss: 0.4764 - n_outputs1_loss: 0.0641 - val_loss: 0.4827 - val_n_outputs0_loss: 0.4097 - val_n_outputs1_loss: 0.0730\nEpoch 8/100\n10/10 [==============================] - ETA: 0s - loss: 0.5383 - n_outputs0_loss: 0.4724 - n_outputs1_loss: 0.0659\nEpoch 00008: val_loss improved from 0.48270 to 0.47415, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 109ms/step - loss: 0.5383 - n_outputs0_loss: 0.4724 - n_outputs1_loss: 0.0659 - val_loss: 0.4741 - val_n_outputs0_loss: 0.4026 - val_n_outputs1_loss: 0.0715\nEpoch 9/100\n10/10 [==============================] - ETA: 0s - loss: 0.5288 - n_outputs0_loss: 0.4640 - n_outputs1_loss: 0.0648\nEpoch 00009: val_loss did not improve from 0.47415\n10/10 [==============================] - 1s 101ms/step - loss: 0.5288 - n_outputs0_loss: 0.4640 - n_outputs1_loss: 0.0648 - val_loss: 0.4780 - val_n_outputs0_loss: 0.4069 - val_n_outputs1_loss: 0.0711\nEpoch 10/100\n10/10 [==============================] - ETA: 0s - loss: 0.5344 - n_outputs0_loss: 0.4677 - n_outputs1_loss: 0.0667\nEpoch 00010: val_loss improved from 0.47415 to 0.45939, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 111ms/step - loss: 0.5344 - n_outputs0_loss: 0.4677 - n_outputs1_loss: 0.0667 - val_loss: 0.4594 - val_n_outputs0_loss: 0.3903 - val_n_outputs1_loss: 0.0691\nEpoch 11/100\n10/10 [==============================] - ETA: 0s - loss: 0.5014 - n_outputs0_loss: 0.4349 - n_outputs1_loss: 0.0666\nEpoch 00011: val_loss improved from 0.45939 to 0.44304, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 112ms/step - loss: 0.5014 - n_outputs0_loss: 0.4349 - n_outputs1_loss: 0.0666 - val_loss: 0.4430 - val_n_outputs0_loss: 0.3672 - val_n_outputs1_loss: 0.0758\nEpoch 12/100\n10/10 [==============================] - ETA: 0s - loss: 0.4585 - n_outputs0_loss: 0.3847 - n_outputs1_loss: 0.0738\nEpoch 00012: val_loss improved from 0.44304 to 0.36563, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 109ms/step - loss: 0.4585 - n_outputs0_loss: 0.3847 - n_outputs1_loss: 0.0738 - val_loss: 0.3656 - val_n_outputs0_loss: 0.2934 - val_n_outputs1_loss: 0.0723\nEpoch 13/100\n10/10 [==============================] - ETA: 0s - loss: 0.3922 - n_outputs0_loss: 0.3257 - n_outputs1_loss: 0.0664\nEpoch 00013: val_loss improved from 0.36563 to 0.30773, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 111ms/step - loss: 0.3922 - n_outputs0_loss: 0.3257 - n_outputs1_loss: 0.0664 - val_loss: 0.3077 - val_n_outputs0_loss: 0.2463 - val_n_outputs1_loss: 0.0614\nEpoch 14/100\n10/10 [==============================] - ETA: 0s - loss: 0.3662 - n_outputs0_loss: 0.3052 - n_outputs1_loss: 0.0610\nEpoch 00014: val_loss improved from 0.30773 to 0.27574, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 107ms/step - loss: 0.3662 - 
n_outputs0_loss: 0.3052 - n_outputs1_loss: 0.0610 - val_loss: 0.2757 - val_n_outputs0_loss: 0.2294 - val_n_outputs1_loss: 0.0463\nEpoch 15/100\n10/10 [==============================] - ETA: 0s - loss: 0.3233 - n_outputs0_loss: 0.2626 - n_outputs1_loss: 0.0607\nEpoch 00015: val_loss improved from 0.27574 to 0.24205, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 110ms/step - loss: 0.3233 - n_outputs0_loss: 0.2626 - n_outputs1_loss: 0.0607 - val_loss: 0.2421 - val_n_outputs0_loss: 0.1966 - val_n_outputs1_loss: 0.0454\nEpoch 16/100\n10/10 [==============================] - ETA: 0s - loss: 0.3078 - n_outputs0_loss: 0.2500 - n_outputs1_loss: 0.0577\nEpoch 00016: val_loss did not improve from 0.24205\n10/10 [==============================] - 1s 100ms/step - loss: 0.3078 - n_outputs0_loss: 0.2500 - n_outputs1_loss: 0.0577 - val_loss: 0.2473 - val_n_outputs0_loss: 0.2023 - val_n_outputs1_loss: 0.0450\nEpoch 17/100\n10/10 [==============================] - ETA: 0s - loss: 0.2959 - n_outputs0_loss: 0.2404 - n_outputs1_loss: 0.0555\nEpoch 00017: val_loss improved from 0.24205 to 0.22809, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 113ms/step - loss: 0.2959 - n_outputs0_loss: 0.2404 - n_outputs1_loss: 0.0555 - val_loss: 0.2281 - val_n_outputs0_loss: 0.1842 - val_n_outputs1_loss: 0.0438\nEpoch 18/100\n10/10 [==============================] - ETA: 0s - loss: 0.2820 - n_outputs0_loss: 0.2280 - n_outputs1_loss: 0.0540\nEpoch 00018: val_loss improved from 0.22809 to 0.21671, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 107ms/step - loss: 0.2820 - n_outputs0_loss: 0.2280 - n_outputs1_loss: 0.0540 - val_loss: 0.2167 - val_n_outputs0_loss: 0.1768 - val_n_outputs1_loss: 0.0400\nEpoch 19/100\n10/10 [==============================] - ETA: 0s - loss: 0.2568 - n_outputs0_loss: 0.2044 - n_outputs1_loss: 0.0524\nEpoch 00019: val_loss did not improve from 0.21671\n10/10 [==============================] - 1s 99ms/step - loss: 0.2568 - n_outputs0_loss: 0.2044 - n_outputs1_loss: 0.0524 - val_loss: 0.2190 - val_n_outputs0_loss: 0.1788 - val_n_outputs1_loss: 0.0402\nEpoch 20/100\n10/10 [==============================] - ETA: 0s - loss: 0.2621 - n_outputs0_loss: 0.2123 - n_outputs1_loss: 0.0499\nEpoch 00020: val_loss improved from 0.21671 to 0.21046, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 113ms/step - loss: 0.2621 - n_outputs0_loss: 0.2123 - n_outputs1_loss: 0.0499 - val_loss: 0.2105 - val_n_outputs0_loss: 0.1718 - val_n_outputs1_loss: 0.0386\nEpoch 21/100\n10/10 [==============================] - ETA: 0s - loss: 0.2521 - n_outputs0_loss: 0.2052 - n_outputs1_loss: 0.0469\nEpoch 00021: val_loss improved from 0.21046 to 0.20605, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 111ms/step - loss: 0.2521 - n_outputs0_loss: 0.2052 - n_outputs1_loss: 0.0469 - val_loss: 0.2060 - val_n_outputs0_loss: 0.1675 - val_n_outputs1_loss: 0.0385\nEpoch 22/100\n10/10 [==============================] - ETA: 0s - loss: 0.2261 - n_outputs0_loss: 0.1781 - n_outputs1_loss: 0.0480\nEpoch 00022: val_loss improved from 0.20605 to 0.20553, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 106ms/step - loss: 0.2261 - n_outputs0_loss: 0.1781 - n_outputs1_loss: 0.0480 - val_loss: 0.2055 - val_n_outputs0_loss: 0.1711 - val_n_outputs1_loss: 0.0344\nEpoch 23/100\n10/10 [==============================] - ETA: 0s - loss: 0.2222 - 
n_outputs0_loss: 0.1794 - n_outputs1_loss: 0.0429\nEpoch 00023: val_loss improved from 0.20553 to 0.20273, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 109ms/step - loss: 0.2222 - n_outputs0_loss: 0.1794 - n_outputs1_loss: 0.0429 - val_loss: 0.2027 - val_n_outputs0_loss: 0.1697 - val_n_outputs1_loss: 0.0331\nEpoch 24/100\n10/10 [==============================] - ETA: 0s - loss: 0.2126 - n_outputs0_loss: 0.1698 - n_outputs1_loss: 0.0428\nEpoch 00024: val_loss improved from 0.20273 to 0.19049, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 105ms/step - loss: 0.2126 - n_outputs0_loss: 0.1698 - n_outputs1_loss: 0.0428 - val_loss: 0.1905 - val_n_outputs0_loss: 0.1562 - val_n_outputs1_loss: 0.0343\nEpoch 25/100\n10/10 [==============================] - ETA: 0s - loss: 0.2062 - n_outputs0_loss: 0.1658 - n_outputs1_loss: 0.0404\nEpoch 00025: val_loss improved from 0.19049 to 0.18404, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 109ms/step - loss: 0.2062 - n_outputs0_loss: 0.1658 - n_outputs1_loss: 0.0404 - val_loss: 0.1840 - val_n_outputs0_loss: 0.1488 - val_n_outputs1_loss: 0.0352\nEpoch 26/100\n10/10 [==============================] - ETA: 0s - loss: 0.1928 - n_outputs0_loss: 0.1555 - n_outputs1_loss: 0.0372\nEpoch 00026: val_loss did not improve from 0.18404\n10/10 [==============================] - 1s 102ms/step - loss: 0.1928 - n_outputs0_loss: 0.1555 - n_outputs1_loss: 0.0372 - val_loss: 0.1907 - val_n_outputs0_loss: 0.1563 - val_n_outputs1_loss: 0.0344\nEpoch 27/100\n10/10 [==============================] - ETA: 0s - loss: 0.1834 - n_outputs0_loss: 0.1428 - n_outputs1_loss: 0.0406\nEpoch 00027: val_loss did not improve from 0.18404\n10/10 [==============================] - 1s 103ms/step - loss: 0.1834 - n_outputs0_loss: 0.1428 - n_outputs1_loss: 0.0406 - val_loss: 0.1922 - val_n_outputs0_loss: 0.1527 - val_n_outputs1_loss: 0.0396\nEpoch 28/100\n10/10 [==============================] - ETA: 0s - loss: 0.1668 - n_outputs0_loss: 0.1282 - n_outputs1_loss: 0.0386\nEpoch 00028: val_loss improved from 0.18404 to 0.17462, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 113ms/step - loss: 0.1668 - n_outputs0_loss: 0.1282 - n_outputs1_loss: 0.0386 - val_loss: 0.1746 - val_n_outputs0_loss: 0.1436 - val_n_outputs1_loss: 0.0311\nEpoch 29/100\n10/10 [==============================] - ETA: 0s - loss: 0.1654 - n_outputs0_loss: 0.1282 - n_outputs1_loss: 0.0372\nEpoch 00029: val_loss improved from 0.17462 to 0.17365, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 107ms/step - loss: 0.1654 - n_outputs0_loss: 0.1282 - n_outputs1_loss: 0.0372 - val_loss: 0.1736 - val_n_outputs0_loss: 0.1432 - val_n_outputs1_loss: 0.0305\nEpoch 30/100\n10/10 [==============================] - ETA: 0s - loss: 0.1615 - n_outputs0_loss: 0.1250 - n_outputs1_loss: 0.0364\nEpoch 00030: val_loss did not improve from 0.17365\n10/10 [==============================] - 1s 96ms/step - loss: 0.1615 - n_outputs0_loss: 0.1250 - n_outputs1_loss: 0.0364 - val_loss: 0.1799 - val_n_outputs0_loss: 0.1493 - val_n_outputs1_loss: 0.0306\nEpoch 31/100\n10/10 [==============================] - ETA: 0s - loss: 0.1495 - n_outputs0_loss: 0.1162 - n_outputs1_loss: 0.0332\nEpoch 00031: val_loss improved from 0.17365 to 0.17255, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 112ms/step - loss: 0.1495 - n_outputs0_loss: 0.1162 - n_outputs1_loss: 
0.0332 - val_loss: 0.1726 - val_n_outputs0_loss: 0.1383 - val_n_outputs1_loss: 0.0342\nEpoch 32/100\n10/10 [==============================] - ETA: 0s - loss: 0.1453 - n_outputs0_loss: 0.1121 - n_outputs1_loss: 0.0333\nEpoch 00032: val_loss did not improve from 0.17255\n10/10 [==============================] - 1s 104ms/step - loss: 0.1453 - n_outputs0_loss: 0.1121 - n_outputs1_loss: 0.0333 - val_loss: 0.1764 - val_n_outputs0_loss: 0.1456 - val_n_outputs1_loss: 0.0308\nEpoch 33/100\n10/10 [==============================] - ETA: 0s - loss: 0.1346 - n_outputs0_loss: 0.1043 - n_outputs1_loss: 0.0303\nEpoch 00033: val_loss improved from 0.17255 to 0.17092, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 115ms/step - loss: 0.1346 - n_outputs0_loss: 0.1043 - n_outputs1_loss: 0.0303 - val_loss: 0.1709 - val_n_outputs0_loss: 0.1395 - val_n_outputs1_loss: 0.0315\nEpoch 34/100\n10/10 [==============================] - ETA: 0s - loss: 0.1293 - n_outputs0_loss: 0.0991 - n_outputs1_loss: 0.0302\nEpoch 00034: val_loss improved from 0.17092 to 0.16704, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 109ms/step - loss: 0.1293 - n_outputs0_loss: 0.0991 - n_outputs1_loss: 0.0302 - val_loss: 0.1670 - val_n_outputs0_loss: 0.1342 - val_n_outputs1_loss: 0.0329\nEpoch 35/100\n10/10 [==============================] - ETA: 0s - loss: 0.1196 - n_outputs0_loss: 0.0890 - n_outputs1_loss: 0.0306\nEpoch 00035: val_loss improved from 0.16704 to 0.15917, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 110ms/step - loss: 0.1196 - n_outputs0_loss: 0.0890 - n_outputs1_loss: 0.0306 - val_loss: 0.1592 - val_n_outputs0_loss: 0.1280 - val_n_outputs1_loss: 0.0311\nEpoch 36/100\n10/10 [==============================] - ETA: 0s - loss: 0.1086 - n_outputs0_loss: 0.0805 - n_outputs1_loss: 0.0281\nEpoch 00036: val_loss improved from 0.15917 to 0.15774, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 114ms/step - loss: 0.1086 - n_outputs0_loss: 0.0805 - n_outputs1_loss: 0.0281 - val_loss: 0.1577 - val_n_outputs0_loss: 0.1264 - val_n_outputs1_loss: 0.0313\nEpoch 37/100\n10/10 [==============================] - ETA: 0s - loss: 0.1032 - n_outputs0_loss: 0.0753 - n_outputs1_loss: 0.0279\nEpoch 00037: val_loss did not improve from 0.15774\n10/10 [==============================] - 1s 99ms/step - loss: 0.1032 - n_outputs0_loss: 0.0753 - n_outputs1_loss: 0.0279 - val_loss: 0.1598 - val_n_outputs0_loss: 0.1281 - val_n_outputs1_loss: 0.0317\nEpoch 38/100\n10/10 [==============================] - ETA: 0s - loss: 0.1050 - n_outputs0_loss: 0.0783 - n_outputs1_loss: 0.0266\nEpoch 00038: val_loss did not improve from 0.15774\n10/10 [==============================] - 1s 105ms/step - loss: 0.1050 - n_outputs0_loss: 0.0783 - n_outputs1_loss: 0.0266 - val_loss: 0.1586 - val_n_outputs0_loss: 0.1269 - val_n_outputs1_loss: 0.0317\nEpoch 39/100\n10/10 [==============================] - ETA: 0s - loss: 0.0983 - n_outputs0_loss: 0.0722 - n_outputs1_loss: 0.0261\nEpoch 00039: val_loss improved from 0.15774 to 0.15441, saving model to ./models/msp-car-1.h5\n10/10 [==============================] - 1s 111ms/step - loss: 0.0983 - n_outputs0_loss: 0.0722 - n_outputs1_loss: 0.0261 - val_loss: 0.1544 - val_n_outputs0_loss: 0.1243 - val_n_outputs1_loss: 0.0301\nEpoch 40/100\n10/10 [==============================] - ETA: 0s - loss: 0.0967 - n_outputs0_loss: 0.0703 - n_outputs1_loss: 0.0265\nEpoch 00040: val_loss did not improve from 
0.15441\n10/10 [==============================] - 1s 103ms/step - loss: 0.0967 - n_outputs0_loss: 0.0703 - n_outputs1_loss: 0.0265 - val_loss: 0.1588 - val_n_outputs0_loss: 0.1275 - val_n_outputs1_loss: 0.0313\nEpoch 41/100\n10/10 [==============================] - ETA: 0s - loss: 0.0989 - n_outputs0_loss: 0.0736 - n_outputs1_loss: 0.0253\nEpoch 00041: val_loss did not improve from 0.15441\n10/10 [==============================] - 1s 104ms/step - loss: 0.0989 - n_outputs0_loss: 0.0736 - n_outputs1_loss: 0.0253 - val_loss: 0.1580 - val_n_outputs0_loss: 0.1271 - val_n_outputs1_loss: 0.0308\nEpoch 42/100\n10/10 [==============================] - ETA: 0s - loss: 0.1010 - n_outputs0_loss: 0.0758 - n_outputs1_loss: 0.0253\nEpoch 00042: val_loss did not improve from 0.15441\n10/10 [==============================] - 1s 107ms/step - loss: 0.1010 - n_outputs0_loss: 0.0758 - n_outputs1_loss: 0.0253 - val_loss: 0.1614 - val_n_outputs0_loss: 0.1315 - val_n_outputs1_loss: 0.0299\nEpoch 43/100\n10/10 [==============================] - ETA: 0s - loss: 0.0923 - n_outputs0_loss: 0.0680 - n_outputs1_loss: 0.0243\nEpoch 00043: val_loss did not improve from 0.15441\n10/10 [==============================] - 1s 101ms/step - loss: 0.0923 - n_outputs0_loss: 0.0680 - n_outputs1_loss: 0.0243 - val_loss: 0.1587 - val_n_outputs0_loss: 0.1298 - val_n_outputs1_loss: 0.0288\nEpoch 44/100\n10/10 [==============================] - ETA: 0s - loss: 0.0870 - n_outputs0_loss: 0.0629 - n_outputs1_loss: 0.0242\nEpoch 00044: val_loss did not improve from 0.15441\n10/10 [==============================] - 1s 105ms/step - loss: 0.0870 - n_outputs0_loss: 0.0629 - n_outputs1_loss: 0.0242 - val_loss: 0.1601 - val_n_outputs0_loss: 0.1304 - val_n_outputs1_loss: 0.0296\nWARNING: CPU random generator seem to be failing, disable hardware random number generation\nWARNING: RDRND generated: 0xffffffff 0xffffffff 0xffffffff 0xffffffff\n\nreal 1m10.563s\nuser 1m11.485s\nsys 0m39.110s\n
"},{"location":"training-logs/msp-car-2/","title":"Training run for Minneapolis STEM Partners","text":"Car #2 had 15045 images
wc -l ~/mycar/data/msp-car-2/*.catalog 15045
"},{"location":"training-logs/msp-car-2/#ls-1-mycardatamsp-car-2images-wc-l","title":"ls -1 ~/mycar/data/msp-car-2/images | wc -l","text":"15045
(donkey) arl@arl1:~/mycar$ donkey train --tub=./data/msp-car-2 --model=./models/msp-car-2.h5\n________ ______ _________ \n___ __ \\_______________ /___________ __ __ ____/_____ ________\n__ / / / __ \\_ __ \\_ //_/ _ \\_ / / / _ / _ __ `/_ ___/\n_ /_/ // /_/ / / / / ,< / __/ /_/ / / /___ / /_/ /_ / \n/_____/ \\____//_/ /_//_/|_| \\___/_\\__, / \\____/ \\__,_/ /_/ \n /____/ \n\nusing donkey v4.2.1 ...\nloading config file: ./config.py\nloading personal config over-rides from myconfig.py\n\"get_model_by_type\" model Type is: linear\nCreated KerasLinear\n2021-07-26 20:22:54.320076: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1\n2021-07-26 20:22:54.338339: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 20:22:54.338783: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: \npciBusID: 0000:09:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5\ncoreClock: 1.635GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s\n2021-07-26 20:22:54.338925: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n2021-07-26 20:22:54.339823: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n2021-07-26 20:22:54.340834: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n2021-07-26 20:22:54.340981: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n2021-07-26 20:22:54.341775: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n2021-07-26 20:22:54.342170: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\n2021-07-26 20:22:54.343898: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n2021-07-26 20:22:54.344043: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 20:22:54.344546: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 20:22:54.344933: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0\n2021-07-26 20:22:54.345163: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA\n2021-07-26 20:22:54.349277: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 3592950000 Hz\n2021-07-26 20:22:54.349572: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5575e341a9f0 initialized for platform Host (this does not guarantee that XLA will be used). 
Devices:\n2021-07-26 20:22:54.349585: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version\n2021-07-26 20:22:54.349717: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 20:22:54.350124: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: \npciBusID: 0000:09:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5\ncoreClock: 1.635GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s\n2021-07-26 20:22:54.350160: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n2021-07-26 20:22:54.350171: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n2021-07-26 20:22:54.350180: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n2021-07-26 20:22:54.350191: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n2021-07-26 20:22:54.350200: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n2021-07-26 20:22:54.350210: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\n2021-07-26 20:22:54.350220: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n2021-07-26 20:22:54.350282: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 20:22:54.350723: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 20:22:54.351106: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0\n2021-07-26 20:22:54.351127: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n2021-07-26 20:22:54.423106: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:\n2021-07-26 20:22:54.423133: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108] 0 \n2021-07-26 20:22:54.423138: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0: N \n2021-07-26 20:22:54.423354: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 20:22:54.423819: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 20:22:54.424248: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-07-26 20:22:54.424632: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9890 MB memory) -> 
physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:09:00.0, compute capability: 7.5)\n2021-07-26 20:22:54.425999: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5575e52b18b0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:\n2021-07-26 20:22:54.426009: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): NVIDIA GeForce RTX 2080 Ti, Compute Capability 7.5\nModel: \"model\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\nimg_in (InputLayer) [(None, 224, 224, 3) 0 \n__________________________________________________________________________________________________\nconv2d_1 (Conv2D) (None, 110, 110, 24) 1824 img_in[0][0] \n__________________________________________________________________________________________________\ndropout (Dropout) (None, 110, 110, 24) 0 conv2d_1[0][0] \n__________________________________________________________________________________________________\nconv2d_2 (Conv2D) (None, 53, 53, 32) 19232 dropout[0][0] \n__________________________________________________________________________________________________\ndropout_1 (Dropout) (None, 53, 53, 32) 0 conv2d_2[0][0] \n__________________________________________________________________________________________________\nconv2d_3 (Conv2D) (None, 25, 25, 64) 51264 dropout_1[0][0] \n__________________________________________________________________________________________________\ndropout_2 (Dropout) (None, 25, 25, 64) 0 conv2d_3[0][0] \n__________________________________________________________________________________________________\nconv2d_4 (Conv2D) (None, 23, 23, 64) 36928 dropout_2[0][0] \n__________________________________________________________________________________________________\ndropout_3 (Dropout) (None, 23, 23, 64) 0 conv2d_4[0][0] \n__________________________________________________________________________________________________\nconv2d_5 (Conv2D) (None, 21, 21, 64) 36928 dropout_3[0][0] \n__________________________________________________________________________________________________\ndropout_4 (Dropout) (None, 21, 21, 64) 0 conv2d_5[0][0] \n__________________________________________________________________________________________________\nflattened (Flatten) (None, 28224) 0 dropout_4[0][0] \n__________________________________________________________________________________________________\ndense_1 (Dense) (None, 100) 2822500 flattened[0][0] \n__________________________________________________________________________________________________\ndropout_5 (Dropout) (None, 100) 0 dense_1[0][0] \n__________________________________________________________________________________________________\ndense_2 (Dense) (None, 50) 5050 dropout_5[0][0] \n__________________________________________________________________________________________________\ndropout_6 (Dropout) (None, 50) 0 dense_2[0][0] \n__________________________________________________________________________________________________\nn_outputs0 (Dense) (None, 1) 51 dropout_6[0][0] \n__________________________________________________________________________________________________\nn_outputs1 (Dense) (None, 1) 51 dropout_6[0][0] \n==================================================================================================\nTotal params: 
2,973,828\nTrainable params: 2,973,828\nNon-trainable params: 0\n__________________________________________________________________________________________________\nNone\nUsing catalog /home/arl/mycar/data/msp-car-2/catalog_22.catalog\n\nRecords # Training 11696\nRecords # Validation 2924\nEpoch 1/100\n2021-07-26 20:22:55.471623: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n2021-07-26 20:22:55.802565: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n2021-07-26 20:22:56.515413: W tensorflow/stream_executor/gpu/asm_compiler.cc:116] *** WARNING *** You are using ptxas 9.1.108, which is older than 9.2.88. ptxas 9.x before 9.2.88 is known to miscompile XLA code, leading to incorrect results or invalid-address errors.\n\nYou do not need to update to CUDA 9.2.88; cherry-picking the ptxas binary is sufficient.\n2021-07-26 20:22:56.559204: W tensorflow/stream_executor/gpu/redzone_allocator.cc:314] Internal: ptxas exited with non-zero error code 65280, output: ptxas fatal : Value 'sm_75' is not defined for option 'gpu-name'\n\nRelying on driver to perform ptx compilation. \nModify $PATH to customize ptxas location.\nThis message will be only logged once.\n92/92 [==============================] - ETA: 0s - loss: 0.6304 - n_outputs0_loss: 0.3238 - n_outputs1_loss: 0.3066 \nEpoch 00001: val_loss improved from inf to 0.59133, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 29s 310ms/step - loss: 0.6304 - n_outputs0_loss: 0.3238 - n_outputs1_loss: 0.3066 - val_loss: 0.5913 - val_n_outputs0_loss: 0.3121 - val_n_outputs1_loss: 0.2793\nEpoch 2/100\n92/92 [==============================] - ETA: 0s - loss: 0.5150 - n_outputs0_loss: 0.2730 - n_outputs1_loss: 0.2419\nEpoch 00002: val_loss improved from 0.59133 to 0.39368, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 11s 117ms/step - loss: 0.5150 - n_outputs0_loss: 0.2730 - n_outputs1_loss: 0.2419 - val_loss: 0.3937 - val_n_outputs0_loss: 0.2108 - val_n_outputs1_loss: 0.1828\nEpoch 3/100\n92/92 [==============================] - ETA: 0s - loss: 0.3885 - n_outputs0_loss: 0.2088 - n_outputs1_loss: 0.1797\nEpoch 00003: val_loss improved from 0.39368 to 0.34087, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 11s 117ms/step - loss: 0.3885 - n_outputs0_loss: 0.2088 - n_outputs1_loss: 0.1797 - val_loss: 0.3409 - val_n_outputs0_loss: 0.1923 - val_n_outputs1_loss: 0.1486\nEpoch 4/100\n92/92 [==============================] - ETA: 0s - loss: 0.3449 - n_outputs0_loss: 0.1870 - n_outputs1_loss: 0.1579\nEpoch 00004: val_loss improved from 0.34087 to 0.30588, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 11s 119ms/step - loss: 0.3449 - n_outputs0_loss: 0.1870 - n_outputs1_loss: 0.1579 - val_loss: 0.3059 - val_n_outputs0_loss: 0.1771 - val_n_outputs1_loss: 0.1288\nEpoch 5/100\n92/92 [==============================] - ETA: 0s - loss: 0.3161 - n_outputs0_loss: 0.1763 - n_outputs1_loss: 0.1397\nEpoch 00005: val_loss improved from 0.30588 to 0.28650, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 113ms/step - loss: 0.3161 - n_outputs0_loss: 0.1763 - n_outputs1_loss: 0.1397 - val_loss: 0.2865 - val_n_outputs0_loss: 0.1722 - val_n_outputs1_loss: 0.1143\nEpoch 6/100\n92/92 [==============================] - ETA: 0s - loss: 0.2876 - n_outputs0_loss: 0.1633 - n_outputs1_loss: 
0.1243\nEpoch 00006: val_loss improved from 0.28650 to 0.26754, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 110ms/step - loss: 0.2876 - n_outputs0_loss: 0.1633 - n_outputs1_loss: 0.1243 - val_loss: 0.2675 - val_n_outputs0_loss: 0.1623 - val_n_outputs1_loss: 0.1053\nEpoch 7/100\n92/92 [==============================] - ETA: 0s - loss: 0.2612 - n_outputs0_loss: 0.1508 - n_outputs1_loss: 0.1103\nEpoch 00007: val_loss improved from 0.26754 to 0.25034, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 111ms/step - loss: 0.2612 - n_outputs0_loss: 0.1508 - n_outputs1_loss: 0.1103 - val_loss: 0.2503 - val_n_outputs0_loss: 0.1551 - val_n_outputs1_loss: 0.0952\nEpoch 8/100\n92/92 [==============================] - ETA: 0s - loss: 0.2476 - n_outputs0_loss: 0.1435 - n_outputs1_loss: 0.1041\nEpoch 00008: val_loss improved from 0.25034 to 0.24291, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 109ms/step - loss: 0.2476 - n_outputs0_loss: 0.1435 - n_outputs1_loss: 0.1041 - val_loss: 0.2429 - val_n_outputs0_loss: 0.1524 - val_n_outputs1_loss: 0.0905\nEpoch 9/100\n92/92 [==============================] - ETA: 0s - loss: 0.2283 - n_outputs0_loss: 0.1323 - n_outputs1_loss: 0.0960\nEpoch 00009: val_loss improved from 0.24291 to 0.22718, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 110ms/step - loss: 0.2283 - n_outputs0_loss: 0.1323 - n_outputs1_loss: 0.0960 - val_loss: 0.2272 - val_n_outputs0_loss: 0.1450 - val_n_outputs1_loss: 0.0821\nEpoch 10/100\n92/92 [==============================] - ETA: 0s - loss: 0.2183 - n_outputs0_loss: 0.1267 - n_outputs1_loss: 0.0916\nEpoch 00010: val_loss did not improve from 0.22718\n92/92 [==============================] - 10s 109ms/step - loss: 0.2183 - n_outputs0_loss: 0.1267 - n_outputs1_loss: 0.0916 - val_loss: 0.2305 - val_n_outputs0_loss: 0.1471 - val_n_outputs1_loss: 0.0834\nEpoch 11/100\n92/92 [==============================] - ETA: 0s - loss: 0.2022 - n_outputs0_loss: 0.1187 - n_outputs1_loss: 0.0835\nEpoch 00011: val_loss improved from 0.22718 to 0.21581, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 110ms/step - loss: 0.2022 - n_outputs0_loss: 0.1187 - n_outputs1_loss: 0.0835 - val_loss: 0.2158 - val_n_outputs0_loss: 0.1375 - val_n_outputs1_loss: 0.0783\nEpoch 12/100\n92/92 [==============================] - ETA: 0s - loss: 0.1921 - n_outputs0_loss: 0.1085 - n_outputs1_loss: 0.0836\nEpoch 00012: val_loss did not improve from 0.21581\n92/92 [==============================] - 10s 110ms/step - loss: 0.1921 - n_outputs0_loss: 0.1085 - n_outputs1_loss: 0.0836 - val_loss: 0.2185 - val_n_outputs0_loss: 0.1382 - val_n_outputs1_loss: 0.0802\nEpoch 13/100\n92/92 [==============================] - ETA: 0s - loss: 0.1826 - n_outputs0_loss: 0.1056 - n_outputs1_loss: 0.0770\nEpoch 00013: val_loss did not improve from 0.21581\n92/92 [==============================] - 10s 110ms/step - loss: 0.1826 - n_outputs0_loss: 0.1056 - n_outputs1_loss: 0.0770 - val_loss: 0.2198 - val_n_outputs0_loss: 0.1394 - val_n_outputs1_loss: 0.0804\nEpoch 14/100\n92/92 [==============================] - ETA: 0s - loss: 0.1771 - n_outputs0_loss: 0.1009 - n_outputs1_loss: 0.0762\nEpoch 00014: val_loss did not improve from 0.21581\n92/92 [==============================] - 10s 111ms/step - loss: 0.1771 - n_outputs0_loss: 0.1009 - n_outputs1_loss: 0.0762 - val_loss: 0.2167 - val_n_outputs0_loss: 0.1389 - val_n_outputs1_loss: 
0.0778\nEpoch 15/100\n92/92 [==============================] - ETA: 0s - loss: 0.1676 - n_outputs0_loss: 0.0959 - n_outputs1_loss: 0.0718\nEpoch 00015: val_loss improved from 0.21581 to 0.20899, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 112ms/step - loss: 0.1676 - n_outputs0_loss: 0.0959 - n_outputs1_loss: 0.0718 - val_loss: 0.2090 - val_n_outputs0_loss: 0.1345 - val_n_outputs1_loss: 0.0745\nEpoch 16/100\n92/92 [==============================] - ETA: 0s - loss: 0.1608 - n_outputs0_loss: 0.0910 - n_outputs1_loss: 0.0698\nEpoch 00016: val_loss did not improve from 0.20899\n92/92 [==============================] - 10s 110ms/step - loss: 0.1608 - n_outputs0_loss: 0.0910 - n_outputs1_loss: 0.0698 - val_loss: 0.2097 - val_n_outputs0_loss: 0.1348 - val_n_outputs1_loss: 0.0748\nEpoch 17/100\n92/92 [==============================] - ETA: 0s - loss: 0.1534 - n_outputs0_loss: 0.0870 - n_outputs1_loss: 0.0664\nEpoch 00017: val_loss improved from 0.20899 to 0.20324, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 112ms/step - loss: 0.1534 - n_outputs0_loss: 0.0870 - n_outputs1_loss: 0.0664 - val_loss: 0.2032 - val_n_outputs0_loss: 0.1329 - val_n_outputs1_loss: 0.0703\nEpoch 18/100\n92/92 [==============================] - ETA: 0s - loss: 0.1490 - n_outputs0_loss: 0.0846 - n_outputs1_loss: 0.0644\nEpoch 00018: val_loss improved from 0.20324 to 0.19965, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 111ms/step - loss: 0.1490 - n_outputs0_loss: 0.0846 - n_outputs1_loss: 0.0644 - val_loss: 0.1997 - val_n_outputs0_loss: 0.1309 - val_n_outputs1_loss: 0.0688\nEpoch 19/100\n92/92 [==============================] - ETA: 0s - loss: 0.1452 - n_outputs0_loss: 0.0828 - n_outputs1_loss: 0.0624\nEpoch 00019: val_loss improved from 0.19965 to 0.19877, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 111ms/step - loss: 0.1452 - n_outputs0_loss: 0.0828 - n_outputs1_loss: 0.0624 - val_loss: 0.1988 - val_n_outputs0_loss: 0.1294 - val_n_outputs1_loss: 0.0694\nEpoch 20/100\n92/92 [==============================] - ETA: 0s - loss: 0.1353 - n_outputs0_loss: 0.0747 - n_outputs1_loss: 0.0606\nEpoch 00020: val_loss did not improve from 0.19877\n92/92 [==============================] - 10s 110ms/step - loss: 0.1353 - n_outputs0_loss: 0.0747 - n_outputs1_loss: 0.0606 - val_loss: 0.2004 - val_n_outputs0_loss: 0.1312 - val_n_outputs1_loss: 0.0692\nEpoch 21/100\n92/92 [==============================] - ETA: 0s - loss: 0.1319 - n_outputs0_loss: 0.0731 - n_outputs1_loss: 0.0588\nEpoch 00021: val_loss improved from 0.19877 to 0.19564, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 111ms/step - loss: 0.1319 - n_outputs0_loss: 0.0731 - n_outputs1_loss: 0.0588 - val_loss: 0.1956 - val_n_outputs0_loss: 0.1252 - val_n_outputs1_loss: 0.0704\nEpoch 22/100\n92/92 [==============================] - ETA: 0s - loss: 0.1299 - n_outputs0_loss: 0.0713 - n_outputs1_loss: 0.0585\nEpoch 00022: val_loss improved from 0.19564 to 0.19422, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 111ms/step - loss: 0.1299 - n_outputs0_loss: 0.0713 - n_outputs1_loss: 0.0585 - val_loss: 0.1942 - val_n_outputs0_loss: 0.1259 - val_n_outputs1_loss: 0.0683\nEpoch 23/100\n92/92 [==============================] - ETA: 0s - loss: 0.1231 - n_outputs0_loss: 0.0684 - n_outputs1_loss: 0.0548\nEpoch 00023: val_loss improved from 0.19422 to 0.19270, saving model 
to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 111ms/step - loss: 0.1231 - n_outputs0_loss: 0.0684 - n_outputs1_loss: 0.0548 - val_loss: 0.1927 - val_n_outputs0_loss: 0.1245 - val_n_outputs1_loss: 0.0682\nEpoch 24/100\n92/92 [==============================] - ETA: 0s - loss: 0.1239 - n_outputs0_loss: 0.0673 - n_outputs1_loss: 0.0566\nEpoch 00024: val_loss did not improve from 0.19270\n92/92 [==============================] - 10s 110ms/step - loss: 0.1239 - n_outputs0_loss: 0.0673 - n_outputs1_loss: 0.0566 - val_loss: 0.1969 - val_n_outputs0_loss: 0.1283 - val_n_outputs1_loss: 0.0686\nEpoch 25/100\n92/92 [==============================] - ETA: 0s - loss: 0.1200 - n_outputs0_loss: 0.0650 - n_outputs1_loss: 0.0550\nEpoch 00025: val_loss did not improve from 0.19270\n92/92 [==============================] - 10s 111ms/step - loss: 0.1200 - n_outputs0_loss: 0.0650 - n_outputs1_loss: 0.0550 - val_loss: 0.1990 - val_n_outputs0_loss: 0.1284 - val_n_outputs1_loss: 0.0706\nEpoch 26/100\n92/92 [==============================] - ETA: 0s - loss: 0.1171 - n_outputs0_loss: 0.0636 - n_outputs1_loss: 0.0535\nEpoch 00026: val_loss did not improve from 0.19270\n92/92 [==============================] - 10s 111ms/step - loss: 0.1171 - n_outputs0_loss: 0.0636 - n_outputs1_loss: 0.0535 - val_loss: 0.1929 - val_n_outputs0_loss: 0.1250 - val_n_outputs1_loss: 0.0678\nEpoch 27/100\n92/92 [==============================] - ETA: 0s - loss: 0.1167 - n_outputs0_loss: 0.0638 - n_outputs1_loss: 0.0529\nEpoch 00027: val_loss did not improve from 0.19270\n92/92 [==============================] - 10s 112ms/step - loss: 0.1167 - n_outputs0_loss: 0.0638 - n_outputs1_loss: 0.0529 - val_loss: 0.1937 - val_n_outputs0_loss: 0.1269 - val_n_outputs1_loss: 0.0668\nEpoch 28/100\n92/92 [==============================] - ETA: 0s - loss: 0.1123 - n_outputs0_loss: 0.0610 - n_outputs1_loss: 0.0513\nEpoch 00028: val_loss improved from 0.19270 to 0.19161, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 112ms/step - loss: 0.1123 - n_outputs0_loss: 0.0610 - n_outputs1_loss: 0.0513 - val_loss: 0.1916 - val_n_outputs0_loss: 0.1230 - val_n_outputs1_loss: 0.0686\nEpoch 29/100\n92/92 [==============================] - ETA: 0s - loss: 0.1086 - n_outputs0_loss: 0.0584 - n_outputs1_loss: 0.0501\nEpoch 00029: val_loss improved from 0.19161 to 0.18655, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 110ms/step - loss: 0.1086 - n_outputs0_loss: 0.0584 - n_outputs1_loss: 0.0501 - val_loss: 0.1865 - val_n_outputs0_loss: 0.1216 - val_n_outputs1_loss: 0.0650\nEpoch 30/100\n92/92 [==============================] - ETA: 0s - loss: 0.1093 - n_outputs0_loss: 0.0593 - n_outputs1_loss: 0.0500\nEpoch 00030: val_loss did not improve from 0.18655\n92/92 [==============================] - 10s 109ms/step - loss: 0.1093 - n_outputs0_loss: 0.0593 - n_outputs1_loss: 0.0500 - val_loss: 0.1936 - val_n_outputs0_loss: 0.1240 - val_n_outputs1_loss: 0.0696\nEpoch 31/100\n92/92 [==============================] - ETA: 0s - loss: 0.1077 - n_outputs0_loss: 0.0578 - n_outputs1_loss: 0.0499\nEpoch 00031: val_loss did not improve from 0.18655\n92/92 [==============================] - 10s 110ms/step - loss: 0.1077 - n_outputs0_loss: 0.0578 - n_outputs1_loss: 0.0499 - val_loss: 0.1889 - val_n_outputs0_loss: 0.1222 - val_n_outputs1_loss: 0.0667\nEpoch 32/100\n92/92 [==============================] - ETA: 0s - loss: 0.1026 - n_outputs0_loss: 0.0551 - n_outputs1_loss: 0.0475\nEpoch 00032: 
val_loss improved from 0.18655 to 0.18343, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 111ms/step - loss: 0.1026 - n_outputs0_loss: 0.0551 - n_outputs1_loss: 0.0475 - val_loss: 0.1834 - val_n_outputs0_loss: 0.1206 - val_n_outputs1_loss: 0.0629\nEpoch 33/100\n92/92 [==============================] - ETA: 0s - loss: 0.1022 - n_outputs0_loss: 0.0545 - n_outputs1_loss: 0.0477\nEpoch 00033: val_loss did not improve from 0.18343\n92/92 [==============================] - 10s 110ms/step - loss: 0.1022 - n_outputs0_loss: 0.0545 - n_outputs1_loss: 0.0477 - val_loss: 0.1843 - val_n_outputs0_loss: 0.1191 - val_n_outputs1_loss: 0.0652\nEpoch 34/100\n92/92 [==============================] - ETA: 0s - loss: 0.0995 - n_outputs0_loss: 0.0529 - n_outputs1_loss: 0.0466\nEpoch 00034: val_loss improved from 0.18343 to 0.18117, saving model to ./models/msp-car-2.h5\n92/92 [==============================] - 10s 110ms/step - loss: 0.0995 - n_outputs0_loss: 0.0529 - n_outputs1_loss: 0.0466 - val_loss: 0.1812 - val_n_outputs0_loss: 0.1166 - val_n_outputs1_loss: 0.0646\nEpoch 35/100\n92/92 [==============================] - ETA: 0s - loss: 0.0989 - n_outputs0_loss: 0.0526 - n_outputs1_loss: 0.0463\nEpoch 00035: val_loss did not improve from 0.18117\n92/92 [==============================] - 10s 110ms/step - loss: 0.0989 - n_outputs0_loss: 0.0526 - n_outputs1_loss: 0.0463 - val_loss: 0.1835 - val_n_outputs0_loss: 0.1177 - val_n_outputs1_loss: 0.0657\nEpoch 36/100\n92/92 [==============================] - ETA: 0s - loss: 0.0972 - n_outputs0_loss: 0.0514 - n_outputs1_loss: 0.0458\nEpoch 00036: val_loss did not improve from 0.18117\n92/92 [==============================] - 10s 111ms/step - loss: 0.0972 - n_outputs0_loss: 0.0514 - n_outputs1_loss: 0.0458 - val_loss: 0.1838 - val_n_outputs0_loss: 0.1198 - val_n_outputs1_loss: 0.0641\nEpoch 37/100\n92/92 [==============================] - ETA: 0s - loss: 0.0959 - n_outputs0_loss: 0.0509 - n_outputs1_loss: 0.0450\nEpoch 00037: val_loss did not improve from 0.18117\n92/92 [==============================] - 10s 109ms/step - loss: 0.0959 - n_outputs0_loss: 0.0509 - n_outputs1_loss: 0.0450 - val_loss: 0.1830 - val_n_outputs0_loss: 0.1191 - val_n_outputs1_loss: 0.0639\nEpoch 38/100\n92/92 [==============================] - ETA: 0s - loss: 0.0934 - n_outputs0_loss: 0.0496 - n_outputs1_loss: 0.0438\nEpoch 00038: val_loss did not improve from 0.18117\n92/92 [==============================] - 10s 110ms/step - loss: 0.0934 - n_outputs0_loss: 0.0496 - n_outputs1_loss: 0.0438 - val_loss: 0.1845 - val_n_outputs0_loss: 0.1185 - val_n_outputs1_loss: 0.0660\nEpoch 39/100\n92/92 [==============================] - ETA: 0s - loss: 0.0923 - n_outputs0_loss: 0.0477 - n_outputs1_loss: 0.0446\nEpoch 00039: val_loss did not improve from 0.18117\n92/92 [==============================] - 10s 110ms/step - loss: 0.0923 - n_outputs0_loss: 0.0477 - n_outputs1_loss: 0.0446 - val_loss: 0.1818 - val_n_outputs0_loss: 0.1186 - val_n_outputs1_loss: 0.0632\nWARNING: CPU random generator seem to be failing, disable hardware random number generation\nWARNING: RDRND generated: 0xffffffff 0xffffffff 0xffffffff 0xffffffff\n(donkey) arl@arl1:~/mycar$\n
"},{"location":"training-logs/msp-car-2/#checking-the-models","title":"Checking the models","text":"ls -l models/*\n
returns
ls -l models/*\n-rw-r--r-- 1 arl arl 32317 Jul 26 20:30 models/database.json\n-rw-r--r-- 1 arl arl 35773936 Jul 26 20:17 models/msp-car-1-gpu.h5\n-rw-r--r-- 1 arl arl 27506 Jul 26 20:17 models/msp-car-1-gpu.png\n-rw-r--r-- 1 arl arl 23659 Jul 26 19:57 models/msp-car-1.png\n-rw-r--r-- 1 arl arl 35773936 Jul 26 20:29 models/msp-car-2.h5\n-rw-r--r-- 1 arl arl 25670 Jul 26 20:30 models/msp-car-2.png\n-rw-r--r-- 1 arl arl 22616 Feb 2 2020 models/mypilot.h5_loss_acc_0.040245.png\n-rw-r--r-- 1 arl arl 26687 Feb 2 2020 models/mypilot.h5_loss_acc_0.042222.png\n-rw-r--r-- 1 arl arl 11939744 Feb 2 2020 models/ref-model.h5\n
"},{"location":"umn-curriculum/attribution/","title":"Attribution","text":"University of Minnesota - Twin Cities
Department of Electrical and Computer Engineering
Senior Design - Spring 2024
Authors:
Before we start, let's clarify what we're doing. We've created a program (or \"model\") that can control a car to drive by itself, kind of like a video game character that knows how to navigate obstacles without you touching the controls. Now, we want to take this program from our main computer (let's call it the \"Host PC\") and put it onto the smaller computer inside the car (we'll use \"Raspberry Pi\" for this). The Raspberry Pi is like the car's brain, and we're going to teach it how to drive on autopilot.
"},{"location":"umn-curriculum/autopilot/#step-1-connect-both-computers-to-the-same-network","title":"Step 1: Connect Both Computers to the Same Network","text":"Why? - For the Host PC to talk to the Raspberry Pi, they need to be on the same Wi-Fi network, like how two phones need to be on the same network to share photos.
How? - Follow the instructions in the \"Networking\" section to connect both devices to the same Wi-Fi. This is similar to how you connect your phone or laptop to your home Wi-Fi.
How? - Open the \"Terminal\" on the Host PC. The Terminal is a place where you can type commands for your computer to follow.
What to Type: - Enter ping <hostname>.local
but replace <hostname>
with your Raspberry Pi's hostname. This is like asking, \"Hey, are you there?\" to the Raspberry Pi.
Expected Outcome: - If everything is set up right, your computer will reply back with messages that show it's connected to the Raspberry Pi.
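For example, with the donkeypi hostname used later in this guide, the command would be:
ping donkeypi.local\n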
What We're Doing: - We're going to send the driving program (the model) from the Host PC to the Raspberry Pi.
How? - We use a command that packages up the model and sends it over the Wi-Fi to the Raspberry Pi.
The Command: - rsync -rv --progress --partial <file path to model on Host PC> <username>@<hostname>.local:<(path to models/ folder on Pi)>
Breakdown:
rsync is the tool we're using to send the file.
-rv --progress --partial
are options that make rsync show us what's happening and not quit halfway if there's a problem.
<file path to model on Host PC>
is where your model file is on your computer. You have to replace this with the actual path.
<username>@<hostname>.local
is the address of your Raspberry Pi on the network. Replace <username>
with your Raspberry Pi's username and <hostname>
with its hostname.
<(path to models/ folder on Pi)>
tells rsync where on the Raspberry Pi to put the model. You'll replace this part with the actual location.
rsync -rv --progress --partial <file path to model on Host PC> <username>@<hostname>.local:<(path to models/ folder on Pi)>\n
Important Note:
Example Command:
Here's an example of what the command might look like:
rsync -rv --progress --partial /home/arl/mycar/models/jon_model_1.tflite arl@donkeypi.local:~/mycar/models/
In this example, we're sending a model named jon_model_1.tflite
from the Host PC to the Raspberry Pi, which is named donkeypi.
By following these steps, you'll transfer your autopilot model from your Host PC to the Raspberry Pi, getting your model car ready to drive itself!
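If you want to double-check that the transfer worked, you can SSH into the Raspberry Pi and list its models folder; this assumes the default ~/mycar project location used in the example above:
ls -l ~/mycar/models/\n
Your model file (jon_model_1.tflite in the example) should appear there with a recent timestamp and a non-zero size.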
"},{"location":"umn-curriculum/autopilot/#driving-autonomously","title":"Driving Autonomously","text":"Now that the models are on the Raspberry Pi, we are able to use them to autonomously drive the Donkey Car. To do this, we follow steps similar to the collection of the training data. However, we need to supply the path to the model and the type of model to the Python program. Hence we follow these steps:
"},{"location":"umn-curriculum/autopilot/#step-1-connecting-the-devices-to-the-same-wi-fi-network","title":"Step 1: Connecting the Devices to the Same Wi-Fi Network","text":"Why is this important? Just like how your smartphone needs to connect to the internet to send pictures to your friends, the computer (which we'll call the Host PC) needs to be on the same Wi-Fi network as the Raspberry Pi to communicate with it.
How to do it: Check the section titled \"Networking\" in your materials. It will guide you on how to connect both your Raspberry Pi and Host PC to the same Wi-Fi network, similar to how you connect any device to your home internet.
How to check: Open a program called \"Terminal\" on your Host PC. It's a tool where you can type commands to talk to your computer.
What to do: Type the command ping <hostname>.local
, but replace <hostname>
with your Raspberry Pi's unique name. This command is like saying, \"Hey Raspberry Pi, can you hear me?\".
What to expect: If everything is correct, you'll see messages that confirm your Host PC is talking to the Raspberry Pi.
What to do: On your Raspberry Pi, open its Terminal and enter this command: python manage.py drive --js --model <path to your model> --type <model type>
You'll need to replace <path to your model>
with the location of the AI model file you're using (usually found in the models/
folder). Replace <model type>
with the kind of AI model you have, such as \"Linear\" or \"Categorical\".
python manage.py drive --js --model <path to your model> --type <model type>\n
For example: python manage.py drive --model ~/mycar/models/mypilot.tflite --type tflite_linear
How to access: On your Host PC, open a web browser and go to http://<hostname>.local:8887
\u2014 make sure to substitute <hostname>
with your Raspberry Pi's name.
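For example, with the donkeypi hostname used earlier, the address would be http://donkeypi.local:8887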
Choosing the driving mode: You'll see options like \"User\" for manual control, \"Auto Steer\" for the AI to only steer, and \"Full Auto\" for the AI to drive completely on its own. Select one and click \u201cStart\u201d to begin. If you're using a joystick, you can also start by pressing the start button.
Safety first: Before starting, ensure the Donkey Car is on its track with no obstacles around to prevent any accidents.
By following these steps, you'll be able to watch your Donkey Car navigate its surroundings all by itself, thanks to the AI model you've installed on the Raspberry Pi. It's a great way to see AI and machine learning in action!
"},{"location":"umn-curriculum/donkey-car/","title":"Getting Familiar with Donkey Car","text":"Welcome to Donkey Car! The first section of this curriculum will familiarize you with each of the components of the pre-assembled car and how they interact, and prepare you for the minimal hardware setup that may be required.
"},{"location":"umn-curriculum/donkey-car/#donkey-car-parts","title":"Donkey Car Parts","text":"There are four main components to be concerned with.
"},{"location":"umn-curriculum/donkey-car/#the-raspberry-pi","title":"The Raspberry Pi","text":"The Raspberry Pi 4 Model B is mounted to the top of the car, and it acts as the brain of the car. It is responsible for communicating between itself, the controller, and your devices. It also sends driving signals to the PWM board, which we will explore next. It records data through the camera, and it\u2019s where you will eventually upload your trained model for the car to drive itself!
"},{"location":"umn-curriculum/donkey-car/#the-pulse-width-modulation-board-pwm","title":"The Pulse Width Modulation Board (PWM)","text":"The Adafruit PCA9685 is responsible for communication between the Raspberry Pi and the electronic speed controller (ESC) and steering servo. The Pi communicates with the Adafruit board using a protocol called I2C. From there, the Adafruit board sends bursts of square waves with varying duty cycles (called a PWM signal) to the ESC and servo.
"},{"location":"umn-curriculum/donkey-car/#the-motor-and-servo","title":"The Motor and Servo","text":"
The motor and servo are fairly simple components which shouldn't need much setup (remember to flip the \"ON\" switch on the ESC!). The motor spins the wheels at a rate according to the PWM signal it received. The steering servo does the same, and rotates an arm to turn the wheels.
"},{"location":"umn-curriculum/donkey-car/#the-power-system","title":"The Power System","text":"Two separate batteries are powering different items in the car. Since the Raspberry Pi operates at 5 volts, a small portable battery affixed to the top of the car powers it via a USB Type C cable.
The second battery powers the servo and motors, which operate at 7.2 volts. This battery is plugged into the car and placed to the side in a small section designed for it to sit.
IMPORTANT NOTE: Unlike the portable battery, it matters which way you plug this battery in. The red wire should connect to the red wire, and the black wire will connect to the black wire. The wire clips generally restrict you from connecting it the wrong way, but it can still be connected with enough force. This will cause irreversible damage to the battery and potentially damage the motor and servos. This is especially important when plugging in the battery to charge.
"},{"location":"umn-curriculum/donkey-car/#hardware-setup","title":"Hardware Setup","text":"There should be little to no hardware setup, save from connecting a few wires if they are not already. The wires will connect the PWM and the Raspberry PI
"},{"location":"umn-curriculum/donkey-car/#wiring-diagram","title":"Wiring Diagram","text":"Sometimes the cars given to you will be correctly wired and set up for you. If, for some reason, it isn\u2019t, or if you\u2019d like to change it, here's how.
Typically, the motor is connected to PWM Pin 0, or the far left red-white-black pins. The servo is most often connected to PWM Pin 1, one column to the right of Pin 0. These pins can be specified in myconfig.py under:
PWM_THROTTLE_PIN
PWM_STEERING_PIN
By default, these are set to pins 0 and 1, respectively: PCA9685.1:40.0 (throttle) and PCA9685.1:40.1 (steering).
If they are moved to any other pin, simply read the number above the desired column of pins on the PWM board and adjust the value accordingly. For example, if the motor is connected to the farthest-right column of pins, column #15, you would update myconfig.py
the following way:
\"PWM_THROTTLE_PIN\": \"PCA9685.1:40.15\"
You must also connect the GND, SDA, SCL, and V+ lines of the PWM board to the Raspberry Pi.
Figure 1: PCA 9685 16-Channel Servo Driver
Most cars are set for the throttle wire output to be in PWM output 0 and the steering output to be in PWM output 1. If this is not the case, check which PWM outputs the throttle and steering wires actually use on the board and pass the matching pin when calibrating:
donkey calibrate --pwm-pin=PCA9685.1:40.0
(throttle)
donkey calibrate --pwm-pin=PCA9685.1:40.1
(steering)
These pins are also the default in myconfig.py under PWM_STEERING_THROTTLE.
Set USE_JOYSTICK_AS_DEFAULT=True in myconfig.py so you don't need to pass the --js flag every time you start driving.
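To tie these settings together, here is a rough sketch of what the relevant lines in myconfig.py might look like; the key names and pin strings come from the defaults described above, but the exact set of calibration entries (scale, inverted, PWM limits) depends on your DonkeyCar version, so treat this as an illustration rather than a complete configuration:
PWM_STEERING_THROTTLE = {\n    \"PWM_STEERING_PIN\": \"PCA9685.1:40.1\",   # steering servo on PWM output 1\n    \"PWM_THROTTLE_PIN\": \"PCA9685.1:40.0\",   # motor / ESC on PWM output 0\n    # remaining calibration values omitted here\n}\nUSE_JOYSTICK_AS_DEFAULT = True   # start in joystick mode without passing --js\n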
Imagine your donkey car is a small, smart robot car that you can control with a remote. To get it moving and learn how to switch between different driving modes, you'll start with a simple command on your computer. Here's how you do it:
Start the Car: Open your computer's command line tool, type python manage.py drive --js
, and press Enter. This wakes up your donkey car and gets it ready to follow your commands.
python manage.py drive --js\n
Understanding the Controls: Right after you run the command, you'll see instructions pop up on your screen. These tell you which buttons on your controller do what. It's like learning the controls of a new video game. Make sure to jot these down or take a picture with your phone - you'll need to refer back to them!
Switching Driving Modes: Your donkey car has a cool feature - it can drive in different ways! There's a special button (usually the start button) that lets you switch between:
Fully Remote-Controlled Mode: You control everything, just like driving an RC car.
Fully Autonomous Mode: The car drives all by itself, making its own decisions on turning and speed.
We'll focus on these two modes. If your controller doesn't seem to respond, hitting the start button is a good first troubleshooting step.
Now that your car can move around, it's time to teach it how to drive on its own. This is done by collecting data - basically, you drive the car around and it remembers how you did it. Here's how to gather good learning material for your car:
Drive Around: You'll need to drive your car around the track in both directions. Aim for about 10 laps each way. This gives your car a variety of examples to learn from.
It's Okay to Make Mistakes: Try to keep the car within the track lines, but don't worry about staying perfectly centered all the time. In fact, it's good for your car to see how to recover from being off-center. This helps it learn to correct itself and makes it smarter at handling different situations.
Remember, the goal isn't to collect flawless data but to give your car a rich learning experience, full of different scenarios and recoveries. This way, your car becomes more adaptable and can handle the track like a pro, even when things don't go exactly as planned.
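If you are curious how much data you have gathered, you can count the recorded images directly on the Raspberry Pi. This assumes the default ~/mycar/data tub location, so adjust the path if your data lands in a sub-folder (as in the msp-car training logs earlier in these docs):
ls -1 ~/mycar/data/images | wc -l\n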
"},{"location":"umn-curriculum/networking/","title":"Networking","text":"A Raspberry Pi is like a mini-computer that's both affordable and powerful, perfect for learning how to code and for creative projects, such as making a robot car move. To use a Raspberry Pi, it and your main computer need to be able to \"talk\" to each other, which means they have to be on the same digital network.
"},{"location":"umn-curriculum/networking/#how-to-connect-your-raspberry-pi-and-computer","title":"How to Connect Your Raspberry Pi and Computer","text":""},{"location":"umn-curriculum/networking/#option-1-joining-the-same-wi-fi-network","title":"Option 1: Joining the Same Wi-Fi Network","text":"The Basics: The simplest way to link your computer with the Raspberry Pi is by connecting both to the same Wi-Fi network. This can be the Wi-Fi at your home, school, or anywhere else you have access.
Troubleshooting Tips: If you're at school or work, you might not see your Raspberry Pi on the network due to extra security measures. If this happens (known as not being able to \"ping\" the Raspberry Pi), don't worry; just try the next solution.
Creating Your Network: If the first method doesn't work, you can use your smartphone or computer to create a Wi-Fi network through a feature known as a hotspot.
For Windows Users: Look up \"How to set up a mobile hotspot in Windows 10/11\" for detailed instructions.
For Linux Users:
Go to the system menu at the top right.
Click the Wi-Fi symbol, then \"All Networks\".
Click the menu in the top-right and choose \"Turn On Wi-Fi Hotspot\u2026\".
If asked to disconnect from your current Wi-Fi, click \"Turn On\" to proceed.
Pro Tip: If your Raspberry Pi struggles to connect, change the hotspot to use the 2.4GHz band for better device compatibility.
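If you prefer the command line, or the graphical menus are not available, NetworkManager can also create a hotspot directly with nmcli. This is a sketch that assumes your Wi-Fi interface is wlan0 and uses example values for the network name and password:
sudo nmcli device wifi hotspot ifname wlan0 ssid arl-hotspot password racingleague band bg\n
The band bg option keeps the hotspot on the 2.4GHz band mentioned in the tip above.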
For Setup Only: You can connect your computer to the Raspberry Pi with an Ethernet cable, perfect for initial setups. However, Wi-Fi is needed for projects like controlling a robot car.
Steps:
Connect one end of the Ethernet cable to your computer and the other to the Raspberry Pi.
Use a command line or terminal to type ping donkeypi.local
(substitute \"donkeypi.local\" with your Raspberry Pi's actual name).
If possible, enable Internet Connection Sharing (ICS) on Windows or Ubuntu. This will allow you to share your WiFi connection from your Host PC to your Raspberry Pi over ethernet.
Once connected to the same network, you can control your Raspberry Pi from your computer, using either the command line (SSH) or a graphical interface (VNC).
"},{"location":"umn-curriculum/networking/#ssh-secure-shell","title":"SSH (Secure Shell)","text":"What It Is: SSH lets you send commands to your Raspberry Pi from your computer's terminal.
How to Use It:
Make sure both devices are on the same network.
Open a terminal and type ssh <username>@<hostname>.local
(replace with your details, like ssh donkey@donkeypi.local
).
What It Is: VNC allows you to view and interact with your Raspberry Pi's desktop from your computer, making graphical tasks easier.
Setting It Up:
Download and install RealVNC viewer from their official website, selecting the version for your operating system.
When installing, you can skip the sign-in step.
Open VNC Viewer, create a new connection and type your Raspberry Pi's IP address or hostname. Use the password you set on your Raspberry Pi if prompted.
Remember: The hostname is a unique name you give your Raspberry Pi during setup. It makes finding and connecting to your Raspberry Pi easier on a network.
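If VNC Viewer asks for an IP address and you only know the hostname, you can print the Raspberry Pi's IP addresses by running the following on the Pi itself (over SSH or with a keyboard attached):
hostname -I\n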
By following these simplified steps, you're now ready to embark on exciting projects with your Raspberry Pi, from programming to building and controlling your own robot car!
"},{"location":"umn-curriculum/networking/#changing-wifi-settings","title":"Changing WiFi Settings","text":"There are two main ways to adjust the network settings: through a graphical interface (like the desktop you're accustomed to) or the command line. We'll focus on the command line for its simplicity and ease of use remotely via SSH (Secure Shell), which lets you control your Pi from another computer.
"},{"location":"umn-curriculum/networking/#basic-networking-commands","title":"Basic Networking Commands","text":"Here's a straightforward guide to some basic networking commands. Remember, the command line takes what you type literally, so precision is key.
Check NetworkManager Status
The command systemctl status NetworkManager
is essentially asking, \"Is the NetworkManager service active?\" NetworkManager is crucial as it handles all your network connections.
systemctl status NetworkManager\n
Start NetworkManager
Using sudo systemctl start NetworkManager
starts NetworkManager if it's not already running. The sudo signifies you're requesting to perform an action that requires administrator rights.
sudo systemctl start NetworkManager\n
Restart NetworkManager
Sometimes, network connections can be finicky. Issuing sudo systemctl restart NetworkManager
can help by resetting your network connections, akin to toggling WiFi on your smartphone.
sudo systemctl restart NetworkManager\n
List Available WiFi Networks
The command nmcli dev wifi
prompts your device to list all WiFi networks within range, useful for spotting available connections.
nmcli dev wifi\n
Connect to a WiFi Network
With sudo nmcli dev wifi connect \"SSID\" password \"PASSWORD\"
, you can connect to a specific WiFi by replacing \"SSID\" with the network's name and \"PASSWORD\" with the network's password. Keep the quotation marks if your WiFi's name or password includes spaces or special characters.
sudo nmcli dev wifi connect \"SSID\" password \"PASSWORD\"\n
Check Your Connection Status
nmcli device status
lets you verify your network connection's status by listing each interface and its connection state, ensuring everything is functioning as intended.
nmcli device status\n
A Raspberry Pi 4 or 5 device.
A high-speed Micro SD card with a capacity of 64GB or more.
Speed Class: Class U3 Recommended.
Sandisk Extreme/Extreme Pro/Extreme Plus
Samsung Evo Plus/Evo Select
Kingston Canvas Select Plus/Canvas Go! Plus
Capacity: A minimum capacity of 64GB is preferred. 128GB is ideal
A computer with an SD card reader or SD card to USB adapter.
Download the Raspberry Pi Imager: This software helps you set up the Raspberry Pi OS onto your SD card. You can find it by searching \"Raspberry Pi Imager\" online. Follow the link from the official Raspberry Pi website to download and install it.
Prepare Your SD Card: Insert your Micro SD card into your computer's SD card reader slot.
Launch the Raspberry Pi Imager: Open the application you just installed.
Edit Settings Before Installation:
Click \"Next,\" then \"Edit Settings.\"
Hostname: This is your Raspberry Pi's name on a network, helping you identify it. Create a unique name by adding your initials or a group name to \"donkeycar-\" (e.g., \"donkeycar-JD\"). Write this name down as you'll need it later.
Username and Password: Create a username and password you'll remember. Example: Username: donkeycar, Password: raspberry. Note these down.
Wireless LAN: Enter your WiFi's name (SSID) and password here.
Local Settings: Your time zone and keyboard layout are usually set automatically, but double-check to make sure they're correct.
SSH: Enable SSH and select \"Use password authentication.\" SSH allows you to connect to your Raspberry Pi remotely.
Insert the Micro SD Card into Your Raspberry Pi:
Look for the Micro SD card slot on the Raspberry Pi. It's usually on the underside of the board.
Gently insert the Micro SD card into the slot. Make sure the metal contacts on the card face downwards and towards the Raspberry Pi. You should feel a slight click when it's in place.
Powering On Your Raspberry Pi:
Grab a USB-C cable. This is the Raspberry Pi's power cord, similar to a charger for many modern smartphones.
Connect one end of the USB-C cable to your Raspberry Pi and the other end to a power source (like a USB plug adapter or a powered USB port on a computer).
Once connected, your Raspberry Pi will start to boot up. You won't need to press any buttons to turn it on; just giving it power is enough!
To interact with your Raspberry Pi, you have two main options: connecting through SSH (Secure Shell) for remote access or hooking it up directly with a monitor, keyboard, and mouse. Here's how to do both:
"},{"location":"umn-curriculum/raspberry-pi/#option-1-direct-connection","title":"Option 1: Direct Connection","text":"What is SSH?: SSH stands for Secure Shell. It's a secure method to remotely access another computer's command line interface. This is useful if you can't or don't want to connect a monitor and peripherals directly to the Pi.
Setting Up SSH:
Enable SSH on Your Raspberry Pi: This can usually be done from the Raspberry Pi's configuration settings. You might need to do this with a monitor and keyboard attached if you're setting up the Pi for the first time.
Find Your Pi's Address: You need to know your Raspberry Pi's IP address or hostname to connect to it. If you're on the same network, the hostname is often raspberrypi.local, unless you've changed it.
Connect from Your Computer:
Open the Terminal application on your computer. This is Command Prompt on Windows or Terminal on macOS and Linux.
Type the SSH command: ssh username@hostname.local
, replacing username with your Raspberry Pi's username (the default is pi) and hostname with your Pi's actual hostname or IP address. For example, if your username is pi and your hostname hasn't been changed, you'd type ssh pi@raspberrypi.local
.
SSH Command
ssh username@hostname.local\n
Press Enter. The first time you connect, you'll be asked if you trust the connection. Type yes, then enter your Raspberry Pi's password when prompted.
In this step, you're going to make sure your Raspberry Pi has the latest software. This is a bit like ensuring your smartphone installs the latest updates to run smoothly and securely. First, you'll need to connect your Raspberry Pi to a monitor and keyboard (or access it remotely if you know how). Once that's done, you'll use something called the command line, which is a way to interact with your Raspberry Pi by typing commands. Here\u2019s how to get your Raspberry Pi updated:
Open the Terminal: This is where you'll type your commands. It's a window where you can tell your Raspberry Pi what to do without using the mouse.
Type the Update Commands: You will need to type two commands into the terminal. Here's what they are and what they do:
sudo apt update
: This command checks for updates. It's like asking, \"Is there anything new that I should know about?\" It doesn't install the updates; it just looks for them and prepares a list.
Update Version History
sudo apt update\n
sudo apt upgrade
: After checking for updates, this command installs them. It's like saying, \"Okay, now that we know what's new, let's get it installed.\"
Update Packages
sudo apt upgrade\n
Use the command sudo raspi-config
to open the configuration utility.
sudo raspi-config\n
System Options: In the System Options menu, you'll find settings that affect the whole system. Here are a few key ones you might want to adjust:
Change Password: It's a good idea to change the default password to keep your Raspberry Pi secure.
Hostname: This is your Raspberry Pi's name on a network. You can change it to something unique to easily identify your device.
WiFi Network: Here, you can switch the WiFi network your Raspberry Pi connects to. This is useful if you're moving your device to a different location or if you need to connect it to a different network.
Interface Options:
Enable VNC: Turning this on allows you to control your Raspberry Pi from another computer as if you were sitting right in front of it. It's like having a remote desktop.
Enable I2C: This option turns on the ability for your Raspberry Pi to talk to certain other hardware pieces, like a PWM (Pulse Width Modulation) chip. PWM chips are used to control motors and servos, letting your projects move.
Advanced Options:
Reboot your Raspberry Pi as prompted by typing sudo reboot
into the terminal. This makes sure all your settings take effect.
Reboot Raspberry Pi
sudo reboot\n
In this step, we'll create a special workspace on your computer where you can write and run Python code. This workspace is known as a Virtual Environment. Think of it as your personal lab where you can experiment with Python projects without affecting the rest of your computer's setup.
"},{"location":"umn-curriculum/raspberry-pi/#create-a-virtual-environment","title":"Create a Virtual Environment","text":"A Virtual Environment is like a sandbox for your project. It keeps all the tools and materials (in our case, code and libraries) you need separate from other projects. This way, if one project needs a specific tool version, it won't clash with another project's needs.Here\u2019s how you set it up:
Open your Terminal or Command Prompt: This is where you type commands to talk directly to your computer.
Create the Virtual Environment: Type the following command and press Enter:
Create the environment.
Create Virtual Environment
python3 -m venv donkey --system-site-packages\n
You can replace donkey with any name you like, but using donkey here makes it easier to follow along with later parts of this course that use a system called Conda on Linux computers.
The --system-site-packages
option allows this environment to access some Python packages already installed on your computer globally, saving space and time.
Activate Your Environment: To start using your newly created environment, you need to activate it. Type the commands below:
Activate Virtual Environment
echo \"source ~/donkey/bin/activate\" >> ~/.bashrc\nsource ~/.bashrc\n
This tells your Terminal to always use your donkey environment by default, making things simpler for future projects.
Before you can start building projects, you need some tools and materials, known in Python as libraries.
Install System Libraries: Some Python libraries depend on non-Python code. To prepare for this, install the required system library by typing:
Install Dependencies
sudo apt install libcap-dev\n
Note: This command might ask for your password. It's the same one you use to log into your computer.
Install Python Packages: Now, let's get the specific Python packages you'll need.
For most users, type:
Install Donkey Car Software
pip install donkeycar[pi]\n
For developers, there's a more detailed setup. Please see Appendix 2 in our course materials.
Let's make sure everything is set up correctly by checking if a key library, TensorFlow, is installed. TensorFlow is a powerful tool for machine learning projects. Type the following command and press Enter:
Check Tensorflow Installation
python -c \"import tensorflow; print(tensorflow.__version__)\"\n
Note: If you see a version number appear, congratulations! You're all set up and ready to dive into Python and machine learning projects.
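If you would like a slightly fuller sanity check, the short script below (run inside your activated donkey environment) prints both the TensorFlow and Donkey Car versions. It assumes the donkeycar package exposes a __version__ attribute, which recent releases do; if the import fails, revisit the pip install step above.
# check_setup.py -- a quick, optional sanity check of the installation
import tensorflow as tf
import donkeycar as dk

print("TensorFlow version:", tf.__version__)
print("Donkey Car version:", dk.__version__)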
"},{"location":"umn-curriculum/setup/","title":"Setting Up and Calibrating Your Donkey Car","text":""},{"location":"umn-curriculum/setup/#step-1-connecting-to-your-car","title":"Step 1: Connecting to Your Car","text":"Objective: Our first task is to create a digital conduit through which your computer will communicate with your miniature autonomous vehicle, facilitating commands and data exchange.
"},{"location":"umn-curriculum/setup/#procedure","title":"Procedure","text":"Secure Connection Establishment: Utilize SSH (Secure Shell) to establish a secure command line interface with your vehicle's computing unit, the Raspberry Pi. Detailed instructions for this process can be found under the \"Networking\" section.
Creating the Project Directory:
Execute the command: donkey createcar --path ~/mycar
Create Donkey Car Project Directory
donkey createcar --path ~/mycar\n
This action constructs a dedicated project directory for your vehicle, organizing essential files and configurations necessary for its operation.
Upon completion, the project directory will comprise several critical components:
mycar/
: The root directory.
data/
: Contains captured imagery and metadata from driving sessions, acting as the vehicle's experiential memory.
models/
: Stores machine learning models that enable autonomous navigation, essentially forming the vehicle's \"cognitive\" abilities.
logs/
: A repository for operational logs, useful for troubleshooting and performance analysis.
config.py
& myconfig.py
: Configuration files for project defaults and personalized settings, respectively. While config.py remains standard, myconfig.py is where specific adjustments and preferences are defined.
calibrate.py
, manage.py
, train.py
: Specialized scripts for calibration, management, and training processes, though they are not directly interacted with at this stage.
Customization Note: While mycar is the suggested name for simplicity, you may choose any directory name that suits your project's identity.
Power Requirements: Ensure that the vehicle's Raspberry Pi is adequately powered via a portable battery and that the car itself is powered by an appropriate RC battery. Verify that any switches on the electronic speed controller (ESC) or servo are in the 'on' position to enable operation.
"},{"location":"umn-curriculum/setup/#step-2-preparing-for-calibration","title":"Step 2: Preparing for Calibration","text":"Calibration is a fancy term for making sure your car understands how to move correctly when you tell it to. Before we start, place your car on a box or a similar object so the wheels are in the air. This way, it won't drive away during calibration.
About Your Car's Controls:
Your car uses something called a PCA 9685 Servo Driver to control its wheels. Imagine it as the car's steering wheel and gas pedal. Normally, the \"gas pedal\" is connected to output 0, and the \"steering wheel\" to output 1. If yours is different, you'll need to adjust the instructions accordingly.
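To make the upcoming PWM numbers less mysterious, here is a minimal Python sketch of the arithmetic involved. It assumes the common 60 Hz servo frequency and the PCA9685's 12-bit (0-4095) counter; the counts in the example are illustrative, not the values for your specific car.
def pwm_count_to_pulse_ms(count, frequency_hz=60.0):
    """Convert a PCA9685 on-count (0-4095) into a pulse width in milliseconds."""
    period_ms = 1000.0 / frequency_hz      # length of one full PWM cycle
    return (count / 4096.0) * period_ms    # fraction of the cycle the signal is high

# Hobby servos and ESCs typically expect pulses of roughly 1.0-2.0 ms,
# with about 1.5 ms meaning "centered" or "stopped".
for label, count in [("low end", 250), ("near neutral", 370), ("high end", 490)]:
    print(f"{label}: count {count} -> {pwm_count_to_pulse_ms(count):.2f} ms")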
"},{"location":"umn-curriculum/setup/#step-3-connecting-a-joystick-controller-to-the-donkey-car","title":"Step 3: Connecting A Joystick Controller to the Donkey Car","text":"Why use a joystick? If you've ever played video games, you know how intuitive a joystick can be for controlling movement. We can apply this same intuitive control to our Donkey Car project, allowing for more precise and responsive control compared to using a web interface.
Choosing the right controller: Not all controllers are created equal, and the steps to connect them can vary. However, most controllers that connect via a USB adapter will follow a similar setup process. We'll be using the Logitech F710 as our example, but if you have a different controller, check the Donkey Car documentation for specific instructions.
"},{"location":"umn-curriculum/setup/#getting-started-with-the-logitech-f710-controller","title":"Getting Started with the Logitech F710 Controller","text":"Prepare the Controller:
Ensure your Logitech F710 controller has fresh/charged batteries installed.
If it comes with a USB adapter (a small plug that connects to your computer or Raspberry Pi), make sure it's on hand.
Connect to the Raspberry Pi:
Configure the Settings:
Locate the file named myconfig.py
on your Raspberry Pi. This file contains various settings for your Donkey Car.
Find the section labeled \"Joystick\" and make the following changes (shown together in the example after this list):
Uncomment (remove the # symbol from the start of) the line USE_JOYSTICK_AS_DEFAULT = True
. This tells the system to use the joystick as the primary control method.
Change CONTROLLER_TYPE
to F710
to specify which controller you're using.
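For reference, after both edits the relevant lines in myconfig.py should look roughly like this (the surrounding comments and exact placement vary between Donkey Car versions):
# myconfig.py -- joystick settings, uncommented and edited
USE_JOYSTICK_AS_DEFAULT = True   # use the gamepad instead of the web interface by default
CONTROLLER_TYPE = 'F710'         # the Logitech controller used in this guide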
Pairing the Controller (If Needed):
Some users have reported issues when first trying to pair their F710 controller with the USB dongle. If this happens, you'll need to use Logitech's Pairing Utility to fix the connection. Here's how:
Attach the USB dongle to your computer (not the Raspberry Pi for now).
Download and open Logitech's Pairing Utility software.
Follow the on-screen instructions to pair the dongle with the controller.
Once paired successfully, reconnect the dongle to the Raspberry Pi.
In this step, we're going to learn how to control the car's speed and direction. This might sound complex, but it's really about telling the car how fast to go and where to turn. We'll use some special commands and adjustments to do this, like tuning a musical instrument to get the right notes.
"},{"location":"umn-curriculum/setup/#speed-control-making-your-car-move","title":"Speed Control: Making Your Car Move","text":"Starting the Calibration Process:
First, we need to start the calibration for your car's speed. On your computer, type the command donkey calibrate --pwm-pin=PCA9685.1:40.0
. This command wakes up your car's speed control system and gets it ready for tuning.
donkey calibrate --pwm-pin=PCA9685.1:40.0\n
Finding the Neutral Speed:
Testing Movement:
Fine-Tuning Speed:
Saving Your Settings:
Open ~/mycar/myconfig.py
on your computer. Look for the section PWM_STEERING_THROTTLE
and add your numbers for moving forward, staying still, and reversing. You'll remove the # symbol from in front of these lines to make sure the car uses these new settings.
Starting Steering Calibration:
Similar to speed, we begin calibrating steering by typing donkey calibrate --pwm-pin=PCA9685.1:40.1
. This gets your car's steering system ready for tuning.
donkey calibrate --pwm-pin=PCA9685.1:40.1\n
Adjusting Steering:
Saving Steering Settings:
Update your ~/mycar/myconfig.py
file with your steering adjustments. You'll specify values for turning left and right. Remember to remove the # symbol from these lines to activate them.
Through these steps, you're not just learning how to make a car move and turn. You're learning about calibration, which is a key concept in many areas of technology and engineering. It's all about making small adjustments to get things working just right, whether it's a toy car or a real-life project in computer science or robotics.
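As a concrete illustration of those saved settings, here is a sketch of a PWM_STEERING_THROTTLE block in ~/mycar/myconfig.py. The key names follow the Donkey Car documentation but can differ between versions, and every number below is a placeholder to be replaced with the values you found during calibration.
# Illustrative only -- replace the numbers with your own calibration results.
PWM_STEERING_THROTTLE = {
    "PWM_STEERING_PIN": "PCA9685.1:40.1",  # steering servo on channel 1
    "PWM_THROTTLE_PIN": "PCA9685.1:40.0",  # ESC (throttle) on channel 0
    "STEERING_LEFT_PWM": 460,              # full left
    "STEERING_RIGHT_PWM": 290,             # full right
    "THROTTLE_FORWARD_PWM": 500,           # full forward
    "THROTTLE_STOPPED_PWM": 370,           # neutral / stopped
    "THROTTLE_REVERSE_PWM": 220,           # full reverse
}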
"},{"location":"umn-curriculum/setup/#step-5-final-adjustments","title":"Step 5: Final Adjustments","text":"Now, it's time to see how well your car drives:
Place the Donkey Car on level ground and make sure it's got space to move.
Run python manage.py drive --js
and open http://<hostname>.local:8887
in a browser when the terminal prompts you, so you can see through your car's camera.
python manage.py drive --js\n
Start by pressing up on the joystick to move forward. If the car veers to one side, adjust the PWM values as instructed, then test again until it drives straight.
Ensure USE_JOYSTICK_AS_DEFAULT
is uncommented and set to True
in your myconfig.py
file for easier control in the future.
And that's it! You've now calibrated your car. Remember, this is a process of trial and error, so take your time and enjoy the learning experience.
"},{"location":"umn-curriculum/training/","title":"Training","text":""},{"location":"umn-curriculum/training/#installing-donkey-car-software-on-the-host-pc","title":"Installing Donkey Car Software on the Host PC","text":"In this module, we'll guide you through the process of setting up your computer to work with Donkey Car, a powerful suite of tools that let you dive into AI, computer science, and machine learning.
"},{"location":"umn-curriculum/training/#option-1-windows","title":"Option 1: Windows","text":"We'll start by preparing your Windows computer to run Linux, a preferred operating system for many developers and AI enthusiasts. This is because Linux offers great support for programming and AI tools, including Donkey Car.
Enable Linux on Windows: We'll use something called the Windows Subsystem for Linux (WSL). This lets your Windows machine run Linux applications without needing a separate computer. Follow the instructions provided by Microsoft to install WSL. Just a heads-up, the graphical interface (GUI) of Donkey Car might not work perfectly with WSL, so we suggest using Ubuntu Linux directly if you can.
Check Your Graphics Card Drivers: If you're using an Nvidia graphics card, ensure you have the latest drivers installed. Drivers are software that lets your computer's hardware work properly. You can find and install the latest drivers from Nvidia's website.
Ubuntu is a popular, user-friendly version of Linux. Installing it will let you run the Donkey Car software.
Get Ubuntu: Go to the Microsoft Store on your Windows computer and download \"Ubuntu 22.04.3 LTS\". LTS stands for Long Term Support, which means it gets updates for 5 years, making it a stable choice for your projects.
Set Up Ubuntu: Once Ubuntu is installed, launch the app. The first time you open it, it'll ask you to set up a few things like creating a username and a password. Remember these details as you'll need them later!
Update Ubuntu: Before we install the Donkey Car software, let's make sure your Ubuntu system is up-to-date. Open the Ubuntu terminal (a place where you type commands) and enter the following:
To update your list of available software: sudo apt update
sudo apt update\n
To upgrade the software on your system: sudo apt upgrade
sudo apt upgrade\n
Install Necessary Software: Now, we'll install some programs that Donkey Car needs to run smoothly. Type these commands into the Ubuntu terminal:
For Python (the programming language we'll use): sudo apt install python3-pip
sudo apt install python3-pip\n
For other required tools: sudo apt install libmtdev1 libgl1 xclip
sudo apt install libmtdev1 libgl1 xclip\n
To fix a common error with the graphical interface, add a line to a configuration file: echo 'LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libstdc++.so.6' >> ~/.bashrc
echo 'LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libstdc++.so.6' >> ~/.bashrc\n
After you've completed these steps, you're ready to install the Donkey Car software itself. For that, please follow the instructions provided in the Ubuntu installation section of our curriculum.
"},{"location":"umn-curriculum/training/#option-2-ubuntu-linux","title":"Option 2: Ubuntu Linux","text":""},{"location":"umn-curriculum/training/#step-1-setting-up-your-computer-with-ubuntu","title":"Step 1: Setting Up Your Computer with Ubuntu","text":"What is Ubuntu? Ubuntu is a popular, user-friendly version of Linux, great for beginners and pros alike.
How to Install Ubuntu: Follow the instructions on the official Ubuntu website. After installation, make sure your Ubuntu system is running smoothly.
Why Update? Updating ensures your system has the latest features and security improvements.
How to Update:
Open the Terminal application in Ubuntu.
Type sudo apt update
and press Enter to check for updates.
sudo apt update\n
Then, type sudo apt upgrade
and press Enter to apply the updates. Confirm if asked.
sudo apt upgrade\n
What is MiniConda? MiniConda is a smaller version of Anaconda, a toolkit for scientific programming. It helps manage different versions of Python and packages.
Installation Steps:
Create a folder for MiniConda by typing \"mkdir -p ~/miniconda3\" in the Terminal.
Download MiniConda for Python 3.11 by running: wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
Install MiniConda with: bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
Clean up the installer by running rm -rf ~/miniconda3/miniconda.sh
.
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh\nbash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3\nrm -rf ~/miniconda3/miniconda.sh\n
What\u2019s a Conda Environment? It's like a separate workspace for each project, with its own set of Python and packages, avoiding conflicts.
Creating Your Environment:
Choose a name for your environment; we'll use \"donkey\" here.
Create it with conda create -n donkey python=3.11
.
Activate your new environment by typing conda activate donkey
.
conda create -n donkey python=3.11\nconda activate donkey\n
About Donkey Car Software: This software suite allows you to control and customize your Donkey Car, a DIY autonomous driving project.
Installation:
For a standard user installation, type pip install donkeycar[pc]
in the Terminal.
pip install donkeycar[pc]\n
Updating Existing Installation:
If you've installed it before and need an update:
Update Conda with conda update -n base -c defaults conda
.
Remove the old environment with conda env remove -n donkey
.
Reinstall the Donkey Car software as described above.
conda update -n base -c defaults conda\nconda env remove -n donkey\n
Choosing Between PyTorch and TensorFlow: If you prefer using PyTorch over TensorFlow for your projects:
Install the CUDA toolkit that PyTorch relies on with conda install cudatoolkit=11 -c pytorch
.
conda install cudatoolkit=11 -c pytorch\n
Note: For those interested in more advanced setups or contributing to the Donkey Car software, see Appendix 2 for developer installation instructions.
"},{"location":"umn-curriculum/training/#preparing-for-training-gathering-data-from-your-donkey-car","title":"Preparing for Training: Gathering Data from Your Donkey Car","text":"To train your artificial intelligence (AI) model, we need specific files from your Donkey Car\u2014a miniature car powered by a Raspberry Pi (a small, affordable computer). These files include images (what the car sees), metadata (information about the images), and configuration settings. We'll transfer these files from the Donkey Car to a larger computer (we'll call this the \"Host PC\") where the training will happen.
"},{"location":"umn-curriculum/training/#step-1-ensure-both-devices-can-communicate","title":"Step 1: Ensure Both Devices Can Communicate","text":"Connect both the Donkey Car and the Host PC to the same Wi-Fi network. This could be your home network or any mobile hotspot. This step is crucial for the two devices to 'talk' to each other.
On your Host PC, open a program called 'Terminal'. Think of the Terminal as a way to talk to your computer using text commands instead of clicking on icons.
Type the command ping <hostname>.local
and press Enter.
Replace <hostname>
with the name of your Raspberry Pi. This command checks if your Host PC can communicate with the Donkey Car.
If you see replies coming back, congrats! Your PC and the Donkey Car are on speaking terms.
In the Terminal, type mkdir <dest_dir>
and press Enter.
Replace <dest_dir>
with any name you like for your folder.
Now, type the command rsync -rv --progress --partial <username>@<hostname>.local:<car_dir> <dest_dir>
and press Enter.
Here, you'll replace <username>
with the username of your Raspberry Pi, <hostname>
with its hostname, and <car_dir>
with the path to the Donkey Car project folder (usually it's ~/mycar
).
This command may look complex, but it's essentially instructing your computer to carefully copy all the necessary files from the Donkey Car into the folder you just created.
rsync -rv --progress --partial <username>@<hostname>.local:<car_dir> <dest_dir>\n
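For example, using the donkey/donkeypi names from the SSH example earlier and a destination folder called mycar-data (both just illustrative), the filled-in command would look like this:
rsync -rv --progress --partial donkey@donkeypi.local:~/mycar ~/mycar-data\n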
To ensure our AI models learn effectively and quickly, preparing our training data is crucial. Properly prepared data improves the model's learning efficiency and its eventual performance and accuracy. Preparing the data involves removing irrelevant or problematic elements such as images and metadata that don't contribute to the learning objective, are captured during non-active moments (like breaks), or are simply errors.
Methodologies for Data Preparation: There are two main ways to prepare your data:
"},{"location":"umn-curriculum/training/#option-1-graphic-interface-gui","title":"Option 1: Graphic Interface (GUI)","text":"For those preferring a visual approach, utilizing the GUI is recommended. This method is particularly effective when working directly on a Ubuntu system. Working on Windows via the Windows Subsystem for Linux (WSL) might not offer the same smooth experience.
For individuals comfortable with command line operations, the Donkey Car Software provides tools for data cleaning through a command line and web interface process. Here's how to do it:
Step 1: Open the terminal on your computer and execute the command to clean your data. Type: donkey tubclean <Donkey Car folder>
followed by the path to your Donkey Car project folder. Remember to replace <Donkey Car folder>
with the actual path to your project folder.
donkey tubclean <Donkey Car folder>\n
Step 2: This command sets up a local web server accessible through your browser. Open your preferred web browser and visit http://localhost:8886
.
Step 3: Navigate to the data folder by clicking on data/
. You'll see an interface where you can review your images.
Step 4: In this interface, you can play through your images, organize them into different categories or tracks, and delete any that aren't needed.
Step 5: After you've selected and deleted unnecessary images, return to the terminal and press Ctrl+C to stop the web server and conclude the data cleaning process.
Remember, taking the time to prepare your data properly can significantly impact the success of your AI project. Choose the method that best suits your setup and preferences.
"},{"location":"umn-curriculum/training/#training-the-model","title":"Training the Model","text":"After preparing our data by cleaning it, we're ready to take the next exciting step: training our artificial intelligence (AI) model. This process teaches the AI how to make decisions based on the data we've given it. We can use two methods to train our model, depending on what's more comfortable for you: using a graphical user interface (GUI) or typing commands into the command line.
"},{"location":"umn-curriculum/training/#option-1-using-the-gui-graphical-user-interface","title":"Option 1: Using the GUI (Graphical User Interface)","text":"The GUI is a visual tool that lets you interact with the computer more intuitively. Here\u2019s how you can use it to train your model:
Start the Tool:
Prepare Your Data:
In the tool, find and click on the \u2018Tub Manager\u2019 menu.
Choose \u201cLoad car directory\u201d and navigate to your car\u2019s data directory.
Click \u201cLoad Tub\u201d and select the \u201cdata\u201d folder within that directory.
Train Your Model:
Go to the \u201cTrainer\u201d tab.
Click on \u201cTrain\u201d and wait patiently for the training to complete.
Customizing Your Training:
If you wish to change the type of AI model you're training, you can select a different model type from the \u201cModel Type\u201d dropdown menu.
To tweak the training process, such as how fast the AI learns, modify the \u201cTraining\u201d section in the myconfig.py file.
You can label your model with a comment for easy identification later on.
If you're more comfortable with typing commands, this method is for you:
Open the Terminal:
Type the Training Command:
Enter the following command, replacing placeholders with your specific information:
donkey train --tub=<path_to_your_data> [--config=<configuration_file>] [--model=<path_to_save_model>] [--type=<model_type>] [--transfer=<path_to_pretrained_model>]
--tub
specifies the location of your data. You can list multiple locations by separating them with commas or spaces.
--config
(optional) allows you to use a specific configuration file.
--model
(optional) sets where to save the trained model. If you don't specify a name, one will be automatically generated.
--type
lets you choose the type of model to train.
--transfer
(optional) continues training an already partially trained model.
You can also filter which data records to use for training by adjusting settings in the myconfig.py
file.
donkey train --tub=<path_to_your_data> [--config=<configuration_file>] [--model=<path_to_save_model>] [--type=<model_type>] [--transfer=<path_to_pretrained_model>]\n
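As a concrete illustration (the paths and model name are hypothetical; substitute your own), a typical invocation could look like this:
donkey train --tub=~/mycar-data/data --model=~/mycar-data/models/mypilot.h5 --type=linear\n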
Completion:
After training your model, you're ready to upload it to the Donkey Car for autonomous driving tests. This step will allow you to see how well your AI has learned to make decisions on its own. Remember, the goal here is to teach the AI to navigate a course independently, improving over time as it learns from more data.
"},{"location":"umn-curriculum/appendix/developer-install/","title":"Developer Install","text":"For those interested in contributing to the Donkey Car project or needing a development setup:
git clone https://github.com/autorope/donkeycar.git
git checkout dev
for the latest features.
Run pip install -e .[dev]
to install all necessary dependencies for development.
Configuring your training environment and parameters can significantly impact the performance of your AI model: choose a model type such as linear, categorical, or rnn, and adjust the settings in your myconfig.py file to optimize training.
Proper data preparation is essential for training a reliable model. Data tubs contain the collected driving data, but not all data points are useful for training. Here's how to trim data tubs to enhance model training: run donkey tubclean --tub <path_to_tub>
to launch a web interface for easier data selection and trimming.
Encountering and resolving common issues during setup, training, or driving:
Visual Studio Code is a free text editor with useful features and extensions for nearly any kind of software development. One such extension is Remote - SSH.
As mentioned on the Networking page, SSH allows you to access your Raspberry Pi's command line remotely (from your own laptop/PC). The Remote - SSH extension pushes this further by allowing you to open folders, edit files, use the integrated terminal, and take full advantage of VS Code.
"},{"location":"umn-curriculum/appendix/vsc-remote/#installation","title":"Installation","text":"ssh <username>@<hostname>.local\n
Welcome to the fascinating world of Artificial Intelligence (AI)! AI has become a buzzword across various industries, impacting everything from the way we shop to how we interact with technology daily. But what exactly is AI? Let's dive in and explore the realm of artificial intelligence together.
"},{"location":"umn-curriculum/background/foundations-of-ai/#what-is-artificial-intelligence","title":"What is Artificial Intelligence?","text":"Artificial Intelligence, or AI, refers to the capability of a machine or computer program to think, learn, and make decisions similarly to a human. It's a broad field of computer science dedicated to creating systems capable of performing tasks that typically require human intelligence. These tasks include problem-solving, recognizing speech, understanding languages, and more.
"},{"location":"umn-curriculum/background/foundations-of-ai/#history-of-ai","title":"History of AI","text":"The concept of AI is not new. It dates back to ancient myths and stories of artificial beings endowed with intelligence or consciousness by master craftsmen. However, the academic field of AI was only born in the mid-20th century, during a workshop at Dartmouth College in 1956. This workshop is often considered the founding event of AI as a field. Since then, AI has evolved from simple algorithms to complex machine learning and neural networks, becoming an integral part of the modern world.
"},{"location":"umn-curriculum/background/foundations-of-ai/#applications-of-ai-in-daily-life","title":"Applications of AI in Daily Life","text":"AI is no longer just a futuristic concept; it's a part of our everyday lives. Here are a few examples of how AI impacts our daily activities:
Personal Assistants: Siri, Alexa, and Google Assistant use AI to understand natural language and perform tasks for us.
Navigation: GPS and mapping services like Google Maps use AI to analyze the best routes and traffic conditions.
Recommendation Systems: Netflix, Spotify, and Amazon use AI to recommend movies, music, and products based on our previous choices.
Healthcare: AI is used in predictive analysis for patient care, medical imaging, and drug discovery.
At its core, AI works by processing large amounts of data and learning from it, much like a human learns from experience. Through various algorithms and models, AI systems can identify patterns, make decisions, and improve over time. The ability to learn and adapt makes AI incredibly powerful and versatile.
"},{"location":"umn-curriculum/background/introduction-to-linux/","title":"Introduction to Linux","text":"Linux is an open-source, Unix-like operating system that powers a significant portion of the internet, including most web servers and is the foundation for many other devices and technologies, including Android smartphones. It's known for its stability, security, and flexibility, making it a preferred choice for developers and system administrators. Linux comes in various distributions (distros), such as Ubuntu, Fedora, and Debian, each tailored for different needs and preferences.
"},{"location":"umn-curriculum/background/introduction-to-linux/#why-linux-for-education","title":"Why Linux for Education?","text":"Accessibility: Linux can be installed on a wide range of hardware, from old desktops to the most modern laptops, making it accessible to students without the need for high-end equipment.
Cost-Effective: Being open-source, it's free to use, which significantly reduces the costs associated with software licensing.
Learning Environment: It offers a command-line interface (CLI) that encourages the understanding of the fundamentals of operating systems, file systems, and programming.
Community Support: There's a vast community of Linux users and developers ready to offer assistance, making it an excellent resource for learners.
Affordability: Raspberry Pi devices are very affordable, making them accessible for classroom settings and individual students.
Versatility: Raspberry Pi can be used for a wide range of projects, from simple programming exercises to complex hardware/software integration projects.
Educational Resources: The Raspberry Pi Foundation provides extensive educational materials and a supportive community for educators and learners.
Practical Learning: Through projects that involve both software and hardware, students gain hands-on experience that is invaluable for understanding key concepts in computer science and electronics.
pwd
: Print Working Directory. Shows the path of the current directory you are in.
ls
: Lists files and directories in the current directory. Can be combined with flags like -l for detailed listing and -a to show hidden files.
cd
: Change Directory. Used to move between directories.
mkdir
: Make Directory. Creates a new directory.
rmdir
: Remove Directory. Deletes an empty directory.
rm
: Remove. Deletes files or directories. Use -r for recursive deletion.
touch
: Creates a new empty file or updates the timestamp on an existing file.
cat
: Concatenate and display files.
echo
: Displays a line of text/string that is passed as an argument.
grep
: Global Regular Expression Print. Searches for patterns within files.
find
: Searches for files and directories based on conditions provided.
sudo
: SuperUser DO. Executes commands with the security privileges of the superuser (root).
chmod
: Change Mode. Modifies the file permissions.
chown
: Change Owner. Changes the owner and group of a file or directory.
wget
: Retrieves content from web servers.
curl
: Transfers data from or to a server, similar to wget
but with more capabilities.
tar
: Tape Archive. Used to create, maintain, modify, and extract files that are archived in the tar format.
zip/unzip
: Compresses files into a zip format and extracts them.
Variables: Storing and using values.
Control Structures: if statements, loops (for, while).
Input/Output: Reading from and writing to files, reading user input.
Functions: Writing reusable code blocks.
Pipelines and Redirection: Using | to direct the output of one command as input to another, > and >> to write output to files.
Networking Basics:
Explanation of how devices communicate over a network.
Introduction to IP addresses, DNS, routers, and modems.
What is a Wi-Fi Hotspot?
Definition and how it works.
Use cases for creating a Wi-Fi hotspot.
For Windows:
Step 1: Open 'Settings' > 'Network & Internet' > 'Mobile hotspot'.
Step 2: Turn on 'Share my Internet connection with other devices'.
Step 3: Configure your network by selecting 'Edit' to set a new network name (SSID) and password.
Step 4: Connect your device to the new Wi-Fi network using the credentials you have set.
For macOS:
Step 1: Click the Apple menu > 'System Preferences' > 'Sharing'.
Step 2: Select 'Internet Sharing' from the list on the left.
Step 3: Choose the way you want to share your connection (e.g., share your Ethernet connection via Wi-Fi).
Step 4: Click 'Wi-Fi Options...' to configure your network name (SSID) and password.
Step 5: Check the 'Internet Sharing' box to turn on your hotspot.
Step 6: Connect your device to the new Wi-Fi network.
For Linux (Ubuntu):
Step 1: Click the system menu > 'Wi-Fi Not Connected' > 'Wi-Fi Settings'.
Step 2: Click the menu button in the Wi-Fi settings window, then select 'Turn On Wi-Fi Hotspot...'.
Step 3: Adjust your network settings including SSID and password as prompted.
Step 4: Activate the hotspot and connect your device using the newly created network credentials.
Machine learning is a method of teaching computers to learn from data, identify patterns, and make decisions with minimal human intervention. One of the most exciting tools in machine learning is the neural network.
"},{"location":"umn-curriculum/background/introduction-to-nn/#what-is-a-neural-network","title":"What is a Neural Network?","text":"Imagine your brain. It's made up of billions of cells called neurons, connected by synapses. Each neuron receives signals, processes them, and passes signals to other neurons. A neural network in machine learning is inspired by this concept. It's a collection of artificial neurons or \"nodes\" that work together to understand data, learn patterns, and make decisions.
"},{"location":"umn-curriculum/background/introduction-to-nn/#the-basics-of-neural-networks","title":"The Basics of Neural Networks:","text":"Artificial Neurons: Just like neurons in the brain, artificial neurons receive input, process it, and generate an output. Each neuron's output is determined by a function of its input.
Layers: Neural networks have layers of neurons:
Input Layer: Receives the data.
Hidden Layers: The layers in between that process the data. There can be one or many hidden layers.
Output Layer: Produces the final result or prediction.
Learning: Neural networks learn by adjusting the importance (weights) of connections between neurons. This process is called training. During training, the network makes predictions, compares them to the correct answers, and adjusts the weights to improve future predictions.
Neural networks can learn and adapt. They're capable of handling complex problems like recognizing speech, identifying images, and predicting trends. Their ability to learn from data makes them incredibly powerful in today\u2019s data-driven world.
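To make the idea concrete, here is a minimal Python sketch of a single artificial neuron; the input values and weights are made up, and real networks learn their weights automatically during training.
import math

def neuron(inputs, weights, bias):
    """Weighted sum of the inputs plus a bias, squashed into (0, 1) by a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Three input signals, a weight (importance) for each, and a bias term.
output = neuron([0.5, 0.8, 0.2], [0.4, -0.6, 0.9], bias=0.1)
print(f"Neuron output: {output:.3f}")   # prints a value between 0 and 1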
"},{"location":"umn-curriculum/background/introduction-to-os/","title":"Introduction to Operating Systems","text":"Welcome to the module on Operating Systems (OS) - an essential cornerstone for understanding the broader fields of Computer Science, Artificial Intelligence (AI), and Machine Learning (ML). This module is designed to introduce you to the basics of operating systems in a straightforward and accessible manner. It aims to provide a foundation for learners from various backgrounds, including high school students and professionals transitioning into tech. No prior expertise in computer science is required to grasp these concepts.
"},{"location":"umn-curriculum/background/introduction-to-os/#what-is-an-operating-system","title":"What is an Operating System?","text":"An Operating System is the software that manages all the computer hardware and software resources. It acts as an intermediary between the computer's hardware and the applications running on it. Think of the OS as a manager in an office environment, ensuring that resources such as printers, files, and applications are allocated efficiently, so everything runs smoothly.
"},{"location":"umn-curriculum/background/introduction-to-os/#key-functions-of-an-operating-system","title":"Key Functions of an Operating System","text":"Resource Management: Manages and allocates computer resources like CPU, memory, disk space, and peripheral devices.
File Management: Controls the reading and writing of data to disk and manages how files are stored, organized, and retrieved.
Process Management: Handles the creation, execution, and termination of processes. A process is an executing program.
Security and Access Control: Protects against unauthorized access to the system, ensuring data integrity and user privacy.
User Interface: Provides interfaces, such as graphical user interfaces (GUIs) or command-line interfaces (CLI), for users to interact with the computer.
Microsoft Windows: Known for its user-friendly GUI, it\u2019s widely used in homes, schools, and offices.
macOS: Developed by Apple Inc., it\u2019s known for its sleek design and integration with other Apple products.
Linux: An open-source OS that's highly customizable, making it popular among developers and in server environments.
Android: Based on the Linux kernel, it\u2019s designed primarily for touchscreen mobile devices like smartphones and tablets.
This module has introduced you to the basic concepts of operating systems, their key functions, types, examples, and their importance in the fields of AI and ML. Understanding operating systems is fundamental not only for aspiring computer scientists but also for anyone looking to delve into technology-driven fields. As we progress, we'll explore more technical aspects and real-world applications of operating systems in greater detail.
Stay tuned for the next modules where we'll dive deeper into the inner workings of operating systems and how they underpin the technological world around us.
"},{"location":"umn-curriculum/background/introduction-to-python/","title":"Introduction to Python","text":""},{"location":"umn-curriculum/background/introduction-to-python/#what-is-python","title":"What is Python?","text":"Python is a high-level, interpreted programming language known for its readability and versatility. It allows you to write clear programs on both a small and large scale. Because of its simplicity and elegance, Python is a great language for beginners, yet powerful enough for writing advanced applications.
"},{"location":"umn-curriculum/background/introduction-to-python/#why-learn-python","title":"Why Learn Python?","text":"Versatility: From web development to data analysis, machine learning, and even cybersecurity, Python can be used in various domains.
Community and Resources: Python has a large, supportive community. This means plenty of tutorials, documentation, and forums are available to help beginners.
Career Opportunities: Knowledge of Python opens up numerous career paths in tech, especially in growing fields like data science and artificial intelligence.
Installation: Installing Python is straightforward. You can download the latest version from the official Python website. Make sure to check the box that says \"Add Python to PATH\" during installation to use Python from the command line.
Your First Python Program: Once installed, you can write your first simple program. Open your text editor, type print(\"Hello, World!\")
, and save the file with a .py extension. Run it from your command line by typing python filename.py
.
Software versions indicate the state of the software at a particular point in time. They help in tracking changes, improvements, or fixes made over time.
Major Versions of Python
Python 2: Introduced in 2000, it was widely used for many years. However, Python 2 reached the end of its life on January 1, 2020, meaning it no longer receives updates or support.
Python 3: Released in 2008, Python 3 is the current version, actively developed and supported. It introduced many changes and improvements over Python 2, making programs more efficient and easier to read.
Using the latest version ensures access to new features, security patches, and performance improvements. However, when working on existing projects, you must use the version compatible with that project to avoid compatibility issues.
"},{"location":"umn-curriculum/background/introduction-to-python/#version-management","title":"Version Management","text":"Managing Multiple Versions: It's common for developers to work on projects that require different Python versions. Tools like pyenv for Unix-based systems and pyenv-win for Windows can help manage multiple versions on a single machine.
Virtual Environments: Virtual environments are a critical part of Python development, allowing you to maintain separate environments for different projects. This means you can have different versions of Python and various packages installed for each project without conflicts. You can create a virtual environment using python -m venv envname.
"},{"location":"umn-curriculum/background/introduction-to-python/#understanding-programming-through-python","title":"Understanding Programming Through Python","text":""},{"location":"umn-curriculum/background/introduction-to-python/#why-python","title":"Why Python?","text":"Python is a versatile and widely used programming language, favored for its easy-to-read syntax. It's used in web development, data analysis, artificial intelligence, scientific computing, and more. Python's syntax closely resembles English, which helps beginners understand the concepts of programming without getting bogged down by complex syntax rules.
"},{"location":"umn-curriculum/background/introduction-to-python/#first-steps-in-python-programming","title":"First Steps in Python Programming","text":"When you start programming in Python, you're learning to communicate with your computer through code. Let's explore some foundational concepts that will help you begin this journey.
Python Syntax: The rules that define how a Python program is written. Python was designed to be easy to understand and fun to use. The simplicity of Python's syntax allows beginners to focus on learning programming concepts rather than the intricacies of the language.
Print Statement: One of the basic functions in Python is print()
, which outputs data to the screen. It's a helpful way to see what your program is doing. For example, print(\"Hello, world!\")
displays the message \"Hello, world!\".
Variables and Data Types: Variables are used to store information that can be referenced and manipulated in a program. Each variable in Python has a data type, which dictates the kind of data it can hold. Common data types include:
int
(integer): Represents whole numbers, e.g., 5
float
: Represents decimal numbers, e.g., 5.0
str
(string): Represents text, e.g., \"Hello, world!\"
bool
(boolean): Represents True or False
Comments: Comments are parts of the code ignored by the Python interpreter. They're used to explain what the code does, making it easier to understand. In Python, you can create a comment by starting the line with the hash (#) symbol.
Basic Operations: Python supports basic arithmetic operations like addition (+), subtraction (-), multiplication (*), division (/), and more. These operations can be performed on numbers and, in some cases, on other types of data.
Control Flow: Control flow is the order in which individual statements, instructions, or function calls are executed or evaluated. The primary types of control flow are conditional statements (if, elif, else) and loops (for, while).
Conditional Statements allow you to execute different blocks of code based on certain conditions.
Loops allow you to execute a block of code multiple times, typically with some variation each time.
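The short program below pulls these foundational ideas together in one place; it is only an illustration, so change the values and re-run it to see how the output changes.
# Variables and data types
laps = 3                # int: a whole number
best_time = 12.5        # float: a decimal number
driver = "Jordan"       # str: text
finished = True         # bool: True or False

# Basic operations
total_time = laps * best_time
print(driver, "drove", laps, "laps in about", total_time, "seconds")

# Conditional statement: run different code depending on a condition
if finished:
    print("The run is complete!")
else:
    print("Still driving...")

# For loop: repeat a block once per lap
for lap in range(1, laps + 1):
    print("Starting lap", lap)

# While loop: repeat until a condition changes
countdown = 3
while countdown > 0:
    print("Launching in", countdown)
    countdown = countdown - 1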