Course information may be found here.
You can find more details about the course on my homepage.
Feel free to contact me if you have any questions or want to discuss any topic from the course 😊
All authorship is credited where possible.
The aim of the exercise is to get an overview of the basic capabilities of the Keras library and to build a simple neural network for MNIST dataset classification.
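For a first taste of what this exercise builds, here is a minimal sketch of a dense Keras classifier for MNIST; the layer sizes and number of epochs are illustrative, not the exercise's exact setup:

```python
import tensorflow as tf
from tensorflow import keras

# Load and normalize MNIST (bundled with Keras)
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully connected classifier
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),  # one unit per digit class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_split=0.1)
model.evaluate(x_test, y_test)
```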
The goal of the exercise is to learn how to solve regression problems using deep learning.
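A toy sketch of a deep-learning regression model; the synthetic data and layer sizes are placeholders standing in for the exercise's real dataset:

```python
import numpy as np
from tensorflow import keras

# Synthetic regression data (hypothetical stand-in for the exercise dataset)
X = np.random.rand(1000, 8)
y = X @ np.random.rand(8) + 0.1 * np.random.randn(1000)

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(8,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1),  # single linear output unit for regression
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=10, validation_split=0.2, verbose=0)
```

Note the linear (activation-free) output layer and the MSE loss, which is what distinguishes a regression head from a classification head.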
The aim of the exercise is to learn how to use a basic architecture built on convolutional layers and how to classify image data.
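A minimal sketch of such a convolutional classifier, assuming 28x28 grayscale input and 10 classes (both illustrative):

```python
from tensorflow import keras

# Two convolution/pooling stages followed by a small dense classifier
model = keras.Sequential([
    keras.layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Conv2D(64, (3, 3), activation="relu"),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```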
The aim of the exercise is to learn how to use transfer learning for image data; in the second part of the exercise we will look at time series classification using CNNs.
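A sketch of the transfer-learning pattern: reuse a pretrained backbone and train only a new head. The choice of MobileNetV2, the input size, and the five target classes are illustrative assumptions:

```python
from tensorflow import keras

# Pretrained ImageNet backbone without its original classification head
base = keras.applications.MobileNetV2(input_shape=(160, 160, 3),
                                      include_top=False,
                                      weights="imagenet")
base.trainable = False  # freeze the pretrained weights

model = keras.Sequential([
    base,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(5, activation="softmax"),  # 5 target classes (illustrative)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```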
The goal of the exercise is to learn how to use autoencoder and variational autoencoder architectures on image data to generate new image data instances and detect anomalies.
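A minimal sketch of a plain (non-variational) autoencoder on flattened 28x28 images; the latent size and layer widths are illustrative:

```python
from tensorflow import keras

latent_dim = 32  # size of the compressed representation (illustrative)

# Encoder compresses a flattened 28x28 image, decoder reconstructs it
encoder = keras.Sequential([
    keras.Input(shape=(784,)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(latent_dim, activation="relu"),
])
decoder = keras.Sequential([
    keras.Input(shape=(latent_dim,)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(784, activation="sigmoid"),
])
autoencoder = keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")
# A high reconstruction error on a sample can then flag it as an anomaly
```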
The goal of this exercise is to learn how to use recurrent neural networks for sentiment analysis of text data. In the exercise, we will work with data from Twitter.
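A sketch of the kind of recurrent sentiment classifier used here, assuming tweets have already been tokenized and padded; the vocabulary size and sequence length are illustrative:

```python
from tensorflow import keras

vocab_size, max_len = 10000, 50  # illustrative vocabulary and tweet length

model = keras.Sequential([
    keras.layers.Embedding(vocab_size, 64, input_length=max_len),
    keras.layers.LSTM(64),                         # reads the tweet token by token
    keras.layers.Dense(1, activation="sigmoid"),   # positive vs. negative sentiment
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```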
The goal of this exercise is to learn how to create your own Word2Vec embedding and generate your own text using RNN.
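A toy sketch of training a Word2Vec embedding; using the gensim library here is an assumption (it is not in the install list below), and the corpus is a stand-in for real tokenized text:

```python
from gensim.models import Word2Vec  # assumption: install with `pip install gensim`

# Tiny toy corpus; the exercise would use a real tokenized text corpus
sentences = [["deep", "learning", "is", "fun"],
             ["neural", "networks", "learn", "representations"],
             ["deep", "neural", "networks"]]

model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, workers=4)
vector = model.wv["deep"]                # 100-dimensional embedding of one word
similar = model.wv.most_similar("deep")  # nearest neighbours in embedding space
```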
The aim of the exercise is to learn how to work with the Attention mechanism within the RNN.
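One way this can look with the built-in Keras layers, a sketch only: dot-product self-attention applied over per-timestep LSTM states before pooling (the sizes are illustrative):

```python
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(None,), dtype="int32")
x = layers.Embedding(10000, 64)(inputs)
h = layers.LSTM(64, return_sequences=True)(x)  # per-timestep hidden states
# Dot-product attention with the hidden states as both query and value
context = layers.Attention()([h, h])
pooled = layers.GlobalAveragePooling1D()(context)
outputs = layers.Dense(1, activation="sigmoid")(pooled)
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```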
The exercise focuses on the use of transformer models for text data classification using the Hugging Face API.
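The quickest entry point to the Hugging Face API is the `pipeline` helper from the `transformers` package; the example sentence is arbitrary and the model is the library's default for this task:

```python
from transformers import pipeline

# Ready-made text-classification pipeline (downloads a default model on first use)
classifier = pipeline("sentiment-analysis")
print(classifier("This course exercise was surprisingly enjoyable!"))
# e.g. [{'label': 'POSITIVE', 'score': ...}]
```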
This lecture focuses on using CNNs for object localization tasks; we will also use the YOLOv8 model in a real-world scenario during the second part of the lecture.
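A sketch of running a pretrained YOLOv8 model with the `ultralytics` package (an assumption, installed separately via `pip install ultralytics`); the image path is hypothetical:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")           # small pretrained model, downloaded on first use
results = model("street_scene.jpg")  # hypothetical image path
for r in results:
    print(r.boxes.xyxy, r.boxes.cls)  # bounding boxes and predicted class indices
```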
The aim of the lecture is to get an overview of the possibilities in the generative artificial intelligence (GenAI) domain.
- You can use [Kaggle](https://www.kaggle.com/) as an alternative to Google Colab
- 📌 Beware that the two platforms use different configurations and library versions, so full compatibility cannot always be guaranteed
- For importing a Jupyter notebook, perform these steps:
  - Click on the `+` sign (or `Create`) button in the left panel and select `New Notebook`
  - In the new notebook select `File > Import notebook > Link` and paste the URL of the Jupyter notebook from GitHub
  - In the `Notebook` sidebar (right side, it can be expanded through the small arrow icon in the bottom right corner) use these Session options:
    - Accelerator: `GPU T4x2` or `GPU P100`
    - Persistence: `Variables and Files`
- Your own datasets can be uploaded using the `Notebook` sidebar as well - the `Input` section
  - Click on `Upload > New dataset > File` and Drag&Drop your file(s)
  - Set the Dataset title and click on `Create`
  - 💡 Zip archives are automatically extracted
  - You can copy the path of the file using the copy icon when you hover over the filename
    - The usual path is in the format `/kaggle/input/<dataset_name>/<filename>`
- 💡 There is a problem with using the hdf5 format in the `filepath` parameter of `ModelCheckpoint`
  - Use the filename `best.weights.h5` instead (hdf5 and h5 are the same format) - see the sketch after this list
  - 💡 Remember to change the path in the `load_weights()` function as well!
- You can download your `.ipynb` notebooks using the `File > Download notebook` option
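A minimal sketch of the checkpoint setup described in the tip above; the `monitor` and `save_best_only` settings are illustrative defaults, not prescribed by the exercise:

```python
from tensorflow import keras

# With `save_weights_only=True`, newer Keras versions require the
# filepath to end in `.weights.h5` (a plain `.hdf5` path raises an error)
checkpoint = keras.callbacks.ModelCheckpoint(
    filepath="best.weights.h5",
    monitor="val_loss",      # illustrative: track validation loss
    save_best_only=True,
    save_weights_only=True,
)

# model.fit(x_train, y_train, validation_split=0.2, callbacks=[checkpoint])

# Use the identical filename when restoring the best weights:
# model.load_weights("best.weights.h5")
```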
- Create a virtual environment: `python -m venv venv`
- Activate `venv` in Windows: `.\venv\Scripts\Activate.ps1`
- Activate `venv` in Linux: `source venv/bin/activate`
- Works for TensorFlow 2.15.0:
  - `pip install jupyter "jupyterlab>=3" "ipywidgets>=7.6"`
  - `pip install pandas matplotlib requests seaborn scipy scikit-learn optuna scikit-image pyarrow opencv-python plotly==5.18.0 tensorflow[and-cuda] nltk textblob transformers datasets huggingface_hub evaluate`
- Check that TensorFlow detects your GPU: `python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"`
  - It should print a list of all your GPUs
  - 💡 The setup is not working if an empty list `[]` is printed