Example notebooks for running a training job on the ORU Titan server, using its GPUs, and exporting the trained model to an S3 bucket.
- Log in at https://ood.orca.oru.edu/pun/sys/dashboard with your account [email protected] and password (for example [email protected])
- Go to Interactive Apps > Jupyter Notebook
- Launch a session with the Partition set to gpu and the Number of hours you need
- `git clone` this repo
- Create a virtual env and register a Jupyter kernel:
  - `python -m venv venv`
  - `source venv/bin/activate`
  - `pip install -r requirements.txt`
  - `python -m ipykernel install --user --name=myenvkernel --display-name="venv"`
- Open the TTS_train notebook with the venv kernel. The end of the notebook saves a checkpoint of the model.
- The Export to S3 notebook shows how to export this checkpoint to an S3 bucket in the IDX-AI AWS account. You will need a .env file like the following (a minimal sketch of the upload itself is shown after it):
```
AWS_ACCESS_KEY_ID=***
AWS_SECRET_ACCESS_KEY=***
```
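
A minimal sketch of the kind of upload the Export to S3 notebook performs, using boto3 and python-dotenv. The checkpoint path, bucket name, and object key below are placeholders (they are not specified in this README), so substitute your own:

```python
import os

import boto3
from dotenv import load_dotenv

load_dotenv()  # reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from .env

s3 = boto3.client(
    "s3",
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
)

# Upload the checkpoint saved by the TTS_train notebook.
s3.upload_file(
    "tts_train_dir/run/checkpoint.pth",  # local checkpoint path (placeholder)
    "idx-ai-models",                     # bucket name (placeholder)
    "tts/checkpoint.pth",                # object key (placeholder)
)
```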
The TTS_train example is a modified copy of https://tts.readthedocs.io/en/latest/tutorial_for_nervous_beginners.html
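
For orientation, here is a condensed sketch of the steps that tutorial walks through: training GlowTTS on LJSpeech with Coqui TTS. The hyperparameters and dataset path are the tutorial's defaults and may differ from the modified TTS_train notebook:

```python
import os

from trainer import Trainer, TrainerArgs
from TTS.tts.configs.glow_tts_config import GlowTTSConfig
from TTS.tts.configs.shared_configs import BaseDatasetConfig
from TTS.tts.datasets import load_tts_samples
from TTS.tts.models.glow_tts import GlowTTS
from TTS.tts.utils.text.tokenizer import TTSTokenizer
from TTS.utils.audio import AudioProcessor

output_path = "tts_train_dir"  # checkpoints land here (see the .gitignore note below)

dataset_config = BaseDatasetConfig(
    formatter="ljspeech",
    meta_file_train="metadata.csv",
    path=os.path.join(output_path, "LJSpeech-1.1/"),
)
config = GlowTTSConfig(
    batch_size=32,
    eval_batch_size=16,
    run_eval=True,
    epochs=1000,
    text_cleaner="phoneme_cleaners",
    use_phonemes=True,
    phoneme_language="en-us",
    phoneme_cache_path=os.path.join(output_path, "phoneme_cache"),
    print_step=25,
    mixed_precision=True,
    output_path=output_path,
    datasets=[dataset_config],
)

# Audio processor and tokenizer are built from the config.
ap = AudioProcessor.init_from_config(config)
tokenizer, config = TTSTokenizer.init_from_config(config)

# Load the train/eval splits defined by the dataset config.
train_samples, eval_samples = load_tts_samples(dataset_config, eval_split=True)

model = GlowTTS(config, ap, tokenizer, speaker_manager=None)

# Trainer.fit() runs the training loop and writes checkpoints to output_path.
trainer = Trainer(
    TrainerArgs(),
    config,
    output_path,
    model=model,
    train_samples=train_samples,
    eval_samples=eval_samples,
)
trainer.fit()
```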
Note that the data and model locations (.ljspeech and tts_train_dir) are excluded via the .gitignore file.