1 change: 1 addition & 0 deletions docs/conf.py
@@ -65,6 +65,7 @@
'sphinx.ext.githubpages',
'linuxdoc.rstFlatTable',
"notfound.extension",
'sphinx_copybutton',  # needed for copyable code blocks
#'recommonmark',
#'sphinx_markdown_tables',
#'edit_on_github',
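Since the diff only shows the one added line, here is a sketch of how the resulting ``conf.py`` additions might look, including the optional prompt-stripping settings from the sphinx-copybutton documentation (the prompt regex value is an illustrative assumption, not from this PR):

```python
# docs/conf.py (sketch): enable sphinx-copybutton alongside the
# extensions already listed in the diff above
extensions = [
    'sphinx.ext.githubpages',
    'linuxdoc.rstFlatTable',
    'notfound.extension',
    'sphinx_copybutton',  # adds a "copy" button to code blocks
]

# Optional: strip leading prompts so pasted commands run as-is.
# These option names come from the sphinx-copybutton docs; the
# pattern below (Python ">>> " and shell "$ ") is an example choice.
copybutton_prompt_text = r">>> |\$ "
copybutton_prompt_is_regexp = True
```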
10 changes: 3 additions & 7 deletions docs/examples.rst
@@ -1,17 +1,13 @@
##########################
Examples, Demos, Tutorials
Examples
##########################

This page introduces various demos, examples, and tutorials currently available with the Ryzen™ AI Software.

*************************
Getting Started Tutorials
*************************
This page introduces examples currently available with the Ryzen™ AI Software.

NPU
~~~

- :doc:`Getting Started Tutorial for INT8 models <getstartex>` - Uses a custom ResNet model to demonstrate:
Collaborator comment: Why remove "Getting Started" from the description?

- :doc:`ResNet50 INT8 example <getstartex>` - Uses a custom ResNet model to demonstrate:

- Pretrained model conversion to ONNX
- Model Quantization using AMD Quark quantizer
18 changes: 13 additions & 5 deletions docs/getstartex.rst
@@ -1,7 +1,7 @@
:orphan:

########################
Getting Started Tutorial
ResNet50 Tutorial
########################

This tutorial uses a fine-tuned version of the ResNet model (using the CIFAR-10 dataset) to demonstrate the process of preparing, quantizing, and deploying a model using Ryzen AI Software. The tutorial features deployment using both Python and C++ ONNX runtime code.
@@ -79,9 +79,10 @@ Step 2: Prepare dataset and ONNX model

This example utilizes a custom ResNet model fine-tuned on the CIFAR-10 dataset.

The ``prepare_model_data.py`` script downloads the CIFAR-10 dataset in pickle format (for python) and binary format (for C++). This dataset is used in the subsequent steps for quantization and inference. The script also exports the provided PyTorch model into ONNX format. The following snippet from the script shows how the ONNX model is exported:
The ``prepare_model_data.py`` script downloads the CIFAR-10 dataset in pickle format (for Python) and binary format (for C++).
*The CIFAR-10 dataset consists of 60,000 32x32 colour images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images.* You can learn more about the CIFAR-10 dataset here: https://www.cs.toronto.edu/~kriz/cifar.html. This dataset is used in the subsequent steps for quantization and inference. The script also exports the provided PyTorch model into ONNX format. The following snippet from the script shows how the ONNX model is exported:

Comment on lines +83 to 84 (Copilot AI, Oct 15, 2025):

[nitpick] The CIFAR-10 dataset description is formatted with asterisks instead of proper reStructuredText formatting. Consider using proper emphasis markup or a note directive for better presentation.

Suggested change:

.. note::
   The CIFAR-10 dataset consists of 60,000 32x32 colour images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images.
   You can learn more about the CIFAR-10 dataset here: https://www.cs.toronto.edu/~kriz/cifar.html.

This dataset is used in the subsequent steps for quantization and inference. The script also exports the provided PyTorch model into ONNX format. The following snippet from the script shows how the ONNX model is exported:
.. code-block::
.. code-block:: python

dummy_inputs = torch.randn(1, 3, 32, 32)
input_names = ['input']
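The script's other task, reading the pickle-format CIFAR-10 batches, can be sketched in a few lines of Python. The helper name below is hypothetical (not taken from ``prepare_model_data.py``); the bytes keys and the (N, 3072) row layout follow the CIFAR-10 page linked above:

```python
import pickle

import numpy as np

def load_cifar_batch(path):
    """Unpack one CIFAR-10 batch file (pickle format).

    Returns (images, labels): images as uint8 with shape (N, 3, 32, 32)
    (channels-first, matching the model's 1x3x32x32 input), labels as
    an integer array.
    """
    with open(path, 'rb') as f:
        # CIFAR-10 batches were pickled under Python 2, so the dict
        # keys are bytes objects
        batch = pickle.load(f, encoding='bytes')
    data = np.asarray(batch[b'data'], dtype=np.uint8)  # (N, 3072): R, G, B planes
    images = data.reshape(-1, 3, 32, 32)
    labels = np.asarray(batch[b'labels'])
    return images, labels
```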
@@ -277,12 +278,16 @@ Prerequisites
Install OpenCV
--------------

It is recommended to build OpenCV from the source code and use static build. The default installation location is "\install" , the following instruction installs OpenCV in the location "C:\\opencv" as an example. You may first change the directory to where you want to clone the OpenCV repository.
If using C++, build OpenCV from source and use a static build. The following instructions install OpenCV in the location "C:\\opencv" as an example, but you can change the location as needed.

Navigate to a suitable directory where you keep GitHub repositories, then clone the OpenCV repository:

.. code-block:: bash

git clone https://github.com/opencv/opencv.git -b 4.6.0
cd opencv

Then, you can build OpenCV with the following:

.. code-block:: bash

cmake -DCMAKE_EXPORT_COMPILE_COMMANDS=ON -DBUILD_SHARED_LIBS=OFF -DCMAKE_POSITION_INDEPENDENT_CODE=ON -DCMAKE_CONFIGURATION_TYPES=Release -A x64 -T host=x64 -G "Visual Studio 17 2022" "-DCMAKE_INSTALL_PREFIX=C:\opencv" "-DCMAKE_PREFIX_PATH=C:\opencv" -DCMAKE_BUILD_TYPE=Release -DBUILD_opencv_python2=OFF -DBUILD_opencv_python3=OFF -DBUILD_WITH_STATIC_CRT=OFF -B build
cmake --build build --config Release
cmake --install build --config Release
@@ -298,7 +303,10 @@ Run the following command to build the resnet example. Assign ``-DOpenCV_DIR`` t

Copilot AI, Oct 15, 2025:

Path uses backslashes which are Windows-specific. Consider using forward slashes for cross-platform compatibility or noting this is Windows-specific.

Suggested change:

.. note::
   The following command uses Windows-style backslashes and is intended for use in a Windows environment.
.. code-block:: bash

cd getting_started_resnet/cpp
cd RyzenAI-SW\tutorial\getting_started_resnet\int8\cpp

.. code-block:: bash

cmake -DCMAKE_EXPORT_COMPILE_COMMANDS=ON -DBUILD_SHARED_LIBS=OFF -DCMAKE_POSITION_INDEPENDENT_CODE=ON -DCMAKE_CONFIGURATION_TYPES=Release -A x64 -T host=x64 -DCMAKE_INSTALL_PREFIX=. -DCMAKE_PREFIX_PATH=. -B build -S resnet_cifar -DOpenCV_DIR="C:/opencv/build" -G "Visual Studio 17 2022"

This should generate the build directory with the ``resnet_cifar.sln`` solution file along with other project files. Open the solution file using Visual Studio 2022 and build to compile. You can also use "Developer Command Prompt for VS 2022" to open the solution file in Visual Studio.
178 changes: 173 additions & 5 deletions docs/index.rst
@@ -7,19 +7,187 @@ AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing
.. image:: images/rai-sw.png
:align: center

***********
Quick Start
***********
.. _hardware-support:
****************
Hardware Support
****************
Ryzen AI 1.6 Software runs on AMD processors outlined below. For a more detailed list of supported devices, refer to the `processor specifications <https://www.amd.com/en/products/specifications/processors.html>`_ page (scroll to the "AMD Ryzen™ AI" column toward the right side of the table, and select "Available" from the pull-down menu). Support for Linux is coming soon in Ryzen AI 1.6.1.

Collaborator comment (on the table below): This table will only grow and will be hard to maintain as we add support for more platforms. It's redundant with the https://www.amd.com/en/products/specifications/processors.html page. And we risk creating inconsistencies. Case in point, see the comment about Z2 below. I recommend we simply link to the official processor specification page.

.. list-table:: Supported Ryzen AI Processor Configurations
:header-rows: 1
:widths: 25 25 12 22 12 10 10 10
* - Series
- Codename
- Abbreviation
- Graphics Model
- Ryzen™ AI Support
- Launch Year
- Windows
- Linux
* - Ryzen AI Max PRO 300 Series
- Strix Halo
- STX
- Radeon 8000S Series
- ✅
- 2025
- ☑️
-
* - Ryzen AI PRO 300 Series
- Strix Point / Krackan Point
- STX/KRK
- Radeon 800M Series
- ✅
- 2025
- ☑️
-
* - Ryzen AI Max 300 Series
- Strix Halo
- STX
- Radeon 8000S Series
- ✅
- 2025
- ☑️
-
Collaborator comment (on the Ryzen Z2 row): I don't believe this device is supported.

* - Ryzen Z2
- Z2
- Z2
- Radeon
- ✅
- 2025
-
-
* - Ryzen AI 300 Series
- Strix Point
- STX
- Radeon 800M Series
- ✅
- 2025
- ☑️
-
* - Ryzen Pro 200 Series
- Hawk Point
- HPT
- Radeon 700M Series
- ✅
- 2025
- ☑️
-
* - Ryzen 200 Series
- Hawk Point
- HPT
- Radeon 700M Series
- ✅
- 2025
- ☑️
-
* - Ryzen PRO 8000 Series
- Hawk Point
- HPT
- Radeon 700M Series
- ✅
- 2024
- ☑️
-
* - Ryzen 8000 Series
- Hawk Point
- HPT
- Radeon 700M Series
- ✅
- 2024
- ☑️
-
* - Ryzen Pro 7000 Series
- Phoenix
- PHX
- Radeon 700M Series
- ✅
- 2023
- ☑️
-
* - Ryzen 7000 Series
- Phoenix
- PHX
- Radeon 700M Series
- ✅
- 2023
- ☑️
-

************
LLM Support
************
Ryzen AI 1.6 supports running LLMs on the hardware configurations in the table below.

.. list-table:: LLM Support on Ryzen AI Processors
:header-rows: 1
:widths: 25 25 25 25 25 25

* - Processor Series
- Codename
- CPU
- GPU
- NPU
- Hybrid (NPU + iGPU)
Collaborator comment (on the Ryzen AI 300 row): We support LLMs on all STX and KRK platforms, not all Ryzen AI 300. The first column in this table is not needed and should be removed.

* - Ryzen AI 300
- STX/KRK
- ✓
- ✓
- ✓
- ✓
* - Ryzen AI 7000/8000/200
- PHX/HPT
- ✓
- ✓
- ✗
- ✗

For more details on running LLMs, refer to the :doc:`llm/overview` page.

*******************
Other Model Support
*******************

The following table lists which types of models are supported on the different hardware platforms.

.. list-table::
:header-rows: 1

Collaborator comment (on the table below): Why mention CPU and GPU for LLMs and not for other models? BF16 models can run on CPU and GPU on PHX/HPT. The way LLMs and CNN/NLPs are presented is inconsistent. It would be preferable to find a common way of presenting the information.

* - Model Type
- STX/KRK
- PHX/HPT
* - CNN INT8
- ✓
- ✓
* - CNN BF16
- ✓
-
* - NLP BF16
- ✓
-

***********************
Installation & Examples
***********************
To get started with installing and using Ryzen AI Software, visit the following:

- :ref:`Supported Configurations <supported-configurations>`
- :doc:`inst`
- :doc:`examples`

*************************
Development Flow Overview
*************************

The Ryzen AI development flow does not require any modifications to the existing model training processes and methods. The pre-trained model can be used as the starting point of the Ryzen AI flow.
A typical Ryzen AI flow might look like the following:
Collaborator comment: This is accurate for CNNs, but not for BF16 NLPs (no quantization step) or LLMs (OGA flow).

1. Begin with a pretrained PyTorch (*.pt) model.
2. Convert the model to ONNX (*.onnx) format. You can follow the PyTorch documentation here: `Export a PyTorch model to ONNX <https://docs.pytorch.org/tutorials/beginner/onnx/export_simple_model_to_onnx_tutorial.html>`_.
3. Optionally, quantize the model with `AMD Quark <https://quark.docs.amd.com/latest/>`_ for a reduced model size.
4. Deploy the model for inference in your application.
5. Run the :doc:`ai_analyzer` to assess model performance.

.. note::
You may find that you can skip steps 1-3 and deploy a model right away if you already have an ONNX model that fits on your device.
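To make the quantization step concrete, INT8 quantization maps float tensors to 8-bit integers through a scale and zero-point, shrinking storage 4x versus float32. The following is a generic affine-quantization sketch for illustration only; it is not AMD Quark's actual algorithm, which calibrates scales over a dataset and applies operator-aware schemes:

```python
import numpy as np

def quantize_int8(x):
    """Affine (asymmetric) INT8 quantization of a float array.

    Illustrative only: real quantizers such as AMD Quark choose
    scales/zero-points via calibration, often per channel.
    """
    qmin, qmax = -128, 127
    scale = (x.max() - x.min()) / (qmax - qmin)          # float step size
    zero_point = int(round(qmin - x.min() / scale))       # int offset for x.min
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    """Map INT8 values back to approximate floats."""
    return (q.astype(np.float32) - zero_point) * scale

# Round trip: INT8 storage is 4x smaller than float32, and the
# reconstruction error stays within about one quantization step.
x = np.linspace(-1.0, 1.0, 11, dtype=np.float32)
q, scale, zp = quantize_int8(x)
x_hat = dequantize_int8(q, scale, zp)
```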

Quantization
============
52 changes: 30 additions & 22 deletions docs/inst.rst
@@ -1,16 +1,16 @@
.. include:: /icons.txt

#########################
Installation Instructions
#########################
#################################
Windows Installation Instructions
#################################



*************
Prerequisites
*************

The Ryzen AI Software supports AMD processors with a Neural Processing Unit (NPU). Refer to the release notes for the full list of :ref:`supported configurations <supported-configurations>`.
The Ryzen AI Software supports AMD processors with a Neural Processing Unit (NPU). For a list of supported hardware configurations, refer to :ref:`hardware-support`.

The following dependencies must be installed on the system before installing the Ryzen AI Software:

@@ -21,21 +21,32 @@ The following dependencies must be installed on the system before installing the
* - Dependencies
- Version Requirement
* - Windows 11
- build >= 22621.3527
* - Visual Studio
- 2022
* - cmake
- version >= 3.26
* - Python distribution (Miniforge preferred)
- Latest version
- >= 22621.3527
* - `Visual Studio Community <https://apps.microsoft.com/detail/xpdcfjdklzjlp8?hl=en-US&gl=US>`_
- 2022 with `Desktop Development with C++` checked
* - `cmake <https://cmake.org/download/>`_
- >= 3.26
Collaborator comment (on the Python row below): Should we really say "Miniforge preferred"? Internally to AMD, we need to use Miniforge. But other companies may have different requirements.

* - `Python (Miniforge preferred) <https://conda-forge.org/download/>`_
- >= 3.10
* - :ref:`install-driver`
- >= 32.0.203.280

|

|warning| **IMPORTANT**:

- Visual Studio 2022 Community: ensure that `Desktop Development with C++` is installed
- Miniforge: Ensure that the proper Miniforge paths are set in the System PATH variable. Open Windows PowerShell as administrator (right-click and select "Run as administrator") to set the system PATH environment variables. You can then use the following commands to add the appropriate paths, substituting your actual paths:
- Miniforge: ensure that the following path is set in the System PATH variable: ``path\to\miniforge3\condabin`` or ``path\to\miniforge3\Scripts\`` or ``path\to\miniforge3\`` (The System PATH variable should be set in the *System Variables* section of the *Environment Variables* window).

.. code-block:: powershell

$existingPath = [System.Environment]::GetEnvironmentVariable('Path', 'Machine')

.. code-block:: powershell

$newPaths = "C:\Users\<user>\miniforge3\Scripts;C:\Users\<user>\miniforge3\condabin"

Collaborator comment (on the second code block): Why not put all 3 lines in the same code block?

Collaborator comment (on the ``$newPaths`` line): This will only work for Miniforge. If people have Anaconda or Miniconda, this will not work.

.. code-block:: powershell

[System.Environment]::SetEnvironmentVariable('Path', "$existingPath;$newPaths", 'Machine')

|

@@ -45,17 +56,14 @@ The following dependencies must be installed on the system before installing the
Install NPU Drivers
*******************

- Download and Install the NPU driver version: 32.0.203.280 or newer using the following links:
- Under "Task Manager" in Windows, go to Performance -> NPU0 to check the driver version.
- If needed, download the NPU driver version: 32.0.203.280 or the latest 32.0.203.304 here:
Copilot AI, Oct 15, 2025:

The instruction mentions 'here:' but the actual download links are on the following lines. Consider rephrasing to 'download the NPU driver from one of the following links:' for better clarity.

Suggested change:

- If needed, download the NPU driver version: 32.0.203.280 or the latest 32.0.203.304 from one of the following links:

- :download:`NPU Driver (Version 32.0.203.280) <https://account.amd.com/en/forms/downloads/ryzenai-eula-public-xef.html?filename=NPU_RAI1.5_280_WHQL.zip>`
- :download:`NPU Driver (Version 32.0.203.304) <https://account.amd.com/en/forms/downloads/ryzen-ai-software-platform-xef.html?filename=NPU_RAI1.6_304_WHQL.zip>`

- Install the NPU drivers by following these steps:

- Extract the downloaded ZIP file.
- Open a terminal in administrator mode and execute the ``.\npu_sw_installer.exe`` file.

- Ensure that NPU MCDM driver (Version:32.0.203.280, Date:5/16/2025) or (Version:32.0.203.304, Date:10/07/2025) is correctly installed by opening Task Manager -> Performance -> NPU0.
- Extract the downloaded ZIP file.
- Right click and "Run as administrator" on ``npu_sw_installer.exe``.
- Check that the NPU driver (Version:32.0.203.280, Date:5/16/2025) or (Version:32.0.203.304, Date:10/07/2025) was correctly installed by opening Task Manager -> Performance -> NPU0.


.. _install-bundled:
Expand All @@ -72,7 +80,7 @@ Install Ryzen AI Software
- Provide the destination folder for Ryzen AI installation (default: ``C:\Program Files\RyzenAI\1.6.0``)
- Specify the name for the conda environment (default: ``ryzen-ai-1.6.0``)

The Ryzen AI Software packages are now installed in the conda environment created by the installer.
The Ryzen AI Software packages should now be installed in the conda environment created by the installer.
Copilot AI, Oct 15, 2025:

Missing word 'be' in sentence. Should read 'should now be installed'.

Suggested change:

The Ryzen AI Software packages should now be installed in the conda environment created by the installer.

.. note::
**The LLM flow requires an additional patch installation.** See the next section (:ref:`apply-patch`) for instructions.