overhaul elements of index.rst and inst.rst #497
base: develop
Changes from all commits: b08189d, 34fa6ef, 8f5bb95, 7c7344a, 6e34db3, 4f8f6d4, ddd814c, 26e743d
```diff
@@ -1,7 +1,7 @@
 :orphan:

 ########################
-Getting Started Tutorial
+ResNet50 Tutorial
 ########################

 This tutorial uses a fine-tuned version of the ResNet model (using the CIFAR-10 dataset) to demonstrate the process of preparing, quantizing, and deploying a model using Ryzen AI Software. The tutorial features deployment using both Python and C++ ONNX runtime code.
```
```diff
@@ -79,9 +79,10 @@ Step 2: Prepare dataset and ONNX model

 This example utilizes a custom ResNet model finetuned using the CIFAR-10 dataset

-The ``prepare_model_data.py`` script downloads the CIFAR-10 dataset in pickle format (for python) and binary format (for C++). This dataset is used in the subsequent steps for quantization and inference. The script also exports the provided PyTorch model into ONNX format. The following snippet from the script shows how the ONNX model is exported:
+The ``prepare_model_data.py`` script downloads the CIFAR-10 dataset in pickle format (for python) and binary format (for C++).
+*The CIFAR-10 dataset consists of 60,000 32x32 colour images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images.* You can learn more about the CIFAR-10 dataset here: https://www.cs.toronto.edu/~kriz/cifar.html. This dataset is used in the subsequent steps for quantization and inference. The script also exports the provided PyTorch model into ONNX format. The following snippet from the script shows how the ONNX model is exported:
```
**Comment on lines +83 to 84** — suggested change:

```diff
-*The CIFAR-10 dataset consists of 60,000 32x32 colour images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images.* You can learn more about the CIFAR-10 dataset here: https://www.cs.toronto.edu/~kriz/cifar.html. This dataset is used in the subsequent steps for quantization and inference. The script also exports the provided PyTorch model into ONNX format. The following snippet from the script shows how the ONNX model is exported:
+.. note::
+   The CIFAR-10 dataset consists of 60,000 32x32 colour images in 10 classes, with 6,000 images per class.
+   You can learn more about the CIFAR-10 dataset here: https://www.cs.toronto.edu/~kriz/cifar.html.
+
+This dataset is used in the subsequent steps for quantization and inference. The script also exports the provided PyTorch model into ONNX format. The following snippet from the script shows how the ONNX model is exported:
```
**Copilot AI** (Oct 15, 2025): Path uses backslashes which are Windows-specific. Consider using forward slashes for cross-platform compatibility or noting this is Windows-specific. Suggested change:

```diff
+.. note::
+   The following command uses Windows-style backslashes and is intended for use in a Windows environment.
```
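For context on the pickle format discussed above: each CIFAR-10 Python batch unpickles to a dict keyed by bytes objects, with b'data' holding rows of 3,072 bytes (32x32 pixels x 3 channels) and b'labels' holding class indices. A minimal stdlib sketch, using a tiny fabricated stand-in batch rather than the real download:

```python
import pickle

# Tiny fabricated stand-in for a CIFAR-10 Python batch (2 images instead of
# 10,000). Real batch files unpickle to a dict keyed by bytes objects.
fake_batch = {
    b"data": [bytes(range(256)) * 12 for _ in range(2)],  # 3072 bytes per image
    b"labels": [3, 7],                                    # class indices 0-9
}
blob = pickle.dumps(fake_batch)

# Loading mirrors reading a real batch file: pickle.load(f, encoding="bytes")
batch = pickle.loads(blob)
images, labels = batch[b"data"], batch[b"labels"]
print(len(images), len(images[0]), labels)  # → 2 3072 [3, 7]
```

The `encoding="bytes"` argument matters for the real files, which were pickled under Python 2; without it the dict keys fail to decode under Python 3.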
**index.rst**
```diff
@@ -7,19 +7,187 @@ AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing
 .. image:: images/rai-sw.png
    :align: center

-***********
-Quick Start
-***********
+.. _hardware-support:
+
+****************
+Hardware Support
+****************
+
+Ryzen AI 1.6 Software runs on AMD processors outlined below. For a more detailed list of supported devices, refer to the `processor specifications <https://www.amd.com/en/products/specifications/processors.html>`_ page (scroll to the "AMD Ryzen™ AI" column toward the right side of the table, and select "Available" from the pull-down menu). Support for Linux is coming soon in Ryzen AI 1.6.1.
+
+.. list-table:: Supported Ryzen AI Processor Configurations
```
**Collaborator:** This table will only grow and will be hard to maintain as we add support for more platforms. It's redundant with the https://www.amd.com/en/products/specifications/processors.html page, and we risk creating inconsistencies. Case in point, see the comment about Z2 below. I recommend we simply link to the official processor specification page.
```diff
+   :header-rows: 1
+   :widths: 25 25 12 22 12 10 10 10
+
+   * - Series
+     - Codename
+     - Abbreviation
+     - Graphics Model
+     - Ryzen™ AI Support
+     - Launch Year
+     - Windows
+     - Linux
+   * - Ryzen AI Max PRO 300 Series
+     - Strix Halo
+     - STX
+     - Radeon 8000S Series
+     - ✅
+     - 2025
+     - ☑️
+     -
+   * - Ryzen AI PRO 300 Series
+     - Strix Point / Krackan Point
+     - STX/KRK
+     - Radeon 800M Series
+     - ✅
+     - 2025
+     - ☑️
+     -
+   * - Ryzen AI Max 300 Series
+     - Strix Halo
+     - STX
+     - Radeon 8000S Series
+     - ✅
+     - 2025
+     - ☑️
+     -
+   * - Ryzen Z2
```
**Collaborator:** I don't believe this device is supported.
```diff
+     - Z2
+     - Z2
+     - Radeon
+     - ✅
+     - 2025
+     -
+     -
+   * - Ryzen AI 300 Series
+     - Strix Point
+     - STX
+     - Radeon 800M Series
+     - ✅
+     - 2025
+     - ☑️
+     -
+   * - Ryzen Pro 200 Series
+     - Hawk Point
+     - HPT
+     - Radeon 700M Series
+     - ✅
+     - 2025
+     - ☑️
+     -
+   * - Ryzen 200 Series
+     - Hawk Point
+     - HPT
+     - Radeon 700M Series
+     - ✅
+     - 2025
+     - ☑️
+     -
+   * - Ryzen PRO 8000 Series
+     - Hawk Point
+     - HPT
+     - Radeon 700M Series
+     - ✅
+     - 2024
+     - ☑️
+     -
+   * - Ryzen 8000 Series
+     - Hawk Point
+     - HPT
+     - Radeon 700M Series
+     - ✅
+     - 2024
+     - ☑️
+     -
+   * - Ryzen Pro 7000 Series
+     - Phoenix
+     - PHX
+     - Radeon 700M Series
+     - ✅
+     - 2023
+     - ☑️
+     -
+   * - Ryzen 7000 Series
+     - Phoenix
+     - PHX
+     - Radeon 700M Series
+     - ✅
+     - 2023
+     - ☑️
+     -
+
+************
+LLMs Support
+************
+
+Ryzen AI 1.6 supports running LLMs on the hardware configurations in the table below.
+
+.. list-table:: LLM Support on Ryzen AI Processors
+   :header-rows: 1
+   :widths: 25 25 25 25 25 25
+
+   * - Processor Series
+     - Codename
+     - CPU
+     - GPU
+     - NPU
+     - Hybrid (NPU + iGPU)
+   * - Ryzen AI 300
```
**Collaborator:** We support LLMs on all STX and KRK platforms, not all Ryzen AI 300. The first column in this table is not needed and should be removed.
```diff
+     - STX/KRK
+     - ✓
+     - ✓
+     - ✓
+     - ✓
+   * - Ryzen AI 7000/8000/200
+     - PHX/HPT
+     - ✓
+     - ✓
+     - ✗
+     - ✗
+
+For more details on running LLMs, refer to the :doc:`llm/overview` page.
+
+*******************
+Other Model Support
+*******************
+
+The following table lists which types of models are supported on the different hardware platforms.
+
+.. list-table::
+   :header-rows: 1
+
+   * - Model Type
```
**Collaborator:** Why mention CPU and GPU for LLMs and not for other models? BF16 models can run on CPU and GPU on PHX/HPT.
```diff
+     - STX/KRK
+     - PHX/HPT
+   * - CNN INT8
+     - ✓
+     - ✓
+   * - CNN BF16
+     - ✓
+     -
+   * - NLP BF16
+     - ✓
+     -
+
+***********************
+Installation & Examples
+***********************
+
+To get started with installing and using Ryzen AI Software, visit the following:
+
+- :ref:`Supported Configurations <supported-configurations>`
+- :doc:`inst`
+- :doc:`examples`
+
+*************************
+Development Flow Overview
+*************************
+
+The Ryzen AI development flow does not require any modifications to the existing model training processes and methods. The pre-trained model can be used as the starting point of the Ryzen AI flow.
+A typical Ryzen AI flow might look like the following:
```
**Collaborator:** This is accurate for CNNs, but not for BF16 NLPs (no quantization step) or LLMs (OGA flow).
```diff
+
+1. Begin with a pretrained PyTorch (*.pt) model.
+2. Convert the model to ONNX (*.onnx) format. You can follow the PyTorch documentation here: `Export a PyTorch model to ONNX <https://docs.pytorch.org/tutorials/beginner/onnx/export_simple_model_to_onnx_tutorial.html>`_.
+3. Optionally, quantize the model with `AMD Quark <https://quark.docs.amd.com/latest/>`_ for a reduced model size.
+4. Deploy the model for inference in your application.
+5. Run the :doc:`ai_analyzer` to assess model performance.
+
+.. note::
+   You may find that you can skip steps 1-3 and deploy a model right away if you already have an ONNX model that fits on your device.

 Quantization
 ============
```
**inst.rst**
```diff
@@ -1,16 +1,16 @@
 .. include:: /icons.txt

-#########################
-Installation Instructions
-#########################
+#################################
+Windows Installation Instructions
+#################################

 *************
 Prerequisites
 *************

-The Ryzen AI Software supports AMD processors with a Neural Processing Unit (NPU). Refer to the release notes for the full list of :ref:`supported configurations <supported-configurations>`.
+The Ryzen AI Software supports AMD processors with a Neural Processing Unit (NPU). For a list of supported hardware configurations, refer to :ref:`hardware-support`.

 The following dependencies must be installed on the system before installing the Ryzen AI Software:
```
```diff
@@ -21,21 +21,32 @@ The following dependencies must be installed on the system before installing the
    * - Dependencies
      - Version Requirement
    * - Windows 11
-     - build >= 22621.3527
-   * - Visual Studio
-     - 2022
-   * - cmake
-     - version >= 3.26
-   * - Python distribution (Miniforge preferred)
-     - Latest version
+     - >= 22621.3527
+   * - `Visual Studio Community <https://apps.microsoft.com/detail/xpdcfjdklzjlp8?hl=en-US&gl=US>`_
+     - 2022 with `Desktop Development with C++` checked
+   * - `cmake <https://cmake.org/download/>`_
+     - >= 3.26
+   * - `Python (Miniforge preferred) <https://conda-forge.org/download/>`_
```
**Collaborator:** Should we really say "Miniforge preferred"?
```diff
+     - >= 3.10
+   * - :ref:`install-driver`
+     - >= 32.0.203.280

 |

 |warning| **IMPORTANT**:

-- Visual Studio 2022 Community: ensure that `Desktop Development with C++` is installed
-- Miniforge: ensure that the following path is set in the System PATH variable: ``path\to\miniforge3\condabin`` or ``path\to\miniforge3\Scripts\`` or ``path\to\miniforge3\`` (The System PATH variable should be set in the *System Variables* section of the *Environment Variables* window).
+- Miniforge: Ensure that the proper Miniforge paths are set in the System PATH variable. Open Windows PowerShell by right-clicking and selecting "Run as administrator" to set system PATH environment variables. After opening a command prompt, you can use the following code to add the appropriate environment variables, substituting your actual paths:
+
+  .. code-block:: powershell
+
+     $existingPath = [System.Environment]::GetEnvironmentVariable('Path', 'Machine')
+
+  .. code-block:: powershell
```
**Collaborator:** Why not put all 3 lines in the same code block?
```diff
+
+     $newPaths = "C:\Users\<user>\miniforge3\Scripts;C:\Users\<user>\miniforge3\condabin"
```
**Collaborator:** This will only work for Miniforge. If people have Anaconda or Miniconda, this will not work.
```diff
+
+  .. code-block:: powershell
+
+     [System.Environment]::SetEnvironmentVariable('Path', "$existingPath;$newPaths", 'Machine')
+
+|
```
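After running the PowerShell commands above, the result can be sanity-checked from Python. The following is a minimal sketch; the `missing_path_entries` helper and the Miniforge locations are illustrative assumptions, not part of the installer:

```python
# Sanity-check sketch: report which required directories are absent from a
# Windows-style (semicolon-separated) PATH string.
def missing_path_entries(required_dirs, path_value):
    # Normalize by dropping trailing slashes and lowercasing (Windows paths
    # are case-insensitive), then report anything not found.
    entries = {p.rstrip("\\/").lower() for p in path_value.split(";") if p}
    return [d for d in required_dirs if d.rstrip("\\/").lower() not in entries]

# Hypothetical Miniforge locations, mirroring the PowerShell snippet above:
required = [r"C:\Users\<user>\miniforge3\Scripts",
            r"C:\Users\<user>\miniforge3\condabin"]
demo_path = r"C:\Windows\System32;C:\Users\<user>\miniforge3\Scripts"

print(missing_path_entries(required, demo_path))
# → ['C:\\Users\\<user>\\miniforge3\\condabin']  (condabin still needs adding)
```

In practice you would pass `os.environ["PATH"]` from a fresh shell instead of the demo string, since a running process does not see Machine-scope PATH changes made after it started.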
```diff
@@ -45,17 +56,14 @@ The following dependencies must be installed on the system before installing the
 *******************
 Install NPU Drivers
 *******************

-- Download and Install the NPU driver version: 32.0.203.280 or newer using the following links:
+- Under "Task Manager" in Windows, go to Performance -> NPU0 to check the driver version.
+- If needed, download the NPU driver version: 32.0.203.280 or the latest 32.0.203.304 here:
```
Suggested change:

```diff
-- If needed, download the NPU driver version: 32.0.203.280 or the latest 32.0.203.304 here:
+- If needed, download the NPU driver version: 32.0.203.280 or the latest 32.0.203.304 from one of the following links:
```
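One pitfall when checking the driver versions above: dotted versions like 32.0.203.280 compare component-wise as integers, not as strings. A small Python sketch (the `version_tuple` helper is illustrative, not part of the Ryzen AI tooling):

```python
# Compare dotted driver versions numerically, component by component.
def version_tuple(version):
    return tuple(int(part) for part in version.split("."))

required = version_tuple("32.0.203.280")

print(version_tuple("32.0.203.304") >= required)  # → True: .304 meets the minimum
print(version_tuple("32.0.203.90") >= required)   # → False, although a plain string
                                                  #   compare would wrongly say yes
```

Tuple comparison in Python is lexicographic over the integer components, which matches how driver version numbers are intended to be ordered.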
**Copilot AI** (Oct 15, 2025): Missing word 'be' in sentence. Should read 'should now be installed'. Suggested change:

```diff
-The Ryzen AI Software packages should now installed in the conda environment created by the installer.
+The Ryzen AI Software packages should now be installed in the conda environment created by the installer.
```
**Comment:** Why remove "Getting Started" from the description?