diff --git a/BlazeDetectionSample/Face/README.md b/BlazeDetectionSample/Face/README.md
index 1bef64da..114ab858 100644
--- a/BlazeDetectionSample/Face/README.md
+++ b/BlazeDetectionSample/Face/README.md
@@ -1,10 +1,10 @@
-# BlazeFace in Inference Engine
+# BlazeFace in Sentis
 
 BlazeFace is a fast, light-weight face detector from Google Research. A pretrained model is available as part of Google's [MediaPipe](https://ai.google.dev/edge/mediapipe/solutions/vision/face_detector) framework.
 
 ![](../images/face.jpg)
 
-The BlazeFace model has been converted from TFLite to ONNX for use in Inference Engine using [tf2onnx](https://github.com/onnx/tensorflow-onnx) with the default export parameters.
+The BlazeFace model has been converted from TFLite to ONNX for use in Sentis using [tf2onnx](https://github.com/onnx/tensorflow-onnx) with the default export parameters.
 
 ## Functional API
 
@@ -14,7 +14,7 @@ Each of the 896 boxes consists of:
 - [x position, y position, width, height] for the bounding box. The position is relative to the anchor position for the given index, these are precalculated and loaded from a csv file.
 - [x position, y position] for each of 6 facial keypoints relative to the anchor position.
 
-We adapt the model using the Inference Engine functional API to apply non maximum suppression to filter the boxes with the highest scores that don't overlap with each other.
+We adapt the model using the Sentis functional API to apply non maximum suppression to filter the boxes with the highest scores that don't overlap with each other.
 ```
 var xCenter = rawBoxes[0, .., 0] + anchors[.., 0] * inputSize;
 var yCenter = rawBoxes[0, .., 1] + anchors[.., 1] * inputSize;
@@ -68,4 +68,4 @@ In this demo we visualize the four faces with the highest scores that pass the s
 ## Notes
 
 This model has been trained primarily for short-range faces in images taken using the front-facing smartphone camera, results may be poor for longer-range images of faces.
-The non max suppression operator requires a blocking GPU readback, this prevents this demo from running on the WebGPU backend in Unity 6 and Inference Engine 2.2.
\ No newline at end of file
+The non max suppression operator requires a blocking GPU readback, which prevents this demo from running on the WebGPU backend in Unity 6 and Sentis 2.2.
\ No newline at end of file
diff --git a/BlazeDetectionSample/Hand/README.md b/BlazeDetectionSample/Hand/README.md
index 81b4f10a..691efe0e 100644
--- a/BlazeDetectionSample/Hand/README.md
+++ b/BlazeDetectionSample/Hand/README.md
@@ -1,10 +1,10 @@
-# BlazeHand in Inference Engine
+# BlazeHand in Sentis
 
 BlazeHand is a fast, light-weight hand detector from Google Research. Pretrained models are available as part of Google's [MediaPipe](https://ai.google.dev/edge/mediapipe/solutions/vision/hand_landmarker) framework.
 
 ![](../images/hand.jpg)
 
-The BlazeHand models have been converted from TFLite to ONNX for use in Inference Engine using [tf2onnx](https://github.com/onnx/tensorflow-onnx) with the default export parameters.
+The BlazeHand models have been converted from TFLite to ONNX for use in Sentis using [tf2onnx](https://github.com/onnx/tensorflow-onnx) with the default export parameters.
 
 ## Functional API
 
@@ -14,7 +14,7 @@ Each of the 2016 boxes consists of:
 - [x position, y position, width, height] for the palm bounding box. The position is relative to the anchor position for the given index, these are precalculated and loaded from a csv file.
 - [x position, y position] for each of 7 palm keypoints relative to the anchor position.
 
-We adapt the model using the Inference Engine functional API to apply arg max to filter the box with the highest score.
+We adapt the model using the Sentis functional API to apply arg max to filter the box with the highest score.
 ```
 var detectionScores = ScoreFiltering(rawScores, 100f); // (1, 2254, 1)
 var bestScoreIndex = Functional.ArgMax(rawScores, 1).Squeeze();
@@ -75,6 +75,6 @@ m_HandLandmarkerWorker.Schedule(m_LandmarkerInput);
 The output tensor of the landmarker model is asynchronously downloaded and once the values are on the CPU we use them together with the affine transformation matrix to set the transforms on the keypoints for visualization.
 
 ## WebGPU
-Unity 6 supports access to the WebGPU backend in early access. Inference Engine has full support for running models on the web using the WebGPU backend. Discover how to gain early access and test WebGPU in our [graphics forum](https://discussions.unity.com/t/early-access-to-the-new-webgpu-backend-in-unity-2023-3/933493).
+Unity 6 supports access to the WebGPU backend in early access. Sentis has full support for running models on the web using the WebGPU backend. Discover how to gain early access and test WebGPU in our [graphics forum](https://discussions.unity.com/t/early-access-to-the-new-webgpu-backend-in-unity-2023-3/933493).
 
 ![](../images/hand_webgpu.png)
\ No newline at end of file
diff --git a/BlazeDetectionSample/Pose/README.md b/BlazeDetectionSample/Pose/README.md
index 4236a173..8f474c08 100644
--- a/BlazeDetectionSample/Pose/README.md
+++ b/BlazeDetectionSample/Pose/README.md
@@ -1,10 +1,10 @@
-# BlazePose in Inference Engine
+# BlazePose in Sentis
 
 BlazePose is a fast, light-weight hand detector from Google Research. Pretrained models are available as part of Google's [MediaPipe](https://ai.google.dev/edge/mediapipe/solutions/vision/hand_landmarker) framework.
 
 ![](../images/pose.jpg)
 
-The BlazePose models have been converted from TFLite to ONNX for use in Inference Engine using [tf2onnx](https://github.com/onnx/tensorflow-onnx) with the default export parameters. Three variants of the landmarker model (lite, full, heavy) are provided which can be interchanged. The larger models may provide more accurate results but take longer to run.
+The BlazePose models have been converted from TFLite to ONNX for use in Sentis using [tf2onnx](https://github.com/onnx/tensorflow-onnx) with the default export parameters. Three variants of the landmarker model (lite, full, heavy) are provided which can be interchanged. The larger models may provide more accurate results but take longer to run.
 
 ## Functional API
 
@@ -14,7 +14,7 @@ Each of the 2254 boxes consists of:
 - [x position, y position, width, height] for the head bounding box. The position is relative to the anchor position for the given index, these are precalculated and loaded from a csv file.
 - [x position, y position] for each of 4 body keypoints relative to the anchor position.
 
-We adapt the model using the Inference Engine functional API to apply arg max to filter the box with the highest score.
+We adapt the model using the Sentis functional API to apply arg max to filter the box with the highest score.
 ```
 var detectionScores = ScoreFiltering(rawScores, 100f); // (1, 2254, 1)
 var bestScoreIndex = Functional.ArgMax(rawScores, 1).Squeeze();
@@ -77,6 +77,6 @@ m_PoseLandmarkerWorker.Schedule(m_LandmarkerInput);
 The output tensor of the landmarker model is asynchronously downloaded and once the values are on the CPU we use them together with the affine transformation matrix to set the transforms on the keypoints for visualization.
 
 ## WebGPU
-Unity 6 supports access to the WebGPU backend in early access. Inference Engine has full support for running models on the web using the WebGPU backend. Discover how to gain early access and test WebGPU in our [graphics forum](https://discussions.unity.com/t/early-access-to-the-new-webgpu-backend-in-unity-2023-3/933493).
+Unity 6 supports access to the WebGPU backend in early access. Sentis has full support for running models on the web using the WebGPU backend. Discover how to gain early access and test WebGPU in our [graphics forum](https://discussions.unity.com/t/early-access-to-the-new-webgpu-backend-in-unity-2023-3/933493).
 
 ![](../images/pose_webgpu.png)
\ No newline at end of file
diff --git a/BlazeDetectionSample/README.md b/BlazeDetectionSample/README.md
index 3d8ae251..4fd9fc8b 100644
--- a/BlazeDetectionSample/README.md
+++ b/BlazeDetectionSample/README.md
@@ -1,14 +1,14 @@
-# Blaze detection models in Inference Engine
+# Blaze detection models in Sentis
 
-The blaze family of models are light-weight models for real-time detection from Google Research. Here we demonstrate using these pretrained models in Unity using Inference Engine.
+The blaze family of models are light-weight models for real-time detection from Google Research. Here we demonstrate using these pretrained models in Unity using Sentis.
 
-We use the Inference Engine API to augment the models, run asynchronous inference on the GPU on all Unity backends.
+We use the Sentis API to augment the models and run asynchronous inference on the GPU on all Unity backends.
 
 These demos use images for detection, but can be easily adapted for videos or webcams.
 
 ## Face detection
 
-We use the BlazeFace detector model with the Inference Engine non max suppression operator to recognise multiple faces in an image. Each face has a score, bounding box and 6 keypoints.
+We use the BlazeFace detector model with the Sentis non max suppression operator to recognise multiple faces in an image. Each face has a score, bounding box and 6 keypoints.
 
 ![](./images/face.jpg)
 
diff --git a/BoardGameAISample/Assets/Scripts/Othello.cs b/BoardGameAISample/Assets/Scripts/Othello.cs
index c3a8c811..dc0c4252 100644
--- a/BoardGameAISample/Assets/Scripts/Othello.cs
+++ b/BoardGameAISample/Assets/Scripts/Othello.cs
@@ -175,7 +175,7 @@ void CreateEngine()
     {
         m_Engine?.Dispose();
 
-        // Load in the neural network that will make the move predictions for the spirit + create inference engine
+        // Load in the neural network that will make the move predictions for the spirit + create the Sentis engine
         var othelloModel = ModelLoader.Load(model);
         var graph = new FunctionalGraph();
 
diff --git a/ChatSample/Assets/ChatLLM/Editor/EditorChatWindow.cs b/ChatSample/Assets/ChatLLM/Editor/EditorChatWindow.cs
index 532b01de..d43e2a35 100644
--- a/ChatSample/Assets/ChatLLM/Editor/EditorChatWindow.cs
+++ b/ChatSample/Assets/ChatLLM/Editor/EditorChatWindow.cs
@@ -34,7 +34,7 @@ void OnDestroy()
         m_ChatWindow?.Dispose();
     }
 
-    [MenuItem("Inference Engine/Sample/Chat/Start Chat")]
+    [MenuItem("Sentis/Sample/Chat/Start Chat")]
     public static void OpenWindow()
     {
         var window = GetWindow();
@@ -43,13 +43,13 @@ public static void OpenWindow()
         window.Show();
     }
 
-    [MenuItem("Inference Engine/Sample/Chat/Start Chat", true)]
+    [MenuItem("Sentis/Sample/Chat/Start Chat", true)]
     public static bool OpenWindowValidate()
     {
         return ModelDownloaderWindow.VerifyModelsExist();
     }
 
-    [MenuItem("Inference Engine/Sample/Chat/Download Models")]
+    [MenuItem("Sentis/Sample/Chat/Download Models")]
     public static void DownloadModels()
     {
         var window = GetWindow();
@@ -58,7 +58,7 @@ public static void DownloadModels()
         window.Show();
     }
 
-    [MenuItem("Inference Engine/Sample/Chat/Download Models", true)]
+    [MenuItem("Sentis/Sample/Chat/Download Models", true)]
     public static bool DownloadModelsValidate()
     {
         return !ModelDownloaderWindow.VerifyModelsExist();
diff --git a/ChatSample/README.md b/ChatSample/README.md
index 1644908d..091fc0de 100644
--- a/ChatSample/README.md
+++ b/ChatSample/README.md
@@ -1,6 +1,6 @@
 # Chat LLM Sample
 
-Interactive chat interface powered by the LLaVA OneVision multimodal model running locally in Unity using Inference Engine.
+Interactive chat interface powered by the LLaVA OneVision multimodal model running locally in Unity using Sentis.
 
 ![Chat Interface](Documentation/main.png)
 
@@ -18,7 +18,7 @@ We use this to create a seamless conversational AI experience.
 ## Features
 
 - **Multimodal Understanding**: Processes both text and images in conversation
-- **Real-time Inference**: Fast GPU-accelerated inference using Unity's Inference Engine
+- **Real-time Inference**: Fast GPU-accelerated inference using Sentis
 - **Editor Integration**: Available as an Editor window for development and testing
 - **Streaming Responses**: Token-by-token response generation for responsive interaction
 - **Model Management**: HuggingFace model downloading
@@ -26,8 +26,8 @@ We use this to create a seamless conversational AI experience.
 ## Getting Started
 
 1. Open the Unity project
-2. Download models by navigating to **Inference Engine > Sample > Chat > Download Models** in the menu
+2. Download models by navigating to **Sentis > Sample > Chat > Download Models** in the menu
-3. Navigate to **Inference Engine > Sample > Chat > Start Chat** in the menu
+3. Navigate to **Sentis > Sample > Chat > Start Chat** in the menu
 4. Start chatting with the AI assistant!
 
 Alternatively, you can manually download the models from [https://huggingface.co/llava-hf/llava-onevision-qwen2-0.5b-si-hf](https://huggingface.co/llava-hf/llava-onevision-qwen2-0.5b-si-hf) and place them in `ChatSample/Assets/ChatLLM/Resources/Models/`.
diff --git a/License.md b/License.md
index 28c2e3e1..5a8b263c 100644
--- a/License.md
+++ b/License.md
@@ -1,4 +1,4 @@
-Inference Engine copyright © 2023 Unity Technologies.
+Sentis copyright © 2023 Unity Technologies.
 
 Licensed under the Unity Terms of Service as an Experimental / Evaluation Version ( see https://unity.com/legal/terms-of-service.).
 
diff --git a/ProteinFoldingSample/README.md b/ProteinFoldingSample/README.md
index c3c70811..b3edfab0 100644
--- a/ProteinFoldingSample/README.md
+++ b/ProteinFoldingSample/README.md
@@ -1,6 +1,6 @@
 # Protein Folding Sample
 
-This sample is not compatible with Inference Engine, only Sentis 1.X.
+This sample is only compatible with Sentis 1.X.
 
 Protein folding visualization demo showing how to render and visualize protein folding in real time 3D.
 
diff --git a/README.md b/README.md
index 33d12175..bc5950a4 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
-# Inference Engine Samples
+# Sentis Samples
 
-Contains example and template projects for Inference Engine package use.
+Contains example and template projects for Sentis package use.
 
 ## Samples
 [Text To Speech Sample](TextToSpeechSample/README.md)
diff --git a/StarSimulationSample/README.md b/StarSimulationSample/README.md
index 8ae76e0a..a2bd97df 100644
--- a/StarSimulationSample/README.md
+++ b/StarSimulationSample/README.md
@@ -1,6 +1,6 @@
 # Star Simulation Sample
 
-Star simulation demo showing how to use Inference Engine as a linear algebra library, solving equations of motions, all on the GPU.
+Star simulation demo showing how to use Sentis as a linear algebra library, solving equations of motion, all on the GPU.
 
 ![image info](./Documentation/main.jpg)
 
@@ -8,7 +8,7 @@ Star simulation demo showing how to use Inference Engine as a linear algebra lib
 
 Equations of motion can be written in matrix form (Hamiltonian mechanics https://en.wikipedia.org/wiki/Hamiltonian_mechanics).
 
-Inference Engine is at its core a tensor-based linear algebra engine, so you can use this to simulate a physical system in real time.
+Sentis is at its core a tensor-based linear algebra engine, so you can use this to simulate a physical system in real time.
 
 The strength of this solution resides in the simplicity of writing out the system and efficient CPU/GPU code handling the intense computations while they remain on the GPU or Unity job system.
 
@@ -40,7 +40,7 @@ We will use the current values to update the position and brightness of each sta
 
 We won't get into the specifics of the equations of motion here.
 
-But we write them out as matrix form (2D-tensor), using Inference Engine [functional api](https://docs.unity3d.com/Packages/com.unity.ai.inference@2.2/manual/create-a-new-model.html).
+But we write them out in matrix form (2D-tensor), using the Sentis [functional api](https://docs.unity3d.com/Packages/com.unity.ai.inference@2.2/manual/create-a-new-model.html).
 
 This allows us to define successive tensor operations in a few lines of code, defining the system of equations to compute every frame. We compile this set of operations to a `Model` which then can be used as usual.
 
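The Star Simulation README above describes expressing the per-frame update as tensor operations with the functional API and compiling them into a `Model`. A minimal sketch of that pattern is shown below. It is illustrative only and not code from the sample: the `AddInput`, `Compile`, `Worker` and `Schedule` calls are assumed from the Sentis 2.x functional API, and the state/update names are made up.

```
using Unity.Sentis; // assumed namespace for the Sentis 2.x package

public class MatrixUpdateSketch
{
    Worker m_Worker;

    // Build a tiny model: nextState = state x update, with both supplied as inputs each frame.
    public void CreateModel(int starCount)
    {
        var graph = new FunctionalGraph();
        var state = graph.AddInput<float>(new TensorShape(starCount, 4)); // e.g. position and velocity per star
        var update = graph.AddInput<float>(new TensorShape(4, 4));        // update matrix from the equations of motion
        var nextState = Functional.MatMul(state, update);                 // successive tensor ops go here
        var model = graph.Compile(nextState);                             // compile the ops into a Model
        m_Worker = new Worker(model, BackendType.GPUCompute);             // run it on the GPU
    }

    // Schedule one simulation step; the result stays on the GPU until it is explicitly read back.
    public Tensor<float> Step(Tensor<float> state, Tensor<float> update)
    {
        m_Worker.Schedule(state, update);
        return m_Worker.PeekOutput() as Tensor<float>;
    }

    public void Dispose() => m_Worker?.Dispose();
}
```

Feeding the previous output back in as the next frame's state keeps the whole loop on the GPU, which is the point the README makes about the computations remaining on the GPU or the Unity job system.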
diff --git a/TextToSpeechSample/Assets/Editor/AppEditorWindow.cs b/TextToSpeechSample/Assets/Editor/AppEditorWindow.cs
index ae511a4b..ff4046c4 100644
--- a/TextToSpeechSample/Assets/Editor/AppEditorWindow.cs
+++ b/TextToSpeechSample/Assets/Editor/AppEditorWindow.cs
@@ -16,7 +16,7 @@ void CreateGUI()
         visualTreeAsset.CloneTree(rootVisualElement);
     }
 
-    [MenuItem("Inference Engine/Sample/Text-To-Speech/Start Kokoro")]
+    [MenuItem("Sentis/Sample/Text-To-Speech/Start Kokoro")]
     public static void OpenWindow()
    {
         var window = GetWindow();
@@ -24,7 +24,7 @@ public static void OpenWindow()
         window.Show();
     }
 
-    [MenuItem("Inference Engine/Sample/Text-To-Speech/Start Kokoro", true)]
+    [MenuItem("Sentis/Sample/Text-To-Speech/Start Kokoro", true)]
     public static bool ValidateOpenWindow()
     {
         var configurations = UI.Network.ModelDownloaderWindow.GetDownloadConfigurations();
diff --git a/TextToSpeechSample/Assets/Editor/DownloadEditorWindow.cs b/TextToSpeechSample/Assets/Editor/DownloadEditorWindow.cs
index 526aceb1..577f0234 100644
--- a/TextToSpeechSample/Assets/Editor/DownloadEditorWindow.cs
+++ b/TextToSpeechSample/Assets/Editor/DownloadEditorWindow.cs
@@ -15,7 +15,7 @@ void CreateGUI()
         visualTreeAsset.CloneTree(rootVisualElement);
     }
 
-    [MenuItem("Inference Engine/Sample/Text-To-Speech/Download Models")]
+    [MenuItem("Sentis/Sample/Text-To-Speech/Download Models")]
     public static void OpenWindow()
     {
         var window = GetWindow();
diff --git a/TextToSpeechSample/README.md b/TextToSpeechSample/README.md
index 4feba0f9..13733161 100644
--- a/TextToSpeechSample/README.md
+++ b/TextToSpeechSample/README.md
@@ -1,6 +1,6 @@
 # Text-to-Speech Sample
 
-Interactive interface powered by the Kokoro Text-To-Speech model running locally in Unity using Inference Engine.
+Interactive interface powered by the Kokoro Text-To-Speech model running locally in Unity using Sentis.
 
 ![TTS Interface](Documentation/main.png)
 
@@ -20,7 +20,7 @@ We use this to create a seamless text-to-speech experience with natural-sounding
 
 - **Multiple Voices**: Choose from various pre-trained voice styles
 - **Speed Control**: Adjustable speech rate for different use cases
-- **Real-time Generation**: Fast GPU-accelerated inference using Unity's Inference Engine
+- **Real-time Generation**: Fast GPU-accelerated inference using Sentis
 - **Editor Integration**: Available as an Editor window for development and testing
 - **Cross-Platform**: Support for all Unity-supported platforms thanks to pure C# implementation
 - **Model Management**: Automated model downloading and setup
@@ -29,8 +29,8 @@ We use this to create a seamless text-to-speech experience with natural-sounding
 ## Getting Started
 
 1. Open the Unity project
-2. Download models by navigating to **Inference Engine > Sample > Text-To-Speech > Download Models** in the menu
+2. Download models by navigating to **Sentis > Sample > Text-To-Speech > Download Models** in the menu
-3. Navigate to **Inference Engine > Sample > Text-To-Speech > Start Kokoro** in the menu
+3. Navigate to **Sentis > Sample > Text-To-Speech > Start Kokoro** in the menu
 4. Enter text and generate speech with your chosen voice!
 
 Alternatively, you can use the runtime scene at `TextToSpeechSample/Assets/Scenes/App.unity`, but make sure to download the models beforehand using the editor menu.
@@ -62,4 +62,4 @@ The sample demonstrates:
 - State management using Redux patterns
 - Model scheduling and resource management
 - Cross-platform audio generation
-- Advanced text-to-phoneme processing with comprehensive English language support
\ No newline at end of file
+- Advanced text-to-phoneme processing with comprehensive English language support
diff --git a/catalog-info.yaml b/catalog-info.yaml
index f636441c..39f94b8c 100644
--- a/catalog-info.yaml
+++ b/catalog-info.yaml
@@ -5,7 +5,7 @@ metadata:
   annotations:
     github.com/project-slug: Unity-Technologies/sentis-samples
   name: sentis-samples
-  description: "Inference Engine samples internal development repository. Contains example and template project for Inference Engine package use."
+  description: "Sentis samples internal development repository. Contains example and template project for Sentis package use."
   labels:
     costcenter: "5010"
   tags:
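Many of the C# edits above move `[MenuItem]` paths from "Inference Engine/…" to "Sentis/…". These menu entries come in pairs: a plain `MenuItem` attribute registers the command, and a second attribute on the same path with `true` as the last argument registers a validation method that enables or disables it. A minimal, self-contained sketch of that pattern is shown below; the class name and the folder check are illustrative, not taken from the samples.

```
using UnityEditor;
using UnityEngine;

// Editor-only script: place it in an Editor folder or an editor-only assembly.
public class ExampleChatWindow : EditorWindow
{
    // Menu entry shown under the renamed "Sentis" menu.
    [MenuItem("Sentis/Sample/Chat/Start Chat")]
    public static void OpenWindow()
    {
        var window = GetWindow<ExampleChatWindow>();
        window.titleContent = new GUIContent("Chat");
        window.Show();
    }

    // Validation method for the same path: the entry is greyed out until this returns true.
    [MenuItem("Sentis/Sample/Chat/Start Chat", true)]
    public static bool OpenWindowValidate()
    {
        // Illustrative check; the samples call into their own model downloader instead.
        return System.IO.Directory.Exists("Assets/ChatLLM/Resources/Models");
    }
}
```

This pairing is what lets the samples disable "Start Chat" or "Start Kokoro" until the required models have been downloaded.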