Merged
8 changes: 4 additions & 4 deletions BlazeDetectionSample/Face/README.md
@@ -1,10 +1,10 @@
-# BlazeFace in Inference Engine
+# BlazeFace in Sentis

BlazeFace is a fast, light-weight face detector from Google Research. A pretrained model is available as part of Google's [MediaPipe](https://ai.google.dev/edge/mediapipe/solutions/vision/face_detector) framework.

![](../images/face.jpg)

-The BlazeFace model has been converted from TFLite to ONNX for use in Inference Engine using [tf2onnx](https://github.com/onnx/tensorflow-onnx) with the default export parameters.
+The BlazeFace model has been converted from TFLite to ONNX for use in Sentis using [tf2onnx](https://github.com/onnx/tensorflow-onnx) with the default export parameters.

## Functional API

@@ -14,7 +14,7 @@ Each of the 896 boxes consists of:
- [x position, y position, width, height] for the bounding box. The position is relative to the anchor position for the given index; these anchors are precalculated and loaded from a CSV file.
- [x position, y position] for each of 6 facial keypoints relative to the anchor position.

-We adapt the model using the Inference Engine functional API to apply non-maximum suppression, keeping the highest-scoring boxes that do not overlap with each other.
+We adapt the model using the Sentis functional API to apply non-maximum suppression, keeping the highest-scoring boxes that do not overlap with each other.
```
var xCenter = rawBoxes[0, .., 0] + anchors[.., 0] * inputSize;
var yCenter = rawBoxes[0, .., 1] + anchors[.., 1] * inputSize;
@@ -68,4 +68,4 @@ In this demo we visualize the four faces with the highest scores that pass the s
## Notes
This model has been trained primarily for short-range faces in images taken with a front-facing smartphone camera; results may be poor for longer-range images of faces.

-The non-max-suppression operator requires a blocking GPU readback, which prevents this demo from running on the WebGPU backend in Unity 6 and Inference Engine 2.2.
+The non-max-suppression operator requires a blocking GPU readback, which prevents this demo from running on the WebGPU backend in Unity 6 and Sentis 2.2.
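
For reference, a rough sketch of how the decoded boxes and scores from the functional-API snippet above could be fed into the non-max-suppression step. `rawScores` stands for the detector's score output, and `Functional.Stack`, `Functional.Sigmoid`, the `Functional.NMS` argument order, and the threshold values are assumptions here; check the Sentis API reference for the exact operator signatures.
```
// Sketch only -- continues the anchor decoding shown above; not the sample's actual code.
var w = rawBoxes[0, .., 2];   // assuming width/height are already in input-pixel units,
var h = rawBoxes[0, .., 3];   // like the x/y offsets above
var boxes = Functional.Stack(new[] { xCenter, yCenter, w, h }, 1);   // (896, 4)
var scores = Functional.Sigmoid(rawScores[0, .., 0]);                // (896)
// Assumed argument order mirrors ONNX NonMaxSuppression: boxes, scores, IoU threshold, score threshold.
var selectedIndices = Functional.NMS(boxes, scores, 0.3f, 0.5f);
```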
8 changes: 4 additions & 4 deletions BlazeDetectionSample/Hand/README.md
@@ -1,10 +1,10 @@
-# BlazeHand in Inference Engine
+# BlazeHand in Sentis

BlazeHand is a fast, light-weight hand detector from Google Research. Pretrained models are available as part of Google's [MediaPipe](https://ai.google.dev/edge/mediapipe/solutions/vision/hand_landmarker) framework.

![](../images/hand.jpg)

-The BlazeHand models have been converted from TFLite to ONNX for use in Inference Engine using [tf2onnx](https://github.com/onnx/tensorflow-onnx) with the default export parameters.
+The BlazeHand models have been converted from TFLite to ONNX for use in Sentis using [tf2onnx](https://github.com/onnx/tensorflow-onnx) with the default export parameters.

## Functional API

@@ -14,7 +14,7 @@ Each of the 2016 boxes consists of:
- [x position, y position, width, height] for the palm bounding box. The position is relative to the anchor position for the given index; these anchors are precalculated and loaded from a CSV file.
- [x position, y position] for each of 7 palm keypoints relative to the anchor position.

-We adapt the model using the Inference Engine functional API to apply an arg max and select the single box with the highest score.
+We adapt the model using the Sentis functional API to apply an arg max and select the single box with the highest score.
```
var detectionScores = ScoreFiltering(rawScores, 100f); // (1, 2016, 1)
var bestScoreIndex = Functional.ArgMax(rawScores, 1).Squeeze();
@@ -75,6 +75,6 @@ m_HandLandmarkerWorker.Schedule(m_LandmarkerInput);
The output tensor of the landmarker model is downloaded asynchronously; once the values are on the CPU, we use them together with the affine transformation matrix to set the transforms on the keypoints for visualization.
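
A minimal sketch of that async readback step in Sentis 2.x. `ReadbackAndCloneAsync`, the flat keypoint layout, and the `m_KeypointTransforms` / `m_CropToImageMatrix` names are assumptions for illustration rather than the sample's actual code.
```
// Inside an async method, after m_HandLandmarkerWorker.Schedule(...) has run.
var output = m_HandLandmarkerWorker.PeekOutput() as Tensor<float>;  // still on the GPU
using var landmarks = await output.ReadbackAndCloneAsync();         // non-blocking GPU -> CPU copy

for (var i = 0; i < m_KeypointTransforms.Length; i++)
{
    // Assume 3 floats per keypoint (x, y, z) in the landmarker's crop space; map back
    // to image space with the same affine matrix that was used to build the crop.
    var p = new Vector3(landmarks[0, 3 * i + 0], landmarks[0, 3 * i + 1], landmarks[0, 3 * i + 2]);
    m_KeypointTransforms[i].position = m_CropToImageMatrix.MultiplyPoint3x4(p);
}
```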

## WebGPU
-Unity 6 provides early access to the WebGPU backend, and Inference Engine fully supports running models on the web with it. Discover how to gain early access and test WebGPU in our [graphics forum](https://discussions.unity.com/t/early-access-to-the-new-webgpu-backend-in-unity-2023-3/933493).
+Unity 6 provides early access to the WebGPU backend, and Sentis fully supports running models on the web with it. Discover how to gain early access and test WebGPU in our [graphics forum](https://discussions.unity.com/t/early-access-to-the-new-webgpu-backend-in-unity-2023-3/933493).

![](../images/hand_webgpu.png)
8 changes: 4 additions & 4 deletions BlazeDetectionSample/Pose/README.md
@@ -1,10 +1,10 @@
-# BlazePose in Inference Engine
+# BlazePose in Sentis

BlazePose is a fast, light-weight pose detector from Google Research. Pretrained models are available as part of Google's [MediaPipe](https://ai.google.dev/edge/mediapipe/solutions/vision/pose_landmarker) framework.

![](../images/pose.jpg)

-The BlazePose models have been converted from TFLite to ONNX for use in Inference Engine using [tf2onnx](https://github.com/onnx/tensorflow-onnx) with the default export parameters. Three interchangeable variants of the landmarker model (lite, full, heavy) are provided. The larger models may provide more accurate results but take longer to run.
+The BlazePose models have been converted from TFLite to ONNX for use in Sentis using [tf2onnx](https://github.com/onnx/tensorflow-onnx) with the default export parameters. Three interchangeable variants of the landmarker model (lite, full, heavy) are provided. The larger models may provide more accurate results but take longer to run.

## Functional API

@@ -14,7 +14,7 @@ Each of the 2254 boxes consists of:
- [x position, y position, width, height] for the head bounding box. The position is relative to the anchor position for the given index; these anchors are precalculated and loaded from a CSV file.
- [x position, y position] for each of 4 body keypoints relative to the anchor position.

-We adapt the model using the Inference Engine functional API to apply an arg max and select the single box with the highest score.
+We adapt the model using the Sentis functional API to apply an arg max and select the single box with the highest score.
```
var detectionScores = ScoreFiltering(rawScores, 100f); // (1, 2254, 1)
var bestScoreIndex = Functional.ArgMax(rawScores, 1).Squeeze();
@@ -77,6 +77,6 @@ m_PoseLandmarkerWorker.Schedule(m_LandmarkerInput);
The output tensor of the landmarker model is downloaded asynchronously; once the values are on the CPU, we use them together with the affine transformation matrix to set the transforms on the keypoints for visualization.

## WebGPU
-Unity 6 provides early access to the WebGPU backend, and Inference Engine fully supports running models on the web with it. Discover how to gain early access and test WebGPU in our [graphics forum](https://discussions.unity.com/t/early-access-to-the-new-webgpu-backend-in-unity-2023-3/933493).
+Unity 6 provides early access to the WebGPU backend, and Sentis fully supports running models on the web with it. Discover how to gain early access and test WebGPU in our [graphics forum](https://discussions.unity.com/t/early-access-to-the-new-webgpu-backend-in-unity-2023-3/933493).

![](../images/pose_webgpu.png)
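
Both the hand and pose graphs above call a `ScoreFiltering` helper before the arg max. A plausible implementation, assuming the second argument is a clamp limit applied to the raw logits before a sigmoid (as in MediaPipe's own post-processing) and that `Functional.Clamp` and `Functional.Sigmoid` exist with these signatures; the sample's actual helper may differ.
```
FunctionalTensor ScoreFiltering(FunctionalTensor rawScores, float scoreClippingThreshold)
{
    // Clamp the raw logits to keep the sigmoid numerically stable, then map to [0, 1].
    var clipped = Functional.Clamp(rawScores, -scoreClippingThreshold, scoreClippingThreshold);
    return Functional.Sigmoid(clipped); // e.g. (1, 2254, 1) probabilities for BlazePose
}
```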
8 changes: 4 additions & 4 deletions BlazeDetectionSample/README.md
@@ -1,14 +1,14 @@
-# Blaze detection models in Inference Engine
+# Blaze detection models in Sentis

-The Blaze family comprises light-weight models for real-time detection from Google Research. Here we demonstrate how to use these pretrained models in Unity with Inference Engine.
+The Blaze family comprises light-weight models for real-time detection from Google Research. Here we demonstrate how to use these pretrained models in Unity with Sentis.

-We use the Inference Engine API to augment the models and run asynchronous inference on the GPU across all Unity backends.
+We use the Sentis API to augment the models and run asynchronous inference on the GPU across all Unity backends.
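
A minimal sketch of that basic pattern: load a model, create a GPU worker, and schedule inference each frame. The asset and texture names and the `TextureConverter.ToTensor` arguments are placeholders, and the `Worker`/`BackendType` usage follows the Sentis 2.x API; the samples' own scripts add the functional-graph post-processing and async readback on top of this.
```
using Unity.Sentis;
using UnityEngine;

public class DetectorExample : MonoBehaviour
{
    public ModelAsset modelAsset;   // e.g. the augmented BlazeFace model
    public Texture2D inputImage;
    Worker m_Worker;

    void Start()
    {
        var model = ModelLoader.Load(modelAsset);
        m_Worker = new Worker(model, BackendType.GPUCompute); // GPU inference
    }

    void Update()
    {
        // Convert the texture to a tensor and schedule inference; outputs are read back
        // asynchronously later so the main thread never blocks on the GPU.
        using var input = TextureConverter.ToTensor(inputImage, 256, 256, 3);
        m_Worker.Schedule(input);
    }

    void OnDestroy() => m_Worker?.Dispose();
}
```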

These demos use images for detection, but they can easily be adapted for videos or webcams.

## Face detection

-We use the BlazeFace detector model with the Inference Engine non-max-suppression operator to recognise multiple faces in an image. Each face has a score, a bounding box and 6 keypoints.
+We use the BlazeFace detector model with the Sentis non-max-suppression operator to recognise multiple faces in an image. Each face has a score, a bounding box and 6 keypoints.

![](./images/face.jpg)

2 changes: 1 addition & 1 deletion BoardGameAISample/Assets/Scripts/Othello.cs
@@ -175,7 +175,7 @@ void CreateEngine()
{
m_Engine?.Dispose();

-// Load in the neural network that will make the move predictions for the spirit + create inference engine
+// Load in the neural network that will make the move predictions for the spirit + create Sentis
var othelloModel = ModelLoader.Load(model);

var graph = new FunctionalGraph();
8 changes: 4 additions & 4 deletions ChatSample/Assets/ChatLLM/Editor/EditorChatWindow.cs
@@ -34,7 +34,7 @@ void OnDestroy()
m_ChatWindow?.Dispose();
}

[MenuItem("Inference Engine/Sample/Chat/Start Chat")]
[MenuItem("Sentis/Sample/Chat/Start Chat")]
public static void OpenWindow()
{
var window = GetWindow<EditorChatWindow>();
@@ -43,13 +43,13 @@ public static void OpenWindowValidate()
window.Show();
}

[MenuItem("Inference Engine/Sample/Chat/Start Chat", true)]
[MenuItem("Sentis/Sample/Chat/Start Chat", true)]
public static bool OpenWindowValidate()
{
return ModelDownloaderWindow.VerifyModelsExist();
}

[MenuItem("Inference Engine/Sample/Chat/Download Models")]
[MenuItem("Sentis/Sample/Chat/Download Models")]
public static void DownloadModels()
{
var window = GetWindow<ModelDownloaderEditorWindow>();
@@ -58,7 +58,7 @@ public static void DownloadModels()
window.Show();
}

[MenuItem("Inference Engine/Sample/Chat/Download Models", true)]
[MenuItem("Sentis/Sample/Chat/Download Models", true)]
public static bool DownloadModelsValidate()
{
return !ModelDownloaderWindow.VerifyModelsExist();
8 changes: 4 additions & 4 deletions ChatSample/README.md
@@ -1,6 +1,6 @@
# Chat LLM Sample

-Interactive chat interface powered by the LLaVA OneVision multimodal model running locally in Unity using Inference Engine.
+Interactive chat interface powered by the LLaVA OneVision multimodal model running locally in Unity using Sentis.

![Chat Interface](Documentation/main.png)

@@ -18,16 +18,16 @@ We use this to create a seamless conversational AI experience.
## Features

- **Multimodal Understanding**: Processes both text and images in conversation
-- **Real-time Inference**: Fast GPU-accelerated inference using Unity's Inference Engine
+- **Real-time Inference**: Fast GPU-accelerated inference using Unity's Sentis
- **Editor Integration**: Available as an Editor window for development and testing
- **Streaming Responses**: Token-by-token response generation for responsive interaction
- **Model Management**: HuggingFace model downloading

## Getting Started

1. Open the Unity project
-2. Download models by navigating to **Inference Engine > Sample > Chat > Download Models** in the menu
-3. Navigate to **Inference Engine > Sample > Chat > Start Chat** in the menu
+2. Download models by navigating to **Sentis > Sample > Chat > Download Models** in the menu
+3. Navigate to **Sentis > Sample > Chat > Start Chat** in the menu
4. Start chatting with the AI assistant!

Alternatively, you can manually download the models from [https://huggingface.co/llava-hf/llava-onevision-qwen2-0.5b-si-hf](https://huggingface.co/llava-hf/llava-onevision-qwen2-0.5b-si-hf) and place them in `ChatSample/Assets/ChatLLM/Resources/Models/`.
2 changes: 1 addition & 1 deletion License.md
@@ -1,4 +1,4 @@
-Inference Engine copyright © 2023 Unity Technologies.
+Sentis copyright © 2023 Unity Technologies.
Collaborator (author): @gilescoope We want to do this one?

Collaborator: yes

Licensed under the Unity Terms of Service as an Experimental / Evaluation Version (see https://unity.com/legal/terms-of-service).

2 changes: 1 addition & 1 deletion ProteinFoldingSample/README.md
@@ -1,6 +1,6 @@
# Protein Folding Sample

-This sample is not compatible with Inference Engine, only Sentis 1.X.
+This sample is only compatible with Sentis 1.X.

Protein folding demo showing how to render and visualize protein folding in real-time 3D.

4 changes: 2 additions & 2 deletions README.md
@@ -1,6 +1,6 @@
-# Inference Engine Samples
+# Sentis Samples

-Contains example and template projects for Inference Engine package use.
+Contains example and template projects for Sentis package use.
## Samples
[Text To Speech Sample](TextToSpeechSample/README.md)

6 changes: 3 additions & 3 deletions StarSimulationSample/README.md
@@ -1,14 +1,14 @@
# Star Simulation Sample

-Star simulation demo showing how to use Inference Engine as a linear algebra library, solving equations of motion, all on the GPU.
+Star simulation demo showing how to use Sentis as a linear algebra library, solving equations of motion, all on the GPU.

![image info](./Documentation/main.jpg)

## Idea

Equations of motion can be written in matrix form (see [Hamiltonian mechanics](https://en.wikipedia.org/wiki/Hamiltonian_mechanics)).

-Inference Engine is at its core a tensor-based linear algebra engine, so you can use it to simulate a physical system in real time.
+Sentis is at its core a tensor-based linear algebra engine, so you can use it to simulate a physical system in real time.

The strength of this approach lies in how simply the system can be written out, while efficient CPU/GPU code handles the heavy computation and the data stays on the GPU or in the Unity job system.

@@ -40,7 +40,7 @@ We will use the current values to update the position and brightness of each sta

We won't get into the specifics of the equations of motion here.

-But we write them out in matrix form (as a 2D tensor), using the Inference Engine [functional API](https://docs.unity3d.com/Packages/[email protected]/manual/create-a-new-model.html).
+But we write them out in matrix form (as a 2D tensor), using the Sentis [functional API](https://docs.unity3d.com/Packages/[email protected]/manual/create-a-new-model.html).

This allows us to define successive tensor operations in a few lines of code, describing the system of equations to compute every frame.
We compile this set of operations into a `Model`, which can then be used as usual.
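
A minimal sketch of that pattern, using a toy central-force update rather than the sample's actual Hamiltonian. `AddInput`, `Compile`, and the arithmetic operator overloads are assumed to follow the Sentis 2.x functional API, and `starCount` and the constants are placeholders.
```
var graph = new FunctionalGraph();
var position = graph.AddInput<float>(new TensorShape(starCount, 3));
var velocity = graph.AddInput<float>(new TensorShape(starCount, 3));

const float dt = 0.02f;          // time step per frame
const float strength = 0.5f;     // toy force constant, not the sample's value

// Symplectic Euler step written as tensor operations: the whole update stays on the GPU.
var acceleration = -strength * position;             // toy central force toward the origin
var newVelocity = velocity + dt * acceleration;
var newPosition = position + dt * newVelocity;

var model = graph.Compile(newPosition, newVelocity); // the per-frame update compiled to a Model
var worker = new Worker(model, BackendType.GPUCompute);
```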
4 changes: 2 additions & 2 deletions TextToSpeechSample/Assets/Editor/AppEditorWindow.cs
@@ -16,15 +16,15 @@ void CreateGUI()
visualTreeAsset.CloneTree(rootVisualElement);
}

[MenuItem("Inference Engine/Sample/Text-To-Speech/Start Kokoro")]
[MenuItem("Sentis/Sample/Text-To-Speech/Start Kokoro")]
public static void OpenWindow()
{
var window = GetWindow<AppEditorWindow>();
window.minSize = new Vector2(300, 400);
window.Show();
}

[MenuItem("Inference Engine/Sample/Text-To-Speech/Start Kokoro", true)]
[MenuItem("Sentis/Sample/Text-To-Speech/Start Kokoro", true)]
public static bool ValidateOpenWindow()
{
var configurations = UI.Network.ModelDownloaderWindow.GetDownloadConfigurations();
2 changes: 1 addition & 1 deletion TextToSpeechSample/Assets/Editor/DownloadEditorWindow.cs
@@ -15,7 +15,7 @@ void CreateGUI()
visualTreeAsset.CloneTree(rootVisualElement);
}

[MenuItem("Inference Engine/Sample/Text-To-Speech/Download Models")]
[MenuItem("Sentis/Sample/Text-To-Speech/Download Models")]
public static void OpenWindow()
{
var window = GetWindow<DownloadEditorWindow>();
10 changes: 5 additions & 5 deletions TextToSpeechSample/README.md
@@ -1,6 +1,6 @@
# Text-to-Speech Sample

-Interactive interface powered by the Kokoro Text-To-Speech model running locally in Unity using Inference Engine.
+Interactive interface powered by the Kokoro Text-To-Speech model running locally in Unity using Sentis.

![TTS Interface](Documentation/main.png)

@@ -20,7 +20,7 @@ We use this to create a seamless text-to-speech experience with natural-sounding

- **Multiple Voices**: Choose from various pre-trained voice styles
- **Speed Control**: Adjustable speech rate for different use cases
-- **Real-time Generation**: Fast GPU-accelerated inference using Unity's Inference Engine
+- **Real-time Generation**: Fast GPU-accelerated inference using Sentis
- **Editor Integration**: Available as an Editor window for development and testing
- **Cross-Platform**: Support for all Unity-supported platforms thanks to pure C# implementation
- **Model Management**: Automated model downloading and setup
@@ -29,8 +29,8 @@ We use this to create a seamless text-to-speech experience with natural-sounding
## Getting Started

1. Open the Unity project
-2. Download models by navigating to **Inference Engine > Sample > Text-To-Speech > Download Models** in the menu
-3. Navigate to **Inference Engine > Sample > Text-To-Speech > Start Kokoro** in the menu
+2. Download models by navigating to **Sentis > Sample > Text-To-Speech > Download Models** in the menu
+3. Navigate to **Sentis > Sample > Text-To-Speech > Start Kokoro** in the menu
4. Enter text and generate speech with your chosen voice!

Alternatively, you can use the runtime scene at `TextToSpeechSample/Assets/Scenes/App.unity`, but make sure to download the models beforehand using the editor menu.
@@ -62,4 +62,4 @@ The sample demonstrates:
- State management using Redux patterns
- Model scheduling and resource management
- Cross-platform audio generation
-- Advanced text-to-phoneme processing with comprehensive English language support
+- Advanced text-to-phoneme processing with comprehensive English language support
2 changes: 1 addition & 1 deletion catalog-info.yaml
@@ -5,7 +5,7 @@ metadata:
annotations:
github.com/project-slug: Unity-Technologies/sentis-samples
name: sentis-samples
description: "Inference Engine samples internal development repository. Contains example and template project for Inference Engine package use."
description: "Sentis samples internal development repository. Contains example and template project for Sentis package use."
xdesilets (Collaborator, author) commented on Nov 4, 2025: @gilescoope We want to do this one?

Collaborator: yes

labels:
costcenter: "5010"
tags: