2 changes: 2 additions & 0 deletions zkml-research/kya_face/.gitignore
@@ -0,0 +1,2 @@
.venv/
workshop_*
40 changes: 40 additions & 0 deletions zkml-research/kya_face/README.md
@@ -0,0 +1,40 @@
# KYA Workshop — Local Setup Guide

## 1 Prerequisites (install once)

| Tool | Windows | macOS | Linux (Ubuntu) |
|------|---------|-------|----------------|
| **Python 3.10 or 3.11 (64-bit)** | [python.org](https://www.python.org) installer | `brew install python@3.11` | `sudo apt install python3.11 python3.11-venv` |
| **C++ build tools**<br>(needed for `dlib` → `face_recognition`) | *Visual Studio Build Tools 2022* → “Desktop development with C++” workload | `xcode-select --install` | `sudo apt install build-essential` |
| **CMake ≥ 3.22** | [cmake.org](https://cmake.org) installer | `brew install cmake` | `sudo apt install cmake` |
| **Leo CLI** | Follow steps at <https://github.com/ProvableHQ/leo> | Follow steps at <https://github.com/ProvableHQ/leo> | Follow steps at <https://github.com/ProvableHQ/leo> |
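
If you want to confirm the toolchain before building `dlib` (the step most likely to fail), a quick pre-flight check along these lines can help — a minimal sketch, assuming the `leo` binary follows the usual `--version` CLI convention:

```python
# Optional pre-flight check (a sketch, not part of the workshop files).
# Verifies the Python version and that cmake / leo are on PATH.
import shutil
import subprocess
import sys

assert sys.version_info[:2] in {(3, 10), (3, 11)}, "use Python 3.10 or 3.11"

for tool in ("cmake", "leo"):
    path = shutil.which(tool)
    if path is None:
        print(f"WARNING: {tool} not found on PATH")
    else:
        out = subprocess.run([tool, "--version"], capture_output=True, text=True)
        print(out.stdout.splitlines()[0] if out.stdout else path)
```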

---

## 2 Clone the repo & create a virtual env

```bash
git clone https://github.com/ProvableHQ/python-sdk.git
cd python-sdk/zkml-research/kya_face

# Create & activate a venv named .venv
python -m venv .venv
# Windows
.venv\Scripts\activate
# macOS/Linux
source .venv/bin/activate
```
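
To confirm the venv is actually active (a common stumbling block, especially on Windows), this quick check is a sketch you can run in the shell or the notebook:

```python
# Both paths should point inside .venv; if not, re-activate the venv.
import sys
print(sys.prefix)
print(sys.executable)
```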

---

## 3 Install Python dependencies

```bash
pip install -r requirements.txt
```
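
Because `dlib` is compiled during this step, a quick import check right afterwards is worthwhile. A sketch — `zkml` as the import name is an assumption based on the package name:

```python
# Sanity check: these imports exercise the compiled dlib extension behind
# face_recognition and confirm the rest of the stack resolved.
import face_recognition  # noqa: F401
import sklearn           # noqa: F401
import zkml              # noqa: F401

print("All imports OK")
```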

---

## 4 Open the Jupyter notebook

Open `kya.ipynb`, e.g., in VS Code, and select the `.venv` interpreter as the notebook kernel.
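
If your editor does not offer the `.venv` interpreter as a kernel, registering it explicitly usually fixes discovery. A sketch — the kernel name `kya-face` is an arbitrary choice:

```python
# Registers the active venv as a named Jupyter kernel for the notebook UI.
from ipykernel.kernelspec import install

install(user=True, kernel_name="kya-face", display_name="Python (kya_face)")
```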
4 changes: 4 additions & 0 deletions zkml-research/kya_face/face_images/positive_user/.gitignore
@@ -0,0 +1,4 @@
# Ignore everything in this directory
*
# Except this file
!.gitignore
112 changes: 112 additions & 0 deletions zkml-research/kya_face/helper.py
@@ -0,0 +1,112 @@
from sklearn.utils.multiclass import unique_labels
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
import matplotlib.pyplot as plt

def plot_mlp_architecture(clf, ax=None):
"""
Plot the architecture of a Multi-layer Perceptron (MLP) classifier or regressor.
"""
    import matplotlib.patches as mpatches
layer_sizes = [clf.coefs_[0].shape[0]] + [w.shape[1] for w in clf.coefs_]
if ax is None:
fig, ax = plt.subplots(figsize=(8, 4))
    # Draw nodes
for i, n in enumerate(layer_sizes):
x = i * 2
for j in range(n):
ax.add_patch(mpatches.Circle((x, j - n/2), 0.2, color='skyblue', ec='k'))
if i == 0:
ax.text(x, n/2 + 0.5, "Input\n({})".format(n), ha='center')
elif i == len(layer_sizes) - 1:
ax.text(x, n/2 + 0.5, "Output\n({})".format(n), ha='center')
else:
ax.text(x, n/2 + 0.5, "Hidden\n({})".format(n), ha='center')
# Draw connections
for i in range(len(layer_sizes) - 1):
x0, x1 = i * 2, (i + 1) * 2
for j in range(layer_sizes[i]):
for k in range(layer_sizes[i+1]):
ax.plot([x0, x1], [j - layer_sizes[i]/2, k - layer_sizes[i+1]/2], color='gray', lw=0.5, alpha=0.5)
    ax.set_aspect('equal')  # keep the node circles round
    ax.axis('off')
ax.set_title("MLP Architecture")
plt.show()

def summarize_mlp(clf):
"""
Print the architecture (layer sizes) and total number of parameters
for an sklearn MLPClassifier or MLPRegressor.

Parameters
----------
clf : object
A fitted sklearn.neural_network.MLPClassifier or MLPRegressor.
"""
    # Reconstruct full layer sizes (input, hidden..., output).
    # hidden_layer_sizes may be a plain int, so normalize it to a list first.
    hidden = clf.hidden_layer_sizes
    hidden = [hidden] if isinstance(hidden, int) else list(hidden)
    layer_sizes = [clf.coefs_[0].shape[0]] + hidden + [clf.coefs_[-1].shape[1]]

    # Print architecture
    print("Layer sizes (including input and output):", layer_sizes)
    print(f" ➔ Total layers (including input layer): {len(layer_sizes)}")
    print(f" ➔ Hidden layers: {len(hidden)}")
print(" ➔ Output layer: 1")

# Compute total parameters (weights + biases)
total_params = sum(
w.size + b.size for w, b in zip(clf.coefs_, clf.intercepts_)
)
print(f"Total parameters: {total_params:,}")



def plot_confusion_matrix(y_true, y_pred, *,
labels=None,
normalize=None,
ax=None,
title="Confusion matrix",
cmap="Blues"):
"""
Plot a confusion matrix for classification results.

Parameters
----------
y_true : array-like
Ground-truth labels.
y_pred : array-like
Predicted labels.
labels : list, optional
Class labels (order on both axes). If None, uses the union of labels
present in y_true and y_pred.
normalize : {'true', 'pred', 'all'}, default None
Normalization mode passed to sklearn.metrics.confusion_matrix.
ax : matplotlib.axes.Axes, optional
Existing axes to draw on. If None, a new figure/axes is created.
title : str, default "Confusion matrix"
Title shown above the plot.
cmap : str or matplotlib Colormap, default "Blues"
Colormap used for the heat-map.

Returns
-------
numpy.ndarray
The underlying confusion-matrix array (for further inspection if needed).
"""
if labels is None:
labels = unique_labels(y_true, y_pred)

cm = confusion_matrix(y_true, y_pred, labels=labels, normalize=normalize)

if ax is None:
_, ax = plt.subplots(figsize=(4, 4))

disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=labels)
disp.plot(ax=ax, cmap=cmap, colorbar=False)
ax.set_title(title)
plt.tight_layout()
    plt.show()
    return cm
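
For reviewers who want to exercise `helper.py` outside the notebook, the snippet below is a minimal, self-contained usage sketch on synthetic data — the dataset and MLP shape are arbitrary choices, not taken from `kya.ipynb`, and it assumes it runs next to `helper.py`:

```python
# Usage sketch for helper.py on a toy binary-classification problem.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

from helper import plot_confusion_matrix, plot_mlp_architecture, summarize_mlp

X, y = make_classification(n_samples=200, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

summarize_mlp(clf)          # layer sizes + parameter count
plot_mlp_architecture(clf)  # node/edge diagram of the network
cm = plot_confusion_matrix(y_test, clf.predict(X_test))
print(cm)
```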
2,198 changes: 2,198 additions & 0 deletions zkml-research/kya_face/kya.ipynb

Large diffs are not rendered by default.

14 changes: 14 additions & 0 deletions zkml-research/kya_face/requirements.txt
@@ -0,0 +1,14 @@
# -------------------------------------------------
# KYA Face ID demo – known-good versions (capped ranges; face_recognition pinned)
# -------------------------------------------------
numpy>=1.26,<2.0
pandas>=2.2,<3.0
matplotlib>=3.9,<4.0
scikit-learn>=1.5,<1.6
Pillow>=10.3,<11.0
face_recognition==1.3.0 # needs CMake & compiler
face_recognition_models>=0.3.0
zkml>=0.0.2b2
jupyterlab>=4.2,<5.0
ipykernel>=6.29,<7.0
pyobjc-framework-AVFoundation; sys_platform == "darwin"