@rohanshah18 rohanshah18 commented Oct 23, 2025

Problem

This PR introduces Bring Your Own Cloud (BYOC) index creation, dedicated read capacity for serverless indexes, metadata schema configuration, the ability to create namespaces explicitly, enhanced namespace listing with prefix filtering, fetch and update operations by metadata filter, and support for version 2025-10 of the Pinecone API. You can read more about versioning in Pinecone's API versioning documentation.

Solution

Bring Your Own Cloud (BYOC)

This release adds support for creating BYOC (Bring Your Own Cloud) indexes. BYOC indexes allow you to deploy Pinecone indexes in your own cloud infrastructure. You must have a BYOC environment set up with Pinecone before creating a BYOC index. The BYOC environment name is provided during BYOC onboarding.

The following methods were added for creating BYOC indexes:

  • createByocIndex(String indexName, String metric, int dimension, String environment) - Create a BYOC index with minimal required parameters
  • createByocIndex(String indexName, String metric, int dimension, String environment, String deletionProtection, Map<String, String> tags, BackupModelSchema schema) - Create a BYOC index with all options including deletion protection, tags, and metadata schema

The following example shows how to create BYOC indexes:

import io.pinecone.clients.Pinecone;
import org.openapitools.db_control.client.model.IndexModel;
import org.openapitools.db_control.client.model.BackupModelSchema;
import org.openapitools.db_control.client.model.BackupModelSchemaFieldsValue;
import java.util.HashMap;
import java.util.Map;

Pinecone pinecone = new Pinecone.Builder("PINECONE_API_KEY").build();

String indexName = "example-index";
String similarityMetric = "cosine";
int dimension = 1536;
String byocEnvironment = "your-byoc-environment";

// Create BYOC index with minimal parameters
IndexModel indexModel = pinecone.createByocIndex(indexName, similarityMetric, dimension, byocEnvironment);

// Create BYOC index with metadata schema
HashMap<String, String> tags = new HashMap<>();
tags.put("env", "production");

Map<String, BackupModelSchemaFieldsValue> fields = new HashMap<>();
fields.put("genre", new BackupModelSchemaFieldsValue().filterable(true));
fields.put("year", new BackupModelSchemaFieldsValue().filterable(true));
BackupModelSchema schema = new BackupModelSchema().fields(fields);

IndexModel indexModelWithSchema = pinecone.createByocIndex(
    "example-index-with-schema", similarityMetric, dimension, byocEnvironment, "enabled", tags, schema);

Dedicated Read Capacity

This release adds support for configuring dedicated read capacity nodes for serverless indexes, providing better performance and cost predictability.

The following methods were enhanced to support read capacity:

  • createServerlessIndex() - Added overloads that accept ReadCapacity parameter
  • createIndexForModel() - Added overloads that accept ReadCapacity parameter
  • configureServerlessIndex() - Enhanced to accept flattened parameters for configuring read capacity on existing indexes

The following example shows how to create a serverless index with dedicated read capacity:

import io.pinecone.clients.Pinecone;
import org.openapitools.db_control.client.model.IndexModel;
import org.openapitools.db_control.client.model.ReadCapacity;
import org.openapitools.db_control.client.model.ReadCapacityDedicatedSpec;
import org.openapitools.db_control.client.model.ReadCapacityDedicatedConfig;
import org.openapitools.db_control.client.model.ScalingConfigManual;
import java.util.HashMap;

Pinecone pinecone = new Pinecone.Builder("PINECONE_API_KEY").build();

String indexName = "example-index";
String similarityMetric = "cosine";
int dimension = 1536;
String cloud = "aws";
String region = "us-west-2";
HashMap<String, String> tags = new HashMap<>();
tags.put("env", "test");

// Configure dedicated read capacity with manual scaling
ScalingConfigManual manual = new ScalingConfigManual().shards(2).replicas(2);
ReadCapacityDedicatedConfig dedicated = new ReadCapacityDedicatedConfig()
    .nodeType("t1")
    .scaling("Manual")
    .manual(manual);
ReadCapacity readCapacity = new ReadCapacity(
    new ReadCapacityDedicatedSpec().mode("Dedicated").dedicated(dedicated));

// Create index with dedicated read capacity
IndexModel indexModel = pinecone.createServerlessIndex(indexName, similarityMetric, dimension, 
    cloud, region, "enabled", tags, readCapacity, null);

Configure Read Capacity on Existing Serverless Index

The configureServerlessIndex() method was enhanced to accept flattened parameters for easier configuration of read capacity on existing indexes. You can switch between OnDemand and Dedicated modes, or scale dedicated read nodes.

Note: Read capacity settings can only be updated once per hour per index.

The following method was added:

  • configureServerlessIndex(String indexName, String deletionProtection, Map<String, String> tags, ConfigureIndexRequestEmbed embed, String readCapacityMode, String nodeType, Integer shards, Integer replicas) - Configure read capacity on an existing serverless index

The following example shows how to configure read capacity on an existing serverless index:

import io.pinecone.clients.Pinecone;
import org.openapitools.db_control.client.model.IndexModel;
import java.util.HashMap;

Pinecone pinecone = new Pinecone.Builder("PINECONE_API_KEY").build();

String indexName = "example-index";
HashMap<String, String> tags = new HashMap<>();
tags.put("env", "test");

// Switch to Dedicated read capacity with manual scaling
// Parameters: indexName, deletionProtection, tags, embed, readCapacityMode, nodeType, shards, replicas
IndexModel indexModel = pinecone.configureServerlessIndex(
    indexName, "enabled", tags, null, "Dedicated", "t1", 3, 2);

// Switch to OnDemand read capacity
IndexModel onDemandIndex = pinecone.configureServerlessIndex(
    indexName, "enabled", tags, null, "OnDemand", null, null, null);

Metadata Schema Configuration

This release adds support for configuring metadata schema for serverless indexes, allowing you to limit metadata indexing to specific fields for improved performance.

The following methods were enhanced to support metadata schema:

  • createServerlessIndex() - Added overloads that accept BackupModelSchema parameter
  • createIndexForModel() - Added overloads that accept BackupModelSchema parameter
  • createByocIndex() - Added overloads that accept BackupModelSchema parameter

The following example shows how to create a serverless index with a metadata schema:

import io.pinecone.clients.Pinecone;
import org.openapitools.db_control.client.model.IndexModel;
import org.openapitools.db_control.client.model.BackupModelSchema;
import org.openapitools.db_control.client.model.BackupModelSchemaFieldsValue;
import java.util.HashMap;
import java.util.Map;

Pinecone pinecone = new Pinecone.Builder("PINECONE_API_KEY").build();

String indexName = "example-index";
String similarityMetric = "cosine";
int dimension = 1536;
String cloud = "aws";
String region = "us-west-2";
HashMap<String, String> tags = new HashMap<>();
tags.put("env", "test");

// Configure metadata schema to only index specific fields
Map<String, BackupModelSchemaFieldsValue> fields = new HashMap<>();
fields.put("genre", new BackupModelSchemaFieldsValue().filterable(true));
fields.put("year", new BackupModelSchemaFieldsValue().filterable(true));
BackupModelSchema schema = new BackupModelSchema().fields(fields);

// Create index with metadata schema
IndexModel indexModel = pinecone.createServerlessIndex(indexName, similarityMetric, dimension, 
    cloud, region, "enabled", tags, null, schema);
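
For orientation, the schema built above corresponds roughly to this JSON shape in the create-index request (field names are inferred from the model classes, so treat this as illustrative rather than the exact wire format):

```json
{
  "schema": {
    "fields": {
      "genre": { "filterable": true },
      "year": { "filterable": true }
    }
  }
}
```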

Create Namespaces

This release adds the ability to create namespaces explicitly within an index. Previously, namespaces were created implicitly when vectors were upserted. Now you can create namespaces ahead of time, optionally with a metadata schema to control which metadata fields are indexed for filtering.

The following methods were added for creating namespaces:

  • createNamespace(String name) - Create a namespace with the specified name
  • createNamespace(String name, MetadataSchema schema) - Create a namespace with a metadata schema

The following example shows how to create namespaces:

import io.pinecone.clients.Index;
import io.pinecone.clients.Pinecone;
import io.pinecone.proto.MetadataFieldProperties;
import io.pinecone.proto.MetadataSchema;
import io.pinecone.proto.NamespaceDescription;

String indexName = "PINECONE_INDEX_NAME";
Pinecone pinecone = new Pinecone.Builder("PINECONE_API_KEY").build();
Index index = pinecone.getIndexConnection(indexName);

// create a namespace
NamespaceDescription namespaceDescription = index.createNamespace("some-namespace");

// create a namespace with metadata schema
MetadataSchema schema = MetadataSchema.newBuilder()
    .putFields("genre", MetadataFieldProperties.newBuilder().setFilterable(true).build())
    .putFields("year", MetadataFieldProperties.newBuilder().setFilterable(true).build())
    .build();
NamespaceDescription namespaceWithSchema = index.createNamespace("some-namespace", schema);

Async Support

The createNamespace() method is also available for AsyncIndex:

import io.pinecone.clients.AsyncIndex;

AsyncIndex asyncIndex = pinecone.getAsyncIndexConnection(indexName);

// create a namespace with metadata schema
NamespaceDescription asyncNamespaceWithSchema = asyncIndex.createNamespace("some-namespace", schema).get();

Enhanced Namespace Listing

The listNamespaces() method has been enhanced with prefix filtering and total count support. You can now filter namespaces by prefix and get the total count of namespaces matching your filter criteria.

The following method signatures were added:

  • listNamespaces(String prefix, String paginationToken, int limit) - List namespaces with prefix filtering, pagination, and limit

The totalCount field in the response indicates the total number of namespaces in the index or, when a prefix is provided, the number of namespaces matching that prefix.

The following example shows the enhanced namespace listing with prefix filtering:

import io.pinecone.clients.Index;
import io.pinecone.clients.Pinecone;
import io.pinecone.proto.ListNamespacesResponse;

String indexName = "PINECONE_INDEX_NAME";
Pinecone pinecone = new Pinecone.Builder("PINECONE_API_KEY").build();
Index index = pinecone.getIndexConnection(indexName);

// list namespaces with prefix filtering (only names starting with "test-" are returned)
ListNamespacesResponse listNamespacesResponse = index.listNamespaces("test-", null, 10);
int totalCount = listNamespacesResponse.getTotalCount(); // Total count of namespaces matching "test-" prefix

// fetch the next page using the pagination token
if (listNamespacesResponse.hasPagination() && listNamespacesResponse.getPagination().getNext() != null) {
    ListNamespacesResponse nextPage = index.listNamespaces("test-", listNamespacesResponse.getPagination().getNext(), 10);
}
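
The token-following pattern above generalizes to a loop that drains every page. Below is a stdlib-only sketch with a stubbed page fetcher; the `Page` record and the `fetch` function are illustrative stand-ins for repeated `index.listNamespaces(prefix, token, limit)` calls, not part of the SDK:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public class PageDrain {
    // One page of results plus the token for the next page (null when done).
    record Page(List<String> names, String next) {}

    // Follows pagination tokens until exhausted, mirroring repeated
    // index.listNamespaces(prefix, token, limit) calls in the SDK.
    static List<String> drain(Function<String, Page> fetch) {
        List<String> all = new ArrayList<>();
        String token = null;
        do {
            Page page = fetch.apply(token);
            all.addAll(page.names());
            token = page.next();
        } while (token != null);
        return all;
    }

    public static void main(String[] args) {
        // Stubbed fetcher standing in for the SDK call: two pages of namespaces.
        Map<String, Page> pages = Map.of(
            "first", new Page(List.of("test-a", "test-b"), "tok1"),
            "tok1", new Page(List.of("test-c"), null));
        List<String> all = drain(token -> pages.get(token == null ? "first" : token));
        System.out.println(all);  // [test-a, test-b, test-c]
    }
}
```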

Async Support

The listNamespaces() method is also available for AsyncIndex:

import io.pinecone.clients.AsyncIndex;

AsyncIndex asyncIndex = pinecone.getAsyncIndexConnection(indexName);

// list namespaces with prefix filtering
ListNamespacesResponse asyncListNamespacesResponse = asyncIndex.listNamespaces("test-", null, 10).get();
int asyncTotalCount = asyncListNamespacesResponse.getTotalCount();

Fetch and Update by Metadata

This release adds fetchByMetadata and updateByMetadata methods for both Index and AsyncIndex clients, enabling fetching and updating vectors by metadata filters.

Fetch By Metadata

The fetchByMetadata() method allows you to fetch vectors matching a metadata filter with optional limit and pagination support.

The following methods were added:

  • fetchByMetadata(String namespace, Struct filter, Integer limit, String paginationToken) - Fetch vectors by metadata filter

The following example shows how to fetch vectors by metadata:

import io.pinecone.clients.Index;
import io.pinecone.clients.Pinecone;
import io.pinecone.proto.FetchByMetadataResponse;
import com.google.protobuf.Struct;
import com.google.protobuf.Value;

Pinecone pinecone = new Pinecone.Builder("PINECONE_API_KEY").build();
Index index = pinecone.getIndexConnection("example-index");

// Create a metadata filter
Struct filter = Struct.newBuilder()
    .putFields("genre", Value.newBuilder()
        .setStructValue(Struct.newBuilder()
            .putFields("$eq", Value.newBuilder()
                .setStringValue("action")
                .build()))
        .build())
    .build();

// Fetch vectors by metadata with limit
FetchByMetadataResponse response = index.fetchByMetadata("example-namespace", filter, 10, null);

// Access fetched vectors
response.getVectorsMap().forEach((id, vector) -> {
    System.out.println("Vector ID: " + id);
});
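
The Struct built above is simply the protobuf encoding of Pinecone's JSON filter syntax; the equivalent filter expressed as JSON is:

```json
{ "genre": { "$eq": "action" } }
```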

Update By Metadata

The updateByMetadata() method allows you to update vectors matching a metadata filter with new metadata. It supports dry run mode to preview how many records would be updated.

The following methods were added:

  • updateByMetadata(Struct filter, Struct metadata, String namespace, boolean dryRun) - Update vectors by metadata filter

The following example shows how to update vectors by metadata:

import io.pinecone.clients.Index;
import io.pinecone.clients.Pinecone;
import io.pinecone.proto.UpdateResponse;
import com.google.protobuf.Struct;
import com.google.protobuf.Value;

Pinecone pinecone = new Pinecone.Builder("PINECONE_API_KEY").build();
Index index = pinecone.getIndexConnection("example-index");

// Create a filter to match vectors
Struct filter = Struct.newBuilder()
    .putFields("genre", Value.newBuilder()
        .setStructValue(Struct.newBuilder()
            .putFields("$eq", Value.newBuilder()
                .setStringValue("action")
                .build()))
        .build())
    .build();

// Create new metadata to apply
Struct newMetadata = Struct.newBuilder()
    .putFields("updated", Value.newBuilder().setStringValue("true").build())
    .putFields("year", Value.newBuilder().setStringValue("2024").build())
    .build();

// Dry run to check how many records would be updated
UpdateResponse dryRunResponse = index.updateByMetadata(filter, newMetadata, "example-namespace", true);
System.out.println("Records that would be updated: " + dryRunResponse.getMatchedRecords());

// Actually perform the update
UpdateResponse updateResponse = index.updateByMetadata(filter, newMetadata, "example-namespace", false);
System.out.println("Records updated: " + updateResponse.getMatchedRecords());

Async Support

Both methods are available for AsyncIndex:

import io.pinecone.clients.AsyncIndex;
import com.google.common.util.concurrent.ListenableFuture;

AsyncIndex asyncIndex = pinecone.getAsyncIndexConnection("example-index");

// Fetch by metadata asynchronously
ListenableFuture<FetchByMetadataResponse> fetchFuture = 
    asyncIndex.fetchByMetadata("example-namespace", filter, 10, null);
FetchByMetadataResponse fetchResponse = fetchFuture.get();

// Update by metadata asynchronously
ListenableFuture<UpdateResponse> updateFuture = 
    asyncIndex.updateByMetadata(filter, newMetadata, "example-namespace", false);
UpdateResponse updateResponse = updateFuture.get();

Note: These operations are supported for serverless indexes.

Breaking Changes

This release includes several breaking changes due to API updates in version 2025-10. The following changes require code updates when migrating from v5.x to v6.0.0:

Pinecone.java

Changed deletionProtection parameter type

All methods now accept String instead of DeletionProtection enum:

  • createServerlessIndex() - now accepts String deletionProtection (e.g., "enabled" or "disabled")
  • createSparseServelessIndex() - now accepts String deletionProtection
  • createIndexForModel() - now accepts String deletionProtection and String cloud (was CloudEnum)
  • createPodsIndex() - all overloads now accept String deletionProtection
  • configurePodsIndex() - overloads now accept String deletionProtection
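
A minimal migration sketch, stdlib-only since the SDK calls need a live client (they are shown as comments); the `deletionProtection` helper below is a hypothetical convenience for call sites, not part of the SDK:

```java
public class DeletionProtectionMigration {
    // Hypothetical helper: maps a boolean to the v6.0.0 string values.
    static String deletionProtection(boolean enabled) {
        return enabled ? "enabled" : "disabled";
    }

    public static void main(String[] args) {
        // v5.x: pinecone.createServerlessIndex(name, metric, dim, cloud, region,
        //           DeletionProtection.ENABLED, tags);
        // v6.0.0: pinecone.createServerlessIndex(name, metric, dim, cloud, region,
        //           deletionProtection(true), tags, null, null);
        System.out.println(deletionProtection(true));   // prints "enabled"
        System.out.println(deletionProtection(false));  // prints "disabled"
    }
}
```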

Changed cloud parameter type

  • createIndexForModel() now accepts String cloud instead of CreateIndexForModelRequest.CloudEnum

Removed method overload

  • Removed one createPodsIndex() overload that took (String, Integer, String, String, String) parameters. The similarity metric now defaults to "cosine" when not specified.

Index.java and AsyncIndex.java

Changed errorMode parameter type

  • startImport() method now accepts String errorMode instead of ImportErrorMode.OnErrorEnum
  • Accepts "abort" or "continue" as string values

Model Classes (If Directly Used)

Split model classes

IndexModel, IndexSpec, and ConfigureIndexRequest have been split into type-specific variants:

  • IndexModelPodBased, IndexModelServerless, IndexModelBYOC
  • IndexSpecPodBased, IndexSpecServerless, IndexSpecBYOC
  • ConfigureIndexRequestPodBased, ConfigureIndexRequestServerless

Note: This only affects code that directly imports or casts these types. If you're only using the return values from high-level methods, this may not require changes.

Additional Enum to String Changes

The following enum types have also been replaced with string values throughout the SDK:

  • IndexModel.MetricEnum → String (e.g., "cosine", "euclidean", "dotproduct")
  • CollectionModel.StatusEnum → String (e.g., "ready")
  • IndexModelStatus.StateEnum → String (e.g., "ready")
  • ServerlessSpec.CloudEnum → String (e.g., "aws", "gcp", "azure")
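
Call sites migrate by swapping each enum constant for its lowercase string. A stdlib-only sketch that guards the known values at the call site (the value sets below come from the examples in this changelog and are not exhaustive):

```java
import java.util.Set;

public class EnumToStringMigration {
    static final Set<String> METRICS = Set.of("cosine", "euclidean", "dotproduct");
    static final Set<String> CLOUDS = Set.of("aws", "gcp", "azure");

    // Fails fast on typos now that the compiler no longer checks enum constants.
    static String requireOneOf(Set<String> allowed, String value) {
        if (!allowed.contains(value)) {
            throw new IllegalArgumentException("unexpected value: " + value);
        }
        return value;
    }

    public static void main(String[] args) {
        // v5.x: IndexModel.MetricEnum.COSINE, ServerlessSpec.CloudEnum.AWS
        // v6.0.0: plain strings
        System.out.println(requireOneOf(METRICS, "cosine"));
        System.out.println(requireOneOf(CLOUDS, "aws"));
    }
}
```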

Type of Change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update
  • Infrastructure change (CI configs, etc)
  • Non-code change (docs, etc)
  • None of the above: (explain here)

Test Plan

  • All unit tests passing
  • All integration tests passing (pod and serverless)
  • JSON parsing tests updated to handle unknown properties
  • Test fixtures updated to match new API responses


set -eu -o pipefail

# Simple script to add titles to nested objects in ConfigureIndexRequest schema
If I don't add titles, the OpenAPI generator will create ConfigureIndexRequestServerless.java and ConfigureIndexRequestServerlessServerless.java classes, and similarly for pod.

…figuration (#202)

## Problem

Add Support for Dedicated Read Capacity (DRN) and Metadata Schema
Configuration

## Solution

This PR adds support for Dedicated Read Capacity (DRN) and Metadata
Schema Configuration for serverless indexes. These features allow users
to:

1. **Configure dedicated read nodes** for better performance and cost
predictability
2. **Limit metadata indexing** to specific fields for improved
performance
3. **Configure read capacity** on existing serverless indexes

#### 1. Create Serverless Index with Read Capacity and Schema

Added overloaded `createServerlessIndex` method that accepts:
- `ReadCapacity` parameter for configuring OnDemand or Dedicated read
capacity
- `BackupModelSchema` parameter for configuring metadata schema

```java
// Create index with Dedicated read capacity
ScalingConfigManual manual = new ScalingConfigManual().shards(2).replicas(2);
ReadCapacityDedicatedConfig dedicated = new ReadCapacityDedicatedConfig()
    .nodeType("t1")
    .scaling("Manual")
    .manual(manual);
ReadCapacity readCapacity = new ReadCapacity(
    new ReadCapacityDedicatedSpec().mode("Dedicated").dedicated(dedicated));

IndexModel indexModel = pinecone.createServerlessIndex(
    indexName, "cosine", 1536, "aws", "us-west-2", 
    "enabled", tags, readCapacity, null);

// Create index with metadata schema
Map<String, BackupModelSchemaFieldsValue> fields = new HashMap<>();
fields.put("genre", new BackupModelSchemaFieldsValue().filterable(true));
fields.put("year", new BackupModelSchemaFieldsValue().filterable(true));
BackupModelSchema schema = new BackupModelSchema().fields(fields);

IndexModel indexModelWithSchema = pinecone.createServerlessIndex(
    indexName, "cosine", 1536, "aws", "us-west-2", 
    "enabled", tags, null, schema);
```

#### 2. Create Index for Model with Read Capacity and Schema

Added overloaded `createIndexForModel` method that accepts:
- `ReadCapacity` parameter
- `BackupModelSchema` parameter

#### 3. Configure Read Capacity on Existing Index

Enhanced `configureServerlessIndex` method to accept flattened
parameters for easier use:

```java
// Switch to Dedicated read capacity
IndexModel indexModel = pinecone.configureServerlessIndex(
    indexName, "enabled", tags, null, "Dedicated", "t1", 2, 2);

// Switch to OnDemand read capacity
IndexModel onDemandIndex = pinecone.configureServerlessIndex(
    indexName, "enabled", tags, null, "OnDemand", null, null, null);
```

**Note:** Read capacity settings can only be updated once per hour per
index.

### Documentation

- Updated `README.md` with examples for:
  - Creating serverless indexes with dedicated read capacity
  - Creating serverless indexes with OnDemand read capacity
  - Creating serverless indexes with metadata schema
  - Configuring read capacity on existing indexes

## Type of Change

- [ ] Bug fix (non-breaking change which fixes an issue)
- [X] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing
functionality to not work as expected)
- [ ] This change requires a documentation update
- [ ] Infrastructure change (CI configs, etc)
- [ ] Non-code change (docs, etc)
- [ ] None of the above: (explain here)

## Test Plan

- Added comprehensive integration tests in
`ReadCapacityAndSchemaTest.java`:
  - Create serverless index with OnDemand read capacity
  - Create serverless index with Dedicated read capacity
  - Create serverless index with metadata schema
  - Create serverless index with both read capacity and schema
  - Create index for model with read capacity and schema
  - Configure read capacity on existing index

- Added helper method `waitUntilReadCapacityIsReady()` in
`TestUtilities.java` to wait for read capacity status to be "Ready"
before configuring (required by API).

- Note: Tests for switching read capacity modes and scaling are omitted
due to API rate limits (once per hour per index).
## Problem

Add support for Bring Your Own Cloud (BYOC)

## Solution

This PR adds support for BYOC (Bring Your Own Cloud) index creation.
Added `createByocIndex` method with two overloads for creating BYOC
indexes:

**Minimal overload (4 parameters):**
```java
IndexModel indexModel = pinecone.createByocIndex(
    indexName, "cosine", 1536, "your-byoc-environment");
```
**Full overload with all options (7 parameters):**
```java
// Create BYOC index with metadata schema
Map<String, BackupModelSchemaFieldsValue> fields = new HashMap<>();
fields.put("genre", new BackupModelSchemaFieldsValue().filterable(true));
fields.put("year", new BackupModelSchemaFieldsValue().filterable(true));
BackupModelSchema schema = new BackupModelSchema().fields(fields);

Map<String, String> tags = new HashMap<>();
tags.put("env", "production");

IndexModel indexModel = pinecone.createByocIndex(
    indexName, "cosine", 1536, "aws-us-east-1-b921",
    "enabled", tags, schema);
```

**Note:** The BYOC environment name is provided during BYOC onboarding
with Pinecone. You must have a BYOC environment set up before creating
BYOC indexes.

## Type of Change

- [ ] Bug fix (non-breaking change which fixes an issue)
- [X] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing
functionality to not work as expected)
- [ ] This change requires a documentation update
- [ ] Infrastructure change (CI configs, etc)
- [ ] Non-code change (docs, etc)
- [ ] None of the above: (explain here)

## Test Plan

Manually tested on dev ci clusters
## Problem

Update core based on 2025-10 specs

## Solution

Generated and updated the core based on the 2025-10 OAS and protos. Note
that the code had already been generated from the 2025-10 specs, but
some fixes were made to the OAS, so I had to regenerate it.

## Type of Change

- [ ] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing
functionality to not work as expected)
- [ ] This change requires a documentation update
- [ ] Infrastructure change (CI configs, etc)
- [ ] Non-code change (docs, etc)
- [ ] None of the above: (explain here)

## Test Plan

No new functionalities are added in this pr so existing integration
tests should pass.

## Problem

Add support for `createNamespace()` endpoint, and add `prefix` +
`totalCount` to `listNamespaces()`.

## Solution

This PR adds the `createNamespace` method and enhances `listNamespaces`
with prefix filtering and total count support for both `Index` and
`AsyncIndex` clients.

**Create Namespace**: 
1. Basic Namespace Creation: Create a namespace explicitly by name:
```java
// Create a namespace explicitly
NamespaceDescription createdNamespace = index.createNamespace("my-namespace");
assertNotNull(createdNamespace);
assertEquals("my-namespace", createdNamespace.getName());

// Verify the namespace was created
ListNamespacesResponse response = index.listNamespaces();
assertTrue(response.getNamespacesList().stream()
    .anyMatch(ns -> ns.getName().equals("my-namespace")));
```

2. Metadata Schema Support: Create a namespace with a metadata schema to
define filterable fields:
```java
// Create a namespace with metadata schema
MetadataSchema schema = MetadataSchema.newBuilder()
    .putFields("genre", MetadataFieldProperties.newBuilder().setFilterable(true).build())
    .putFields("year", MetadataFieldProperties.newBuilder().setFilterable(true).build())
    .build();

NamespaceDescription namespaceWithSchema = index.createNamespace("movies-namespace", schema);
assertNotNull(namespaceWithSchema);
assertEquals("movies-namespace", namespaceWithSchema.getName());
```

**Prefix Filtering**: Filter namespaces by prefix to retrieve only
namespaces starting with a specific prefix:

```java
// List namespaces with prefix filtering
ListNamespacesResponse response = index.listNamespaces("test-", null, 100);
int totalCount = response.getTotalCount(); // Total namespaces matching "test-" prefix
List<NamespaceDescription> namespaces = response.getNamespacesList();

// Verify all returned namespaces match the prefix
for (NamespaceDescription ns : namespaces) {
    assert ns.getName().startsWith("test-");
}
```

**Total Count**: The response includes a `totalCount` field indicating
the total number of namespaces matching the prefix (useful for
pagination):

```java
// Get total count even when results are paginated
ListNamespacesResponse response = index.listNamespaces("prod-", null, 10);
int totalCount = response.getTotalCount(); // Total matching namespaces
int returnedCount = response.getNamespacesCount(); // Namespaces in this page (≤ 10)

// Use totalCount to determine if more pages are available
if (totalCount > returnedCount) {
    // More namespaces available via pagination
}
```

**Async Support**: Same functionality available for `AsyncIndex`:

```java
// Create namespace asynchronously
ListenableFuture<NamespaceDescription> future = asyncIndex.createNamespace("async-namespace");
NamespaceDescription createdNamespace = future.get();

// Create namespace with schema asynchronously
ListenableFuture<NamespaceDescription> futureWithSchema = 
    asyncIndex.createNamespace("async-movies", schema);
NamespaceDescription namespace = futureWithSchema.get();

// List namespaces with prefix filtering asynchronously
ListenableFuture<ListNamespacesResponse> listFuture = asyncIndex.listNamespaces("dev-", null, 50);
ListNamespacesResponse response = listFuture.get();
int totalCount = response.getTotalCount();
```

**Note**: 
- Null or empty prefix values are ignored (no filtering applied)
- The method signature is: `listNamespaces(String prefix, String
paginationToken, int limit)`
- Both synchronous (`Index`) and asynchronous (`AsyncIndex`)
implementations are included
- Fully backward compatible - existing `listNamespaces()` methods remain
unchanged


## Type of Change

- [ ] Bug fix (non-breaking change which fixes an issue)
- [X] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing
functionality to not work as expected)
- [ ] This change requires a documentation update
- [ ] Infrastructure change (CI configs, etc)
- [ ] Non-code change (docs, etc)
- [ ] None of the above: (explain here)

## Test Plan

Updated NamespacesTests.java to include:
- `createNamespace()` call
- `prefix` and `totalCount`
## Problem

Add fetch and update by metadata

## Solution

This PR adds the fetchByMetadata and updateByMetadata methods for both
Index and AsyncIndex clients, enabling fetching and updating vectors by
metadata filters.

**Fetch By Metadata**
Basic Fetch: Fetch vectors matching a metadata filter with optional
limit and pagination:
```java
import io.pinecone.proto.FetchByMetadataResponse;
import com.google.protobuf.Struct;
import com.google.protobuf.Value;

// Create a metadata filter
Struct filter = Struct.newBuilder()
    .putFields("genre", Value.newBuilder()
        .setStructValue(Struct.newBuilder()
            .putFields("$eq", Value.newBuilder()
                .setStringValue("action")
                .build()))
        .build())
    .build();

// Fetch vectors by metadata with limit
FetchByMetadataResponse response = index.fetchByMetadata("example-namespace", filter, 10, null);
assertNotNull(response);
assertTrue(response.getVectorsCount() > 0);

// Access fetched vectors
response.getVectorsMap().forEach((id, vector) -> {
    assertNotNull(vector);
    assertTrue(vector.hasMetadata());
});
```

Pagination Support: Use pagination tokens to fetch additional pages of
results:
```java
// Fetch first page
FetchByMetadataResponse firstPage = index.fetchByMetadata("example-namespace", filter, 2, null);
assertNotNull(firstPage);

// Fetch next page using pagination token
if (firstPage.hasPagination() && 
    firstPage.getPagination().getNext() != null && 
    !firstPage.getPagination().getNext().isEmpty()) {
    
    FetchByMetadataResponse secondPage = index.fetchByMetadata(
        "example-namespace", filter, 2, firstPage.getPagination().getNext());
    assertNotNull(secondPage);
}
```

**Update By Metadata**
Basic Update: Update vectors matching a metadata filter with new
metadata:
```java
import io.pinecone.proto.UpdateResponse;
import com.google.protobuf.Struct;

// Create a filter to match vectors
Struct filter = Struct.newBuilder()
    .putFields("genre", Value.newBuilder()
        .setStructValue(Struct.newBuilder()
            .putFields("$eq", Value.newBuilder()
                .setStringValue("action")
                .build()))
        .build())
    .build();

// Create new metadata to apply
Struct newMetadata = Struct.newBuilder()
    .putFields("updated", Value.newBuilder().setStringValue("true").build())
    .putFields("year", Value.newBuilder().setStringValue("2024").build())
    .build();

// Update vectors matching the filter
UpdateResponse response = index.updateByMetadata(filter, newMetadata, "example-namespace", false);
assertNotNull(response);
assertTrue(response.getMatchedRecords() > 0);
```

Dry Run Mode: Preview how many records would be updated without actually
applying changes:
```java
// Dry run to check how many records would be updated
UpdateResponse dryRunResponse = index.updateByMetadata(filter, newMetadata, "example-namespace", true);
assertNotNull(dryRunResponse);
int matchedRecords = dryRunResponse.getMatchedRecords();
assertTrue(matchedRecords > 0);

// Actually perform the update
UpdateResponse updateResponse = index.updateByMetadata(filter, newMetadata, "example-namespace", false);
assertNotNull(updateResponse);
```

**Async Support**
Both methods are available for AsyncIndex:
```java
import com.google.common.util.concurrent.ListenableFuture;

// Fetch by metadata asynchronously
ListenableFuture<FetchByMetadataResponse> fetchFuture = 
    asyncIndex.fetchByMetadata("example-namespace", filter, 10, null);
FetchByMetadataResponse fetchResponse = fetchFuture.get();
assertNotNull(fetchResponse);

// Update by metadata asynchronously
ListenableFuture<UpdateResponse> updateFuture = 
    asyncIndex.updateByMetadata(filter, newMetadata, "example-namespace", false);
UpdateResponse updateResponse = updateFuture.get();
assertNotNull(updateResponse);
assertTrue(updateResponse.getMatchedRecords() > 0);
```

Note: These operations are supported for serverless indexes.

## Type of Change

- [ ] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing
functionality to not work as expected)
- [ ] This change requires a documentation update
- [ ] Infrastructure change (CI configs, etc)
- [ ] Non-code change (docs, etc)
- [ ] None of the above: (explain here)

## Test Plan

Added integration tests.
@rohanshah18 changed the title Generate code based on 2025-10 api spec Generate code based on 2025-10 api spec and add features Nov 12, 2025
@rohanshah18 changed the title Generate code based on 2025-10 api spec and add features 2025-10 features Nov 12, 2025
@rohanshah18 marked this pull request as ready for review November 12, 2025 22:54
@rohanshah18 merged commit c6a8fb4 into main Nov 12, 2025
12 checks passed
@rohanshah18 deleted the rshah/release-candidate/2025-10 branch November 12, 2025 23:20