diff --git a/docs/python_sdk.md b/docs/python_sdk.md
index 8013647575e..f43d60cd891 100644
--- a/docs/python_sdk.md
+++ b/docs/python_sdk.md
@@ -13,25 +13,46 @@ for the AIStore Python SDK.
> For our PyTorch integration, please refer to the [PyTorch Docs](https://github.com/NVIDIA/aistore/tree/main/docs/pytorch.md).
For more information, please refer to [AIS Python SDK](https://pypi.org/project/aistore) available via Python Package Index (PyPI)
or see [https://github.com/NVIDIA/aistore/tree/main/python/aistore](https://github.com/NVIDIA/aistore/tree/main/python/aistore).
-* [client](#client)
- * [Client](#client.Client)
- * [bucket](#client.Client.bucket)
- * [cluster](#client.Client.cluster)
- * [job](#client.Client.job)
- * [etl](#client.Client.etl)
- * [dsort](#client.Client.dsort)
- * [fetch\_object\_by\_url](#client.Client.fetch_object_by_url)
-* [cluster](#cluster)
- * [Cluster](#cluster.Cluster)
- * [client](#cluster.Cluster.client)
- * [get\_info](#cluster.Cluster.get_info)
- * [get\_primary\_url](#cluster.Cluster.get_primary_url)
- * [list\_buckets](#cluster.Cluster.list_buckets)
- * [list\_jobs\_status](#cluster.Cluster.list_jobs_status)
- * [list\_running\_jobs](#cluster.Cluster.list_running_jobs)
- * [list\_running\_etls](#cluster.Cluster.list_running_etls)
- * [is\_ready](#cluster.Cluster.is_ready)
- * [get\_performance](#cluster.Cluster.get_performance)
+* [authn.authn\_client](#authn.authn_client)
+ * [AuthNClient](#authn.authn_client.AuthNClient)
+ * [client](#authn.authn_client.AuthNClient.client)
+ * [login](#authn.authn_client.AuthNClient.login)
+ * [logout](#authn.authn_client.AuthNClient.logout)
+ * [cluster\_manager](#authn.authn_client.AuthNClient.cluster_manager)
+ * [role\_manager](#authn.authn_client.AuthNClient.role_manager)
+ * [user\_manager](#authn.authn_client.AuthNClient.user_manager)
+ * [token\_manager](#authn.authn_client.AuthNClient.token_manager)
+* [authn.cluster\_manager](#authn.cluster_manager)
+ * [ClusterManager](#authn.cluster_manager.ClusterManager)
+ * [client](#authn.cluster_manager.ClusterManager.client)
+ * [list](#authn.cluster_manager.ClusterManager.list)
+ * [get](#authn.cluster_manager.ClusterManager.get)
+ * [register](#authn.cluster_manager.ClusterManager.register)
+ * [update](#authn.cluster_manager.ClusterManager.update)
+ * [delete](#authn.cluster_manager.ClusterManager.delete)
+* [authn.role\_manager](#authn.role_manager)
+ * [RoleManager](#authn.role_manager.RoleManager)
+ * [client](#authn.role_manager.RoleManager.client)
+ * [list](#authn.role_manager.RoleManager.list)
+ * [get](#authn.role_manager.RoleManager.get)
+ * [create](#authn.role_manager.RoleManager.create)
+ * [update](#authn.role_manager.RoleManager.update)
+ * [delete](#authn.role_manager.RoleManager.delete)
+* [authn.token\_manager](#authn.token_manager)
+ * [TokenManager](#authn.token_manager.TokenManager)
+ * [client](#authn.token_manager.TokenManager.client)
+ * [revoke](#authn.token_manager.TokenManager.revoke)
+* [authn.user\_manager](#authn.user_manager)
+ * [UserManager](#authn.user_manager.UserManager)
+ * [client](#authn.user_manager.UserManager.client)
+ * [get](#authn.user_manager.UserManager.get)
+ * [delete](#authn.user_manager.UserManager.delete)
+ * [create](#authn.user_manager.UserManager.create)
+ * [list](#authn.user_manager.UserManager.list)
+ * [update](#authn.user_manager.UserManager.update)
+* [authn.access\_attr](#authn.access_attr)
+ * [AccessAttr](#authn.access_attr.AccessAttr)
+ * [describe](#authn.access_attr.AccessAttr.describe)
* [bucket](#bucket)
* [Bucket](#bucket.Bucket)
* [client](#bucket.Bucket.client)
@@ -62,22 +83,46 @@ or see [https://github.com/NVIDIA/aistore/tree/main/python/aistore](https://gith
* [get\_path](#bucket.Bucket.get_path)
* [as\_model](#bucket.Bucket.as_model)
* [write\_dataset](#bucket.Bucket.write_dataset)
-* [object](#object)
- * [Object](#object.Object)
- * [bucket](#object.Object.bucket)
- * [name](#object.Object.name)
- * [props](#object.Object.props)
- * [head](#object.Object.head)
- * [get](#object.Object.get)
- * [get\_semantic\_url](#object.Object.get_semantic_url)
- * [get\_url](#object.Object.get_url)
- * [put\_content](#object.Object.put_content)
- * [put\_file](#object.Object.put_file)
- * [promote](#object.Object.promote)
- * [delete](#object.Object.delete)
- * [blob\_download](#object.Object.blob_download)
- * [append\_content](#object.Object.append_content)
- * [set\_custom\_props](#object.Object.set_custom_props)
+* [client](#client)
+ * [Client](#client.Client)
+ * [bucket](#client.Client.bucket)
+ * [cluster](#client.Client.cluster)
+ * [job](#client.Client.job)
+ * [etl](#client.Client.etl)
+ * [dsort](#client.Client.dsort)
+ * [fetch\_object\_by\_url](#client.Client.fetch_object_by_url)
+* [cluster](#cluster)
+ * [Cluster](#cluster.Cluster)
+ * [client](#cluster.Cluster.client)
+ * [get\_info](#cluster.Cluster.get_info)
+ * [get\_primary\_url](#cluster.Cluster.get_primary_url)
+ * [list\_buckets](#cluster.Cluster.list_buckets)
+ * [list\_jobs\_status](#cluster.Cluster.list_jobs_status)
+ * [list\_running\_jobs](#cluster.Cluster.list_running_jobs)
+ * [list\_running\_etls](#cluster.Cluster.list_running_etls)
+ * [is\_ready](#cluster.Cluster.is_ready)
+ * [get\_performance](#cluster.Cluster.get_performance)
+ * [get\_uuid](#cluster.Cluster.get_uuid)
+* [etl](#etl)
+ * [Etl](#etl.Etl)
+ * [name](#etl.Etl.name)
+ * [init\_spec](#etl.Etl.init_spec)
+ * [init\_code](#etl.Etl.init_code)
+ * [view](#etl.Etl.view)
+ * [start](#etl.Etl.start)
+ * [stop](#etl.Etl.stop)
+ * [delete](#etl.Etl.delete)
+ * [validate\_etl\_name](#etl.Etl.validate_etl_name)
+* [job](#job)
+ * [Job](#job.Job)
+ * [job\_id](#job.Job.job_id)
+ * [job\_kind](#job.Job.job_kind)
+ * [status](#job.Job.status)
+ * [wait](#job.Job.wait)
+ * [wait\_for\_idle](#job.Job.wait_for_idle)
+ * [wait\_single\_node](#job.Job.wait_single_node)
+ * [start](#job.Job.start)
+ * [get\_within\_timeframe](#job.Job.get_within_timeframe)
* [multiobj.object\_group](#multiobj.object_group)
* [ObjectGroup](#multiobj.object_group.ObjectGroup)
* [client](#multiobj.object_group.ObjectGroup.client)
@@ -98,354 +143,687 @@ or see [https://github.com/NVIDIA/aistore/tree/main/python/aistore](https://gith
* [from\_string](#multiobj.object_range.ObjectRange.from_string)
* [multiobj.object\_template](#multiobj.object_template)
* [ObjectTemplate](#multiobj.object_template.ObjectTemplate)
-* [job](#job)
- * [Job](#job.Job)
- * [job\_id](#job.Job.job_id)
- * [job\_kind](#job.Job.job_kind)
- * [status](#job.Job.status)
- * [wait](#job.Job.wait)
- * [wait\_for\_idle](#job.Job.wait_for_idle)
- * [wait\_single\_node](#job.Job.wait_single_node)
- * [start](#job.Job.start)
- * [get\_within\_timeframe](#job.Job.get_within_timeframe)
+* [object](#object)
+ * [Object](#object.Object)
+ * [bucket](#object.Object.bucket)
+ * [name](#object.Object.name)
+ * [props](#object.Object.props)
+ * [head](#object.Object.head)
+ * [get](#object.Object.get)
+ * [get\_semantic\_url](#object.Object.get_semantic_url)
+ * [get\_url](#object.Object.get_url)
+ * [put\_content](#object.Object.put_content)
+ * [put\_file](#object.Object.put_file)
+ * [promote](#object.Object.promote)
+ * [delete](#object.Object.delete)
+ * [blob\_download](#object.Object.blob_download)
+ * [append\_content](#object.Object.append_content)
+ * [set\_custom\_props](#object.Object.set_custom_props)
+* [object\_iterator](#object_iterator)
+ * [ObjectIterator](#object_iterator.ObjectIterator)
* [object\_reader](#object_reader)
* [ObjectReader](#object_reader.ObjectReader)
* [attributes](#object_reader.ObjectReader.attributes)
* [read\_all](#object_reader.ObjectReader.read_all)
* [raw](#object_reader.ObjectReader.raw)
* [\_\_iter\_\_](#object_reader.ObjectReader.__iter__)
-* [object\_iterator](#object_iterator)
- * [ObjectIterator](#object_iterator.ObjectIterator)
-* [etl](#etl)
- * [Etl](#etl.Etl)
- * [name](#etl.Etl.name)
- * [init\_spec](#etl.Etl.init_spec)
- * [init\_code](#etl.Etl.init_code)
- * [view](#etl.Etl.view)
- * [start](#etl.Etl.start)
- * [stop](#etl.Etl.stop)
- * [delete](#etl.Etl.delete)
- * [validate\_etl\_name](#etl.Etl.validate_etl_name)
-
+
-## Class: Client
+## Class: AuthNClient
```python
-class Client()
+class AuthNClient()
```
-AIStore client for managing buckets, objects, ETL jobs
+AuthN client for managing authentication.
+
+This client provides methods to interact with AuthN Server.
+For more info on AuthN Server, see https://github.com/NVIDIA/aistore/blob/main/docs/authn.md
**Arguments**:
-- `endpoint` _str_ - AIStore endpoint
+- `endpoint` _str_ - AuthN service endpoint URL.
+- `skip_verify` _bool, optional_ - If True, skip SSL certificate verification. Defaults to False.
+- `ca_cert` _str, optional_ - Path to a CA certificate file for SSL verification.
+- `timeout` _Union[float, Tuple[float, float], None], optional_ - Request timeout in seconds; a single float
+ for both connect/read timeouts (e.g., 5.0), a tuple for separate connect/read timeouts (e.g., (3.0, 10.0)),
+ or None to disable timeout.
+- `retry` _urllib3.Retry, optional_ - Retry configuration object from the urllib3 library.
+- `token` _str, optional_ - Authorization token.
-
+
-### bucket
+### client
```python
-def bucket(bck_name: str,
- provider: str = PROVIDER_AIS,
- namespace: Namespace = None)
+@property
+def client() -> RequestClient
```
-Factory constructor for bucket object.
-Does not make any HTTP request, only instantiates a bucket object.
+Get the request client.
+
+**Returns**:
+
+- `RequestClient` - The client this AuthN client uses to make requests.
+
+
+
+### login
+
+```python
+def login(username: str,
+ password: str,
+ expires_in: Optional[Union[int, float]] = None) -> str
+```
+
+Logs in to the AuthN Server and returns an authorization token.
**Arguments**:
-- `bck_name` _str_ - Name of bucket
-- `provider` _str_ - Provider of bucket, one of "ais", "aws", "gcp", ... (optional, defaults to ais)
-- `namespace` _Namespace_ - Namespace of bucket (optional, defaults to None)
+- `username` _str_ - The username to log in with.
+- `password` _str_ - The password to log in with.
+- `expires_in` _Optional[Union[int, float]]_ - The expiration duration of the token in seconds.
**Returns**:
- The bucket object created.
+- `str` - An authorization token to use for future requests.
+
-
+**Raises**:
-### cluster
+- `ValueError` - If the password is empty or consists only of spaces.
+- `AISError` - If the login request fails.
+
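+**Example** (a minimal sketch; the endpoint and credentials are hypothetical placeholders, and the import path is assumed):
+
+```python
+from aistore.sdk.authn import AuthNClient
+
+# Assumes an AuthN server listening at this endpoint.
+authn = AuthNClient("http://localhost:52001")
+
+# Obtain a token valid for one hour; pass it to the AIStore Client as `token`.
+token = authn.login("admin", "admin", expires_in=3600)
+```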
+
+
+### logout
```python
-def cluster()
+def logout() -> None
```
-Factory constructor for cluster object.
-Does not make any HTTP request, only instantiates a cluster object.
+Logs out and revokes current token from the AuthN Server.
-**Returns**:
+**Raises**:
- The cluster object created.
+- `AISError` - If the logout request fails.
-
+
-### job
+### cluster\_manager
```python
-def job(job_id: str = "", job_kind: str = "")
+def cluster_manager() -> ClusterManager
```
-Factory constructor for job object, which contains job-related functions.
-Does not make any HTTP request, only instantiates a job object.
-
-**Arguments**:
-
-- `job_id` _str, optional_ - Optional ID for interacting with a specific job
-- `job_kind` _str, optional_ - Optional specific type of job empty for all kinds
-
+Factory method to create a ClusterManager instance.
**Returns**:
- The job object created.
+- `ClusterManager` - An instance to manage cluster operations.
-
+
-### etl
+### role\_manager
```python
-def etl(etl_name: str)
+def role_manager() -> RoleManager
```
-Factory constructor for ETL object.
-Contains APIs related to AIStore ETL operations.
-Does not make any HTTP request, only instantiates an ETL object.
-
-**Arguments**:
-
-- `etl_name` _str_ - Name of the ETL
-
+Factory method to create a RoleManager instance.
**Returns**:
- The ETL object created.
+- `RoleManager` - An instance to manage role operations.
-
+
-### dsort
+### user\_manager
```python
-def dsort(dsort_id: str = "")
+def user_manager() -> UserManager
```
-Factory constructor for dSort object.
-Contains APIs related to AIStore dSort operations.
-Does not make any HTTP request, only instantiates a dSort object.
-
-**Arguments**:
-
-- `dsort_id` - ID of the dSort job
-
+Factory method to create a UserManager instance.
**Returns**:
- dSort object created
+- `UserManager` - An instance to manage user operations.
-
+
-### fetch\_object\_by\_url
+### token\_manager
```python
-def fetch_object_by_url(url: str) -> Object
+def token_manager() -> TokenManager
```
-Retrieve an object based on its URL.
-
-**Arguments**:
-
-- `url` _str_ - Full URL of the object (e.g., "ais://bucket1/file.txt")
-
+Factory method to create a TokenManager instance.
**Returns**:
-- `Object` - The object retrieved from the specified URL
+- `TokenManager` - An instance to manage token operations.
-
+
-## Class: Cluster
+## Class: ClusterManager
```python
-class Cluster()
+class ClusterManager()
```
-A class representing a cluster bound to an AIS client.
+ClusterManager class for handling operations on clusters within the context of authentication.
-
+This class provides methods to list, get, register, update, and delete clusters on AuthN server.
+
+**Arguments**:
+
+- `client` _RequestClient_ - The request client to make HTTP requests.
+
+
### client
```python
@property
-def client()
+def client() -> RequestClient
```
-Client this cluster uses to make requests
+RequestClient: The client this cluster manager uses to make requests.
-
+
-### get\_info
+### list
```python
-def get_info() -> Smap
+def list() -> ClusterList
```
-Returns state of AIS cluster, including the detailed information about its nodes.
+Retrieve all clusters.
**Returns**:
-- `aistore.sdk.types.Smap` - Smap containing cluster information
+- `ClusterList` - A list of all clusters.
**Raises**:
-- `requests.RequestException` - "There was an ambiguous exception that occurred while handling..."
-- `requests.ConnectionError` - Connection error
-- `requests.ConnectionTimeout` - Timed out connecting to AIStore
-- `requests.ReadTimeout` - Timed out waiting response from AIStore
+- `AISError` - If an error occurs while listing clusters.
-
+
-### get\_primary\_url
+### get
```python
-def get_primary_url() -> str
+def get(cluster_id: Optional[str] = None,
+ cluster_alias: Optional[str] = None) -> ClusterInfo
```
-Returns: URL of primary proxy
+Retrieve a specific cluster by ID or alias.
-
+**Arguments**:
-### list\_buckets
+- `cluster_id` _Optional[str]_ - The ID of the cluster. Defaults to None.
+- `cluster_alias` _Optional[str]_ - The alias of the cluster. Defaults to None.
+
+
+**Returns**:
+
+- `ClusterInfo` - Information about the specified cluster.
+
+
+**Raises**:
+
+- `ValueError` - If neither cluster_id nor cluster_alias is provided.
+- `RuntimeError` - If no cluster matches the provided ID or alias.
+- `AISError` - If an error occurs while getting the cluster.
+
+
+
+### register
```python
-def list_buckets(provider: str = PROVIDER_AIS)
+def register(cluster_alias: str, urls: List[str]) -> ClusterInfo
```
-Returns list of buckets in AIStore cluster.
+Register a new cluster.
**Arguments**:
-- `provider` _str, optional_ - Name of bucket provider, one of "ais", "aws", "gcp", "az" or "ht".
- Defaults to "ais". Empty provider returns buckets of all providers.
+- `cluster_alias` _str_ - The alias for the new cluster.
+- `urls` _List[str]_ - A list of URLs for the new cluster.
**Returns**:
-- `List[BucketModel]` - A list of buckets
+- `ClusterInfo` - Information about the registered cluster.
**Raises**:
-- `requests.RequestException` - "There was an ambiguous exception that occurred while handling..."
-- `requests.ConnectionError` - Connection error
-- `requests.ConnectionTimeout` - Timed out connecting to AIStore
-- `requests.ReadTimeout` - Timed out waiting response from AIStore
+- `ValueError` - If no URLs are provided or an invalid URL is provided.
+- `AISError` - If an error occurs while registering the cluster.
-
+
-### list\_jobs\_status
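+**Example** (a sketch; the alias and URL are illustrative, and `authn` is an AuthNClient instance as in the example above):
+
+```python
+cluster_manager = authn.cluster_manager()
+
+# Register an AIStore cluster with the AuthN server under an alias.
+info = cluster_manager.register("my-cluster", ["http://localhost:8080"])
+```
+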
+### update
```python
-def list_jobs_status(job_kind="", target_id="") -> List[JobStatus]
+def update(cluster_id: str,
+ cluster_alias: Optional[str] = None,
+ urls: Optional[List[str]] = None) -> ClusterInfo
```
-List the status of jobs on the cluster
+Update an existing cluster.
**Arguments**:
-- `job_kind` _str, optional_ - Only show jobs of a particular type
-- `target_id` _str, optional_ - Limit to jobs on a specific target node
+- `cluster_id` _str_ - The ID of the cluster to update.
+- `cluster_alias` _Optional[str]_ - The new alias for the cluster. Defaults to None.
+- `urls` _Optional[List[str]]_ - The new list of URLs for the cluster. Defaults to None.
**Returns**:
- List of JobStatus objects
+- `ClusterInfo` - Information about the updated cluster.
+
-
+**Raises**:
-### list\_running\_jobs
+- `ValueError` - If neither cluster_alias nor urls are provided.
+- `AISError` - If an error occurs while updating the cluster.
+
+
+
+### delete
```python
-def list_running_jobs(job_kind="", target_id="") -> List[str]
+def delete(cluster_id: Optional[str] = None,
+ cluster_alias: Optional[str] = None)
```
-List the currently running jobs on the cluster
+Delete a specific cluster by ID or alias.
**Arguments**:
-- `job_kind` _str, optional_ - Only show jobs of a particular type
-- `target_id` _str, optional_ - Limit to jobs on a specific target node
+- `cluster_id` _Optional[str]_ - The ID of the cluster to delete. Defaults to None.
+- `cluster_alias` _Optional[str]_ - The alias of the cluster to delete. Defaults to None.
+**Raises**:
+
+- `ValueError` - If neither cluster_id nor cluster_alias is provided.
+- `AISError` - If an error occurs while deleting the cluster.
+
+
+
+## Class: RoleManager
+
+```python
+class RoleManager()
+```
+
+Manages role-related operations.
+
+This class provides methods to interact with roles, including
+retrieving, creating, updating, and deleting role information.
+
+**Arguments**:
+
+- `client` _RequestClient_ - The RequestClient used to make HTTP requests.
+
+
+
+### client
+
+```python
+@property
+def client() -> RequestClient
+```
+
+Returns the RequestClient instance used by this RoleManager.
+
+
+
+### list
+
+```python
+def list() -> RolesList
+```
+
+Retrieves information about all roles.
+
**Returns**:
- List of jobs in the format job_kind[job_id]
+- `RolesList` - A list containing information about all roles.
+
-
+**Raises**:
-### list\_running\_etls
+- `aistore.sdk.errors.AISError` - All other types of errors with AIStore.
+- `requests.RequestException` - If the HTTP request fails.
+
+
+
+### get
```python
-def list_running_etls() -> List[ETLInfo]
+def get(role_name: str) -> RoleInfo
```
-Lists all running ETLs.
+Retrieves information about a specific role.
-Note: Does not list ETLs that have been stopped or deleted.
+**Arguments**:
+
+- `role_name` _str_ - The name of the role to retrieve.
+
**Returns**:
-- `List[ETLInfo]` - A list of details on running ETLs
+- `RoleInfo` - Information about the specified role.
+
-
+**Raises**:
-### is\_ready
+- `aistore.sdk.errors.AISError` - All other types of errors with AIStore.
+- `requests.RequestException` - If the HTTP request fails.
+
+
+
+### create
```python
-def is_ready() -> bool
+def create(name: str,
+ desc: str,
+ cluster_alias: str,
+ perms: List[AccessAttr],
+ bucket_name: str = None) -> RoleInfo
```
-Checks if cluster is ready or still setting up.
+Creates a new role.
+
+**Arguments**:
+
+- `name` _str_ - The name of the role.
+- `desc` _str_ - A description of the role.
+- `cluster_alias` _str_ - The alias of the cluster this role will have access to.
+- `perms` _List[AccessAttr]_ - A list of permissions to be granted for this role.
+- `bucket_name` _str, optional_ - The name of the bucket this role will have access to.
+
**Returns**:
-- `bool` - True if cluster is ready, or false if cluster is still setting up
+- `RoleInfo` - Information about the newly created role.
+
-
+**Raises**:
-### get\_performance
+- `aistore.sdk.errors.AISError` - All other types of errors with AIStore.
+- `requests.RequestException` - If the HTTP request fails.
+
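+**Example** (a sketch; the role values are illustrative and the `AccessAttr.GET` member name is an assumption based on the Go API):
+
+```python
+role = authn.role_manager().create(
+    name="bucket-readers",
+    desc="Read-only access to a single bucket",
+    cluster_alias="my-cluster",
+    perms=[AccessAttr.GET],
+    bucket_name="bucket1",
+)
+```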
+
+
+### update
```python
-def get_performance(get_throughput: bool = True,
- get_latency: bool = True,
- get_counters: bool = True) -> ClusterPerformance
+def update(name: str,
+ desc: str = None,
+ cluster_alias: str = None,
+ perms: List[AccessAttr] = None,
+ bucket_name: str = None) -> RoleInfo
```
-Retrieves and calculates the performance metrics for each target node in the AIStore cluster.
-It compiles throughput, latency, and various operational counters from each target node,
-providing a comprehensive view of the cluster's overall performance
+Updates an existing role.
**Arguments**:
-- `get_throughput` _bool, optional_ - get cluster throughput
-- `get_latency` _bool, optional_ - get cluster latency
-- `get_counters` _bool, optional_ - get cluster counters
+- `name` _str_ - The name of the role.
+- `desc` _str, optional_ - An updated description of the role.
+- `cluster_alias` _str, optional_ - The alias of the cluster this role will have access to.
+- `perms` _List[AccessAttr], optional_ - A list of updated permissions to be granted for this role.
+- `bucket_name` _str, optional_ - The name of the bucket this role will have access to.
+
+
+**Raises**:
+
+- `aistore.sdk.errors.AISError` - All other types of errors with AIStore.
+- `requests.RequestException` - If the HTTP request fails.
+- `ValueError` - If the role does not exist or if invalid parameters are provided.
+
+
+
+### delete
+
+```python
+def delete(name: str, missing_ok: bool = False) -> None
+```
+
+Deletes a role.
+
+**Arguments**:
+
+- `name` _str_ - The name of the role to delete.
+- `missing_ok` _bool_ - Ignore error if role does not exist. Defaults to False.
+
+
+**Raises**:
+
+- `aistore.sdk.errors.AISError` - All other types of errors with AIStore.
+- `requests.RequestException` - If the HTTP request fails.
+- `ValueError` - If the role does not exist.
+
+
+
+## Class: TokenManager
+
+```python
+class TokenManager()
+```
+
+Manages token-related operations.
+
+This class provides methods to interact with tokens in the AuthN server.
+
+**Arguments**:
+
+- `client` _RequestClient_ - The RequestClient used to make HTTP requests.
+
+
+
+### client
+
+```python
+@property
+def client() -> RequestClient
+```
+
+Returns the RequestClient instance used by this TokenManager.
+
+
+
+### revoke
+
+```python
+def revoke(token: str) -> None
+```
+
+Revokes the specified authentication token.
+
+**Arguments**:
+
+- `token` _str_ - The token to be revoked.
+
+
+**Raises**:
+
+- `ValueError` - If the token is not provided.
+- `AISError` - If the revoke token request fails.
+
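+**Example** (a sketch; `token` is a token string previously returned by `login`):
+
+```python
+# Invalidate the token so it can no longer authorize requests.
+authn.token_manager().revoke(token)
+```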
+
+
+## Class: UserManager
+
+```python
+class UserManager()
+```
+
+UserManager provides methods to manage users in the AuthN service.
+
+**Arguments**:
+
+- `client` _RequestClient_ - The RequestClient used to make HTTP requests.
+
+
+
+### client
+
+```python
+@property
+def client() -> RequestClient
+```
+
+Returns the RequestClient instance used by this UserManager.
+
+
+
+### get
+
+```python
+def get(username: str) -> UserInfo
+```
+
+Retrieve user information from the AuthN Server.
+
+**Arguments**:
+
+- `username` _str_ - The username to retrieve.
**Returns**:
-- `ClusterPerformance` - An object encapsulating the detailed performance metrics of the cluster,
- including throughput, latency, and counters for each node
+- `UserInfo` - The user's information.
**Raises**:
-- `requests.RequestException` - If there's an ambiguous exception while processing the request
-- `requests.ConnectionError` - If there's a connection error with the cluster
-- `requests.ConnectionTimeout` - If the connection to the cluster times out
-- `requests.ReadTimeout` - If the timeout is reached while awaiting a response from the cluster
+- `AISError` - If the user retrieval request fails.
+
+
+
+### delete
+
+```python
+def delete(username: str, missing_ok: bool = False) -> None
+```
+
+Delete an existing user from the AuthN Server.
+
+**Arguments**:
+
+- `username` _str_ - The username of the user to delete.
+- `missing_ok` _bool_ - Ignore error if user does not exist. Defaults to False.
+
+
+**Raises**:
+
+- `AISError` - If the user deletion request fails.
+
+
+
+### create
+
+```python
+def create(username: str, roles: List[str], password: str) -> UserInfo
+```
+
+Create a new user in the AuthN Server.
+
+**Arguments**:
+
+- `username` _str_ - The name or ID of the user to create.
+- `password` _str_ - The password for the user.
+- `roles` _List[str]_ - The list of names of roles to assign to the user.
+
+
+**Returns**:
+
+- `UserInfo` - The created user's information.
+
+
+**Raises**:
+
+- `AISError` - If the user creation request fails.
+
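+**Example** (a sketch with illustrative values; the role is assumed to already exist):
+
+```python
+user = authn.user_manager().create(
+    username="alice",
+    roles=["bucket-readers"],
+    password="s3cret",
+)
+```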
+
+
+### list
+
+```python
+def list()
+```
+
+List all users in the AuthN Server.
+
+**Returns**:
+
+ A list of all users in the AuthN Server.
+
+
+**Raises**:
+
+- `AISError` - If the user list request fails.
+
+
+
+### update
+
+```python
+def update(username: str,
+ password: Optional[str] = None,
+ roles: Optional[List[str]] = None) -> UserInfo
+```
+
+Update an existing user's information in the AuthN Server.
+
+**Arguments**:
+
+- `username` _str_ - The ID of the user to update.
+- `password` _str, optional_ - The new password for the user.
+- `roles` _List[str], optional_ - The list of names of roles to assign to the user.
+
+
+**Returns**:
+
+- `UserInfo` - The updated user's information.
+
+
+**Raises**:
+
+- `AISError` - If the user update request fails.
+
+
+
+## Class: AccessAttr
+
+```python
+class AccessAttr(IntFlag)
+```
+
+AccessAttr defines permissions as bitwise flags for access control (for more details, refer to the Go API).
+
+
+
+### describe
+
+```python
+@staticmethod
+def describe(perms: int) -> str
+```
+
+Returns a comma-separated string describing the permissions based on the provided bitwise flags.
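+
+**Example** (a sketch; the member names and output format are assumptions based on the Go API):
+
+```python
+# Combine permission flags bitwise, then render them as a string.
+perms = AccessAttr.GET | AccessAttr.PUT
+print(AccessAttr.describe(perms))  # e.g. "GET,PUT"
+```
+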
@@ -1069,7 +1447,7 @@ Does not make any HTTP request, only instantiates an object in a bucket owned by
### objects
```python
-def objects(obj_names: list = None,
+def objects(obj_names: List = None,
obj_range: ObjectRange = None,
obj_template: str = None) -> ObjectGroup
```
@@ -1094,8 +1472,8 @@ Factory constructor for multiple objects belonging to this bucket.
```python
def make_request(method: str,
action: str,
- value: dict = None,
- params: dict = None) -> requests.Response
+ value: Dict = None,
+ params: Dict = None) -> requests.Response
```
Use the bucket's client to make a request to the bucket endpoint on the AIS server
@@ -1163,190 +1541,182 @@ Write a dataset to a bucket in AIS in webdataset format using wds.ShardWriter. L
- `skip_missing` _bool, optional_ - Skip samples that are missing one or more attributes, defaults to True
- `**kwargs` _optional_ - Optional keyword arguments to pass to the ShardWriter
-
+
-## Class: Object
+## Class: Client
```python
-class Object()
+class Client()
```
-A class representing an object of a bucket bound to a client.
+AIStore client for managing buckets, objects, ETL jobs
**Arguments**:
-- `bucket` _Bucket_ - Bucket to which this object belongs
-- `name` _str_ - name of object
-- `size` _int, optional_ - size of object in bytes
-- `props` _ObjectProps, optional_ - Properties of object
+- `endpoint` _str_ - AIStore endpoint
+- `skip_verify` _bool, optional_ - If True, skip SSL certificate verification. Defaults to False.
+- `ca_cert` _str, optional_ - Path to a CA certificate file for SSL verification.
+- `timeout` _Union[float, Tuple[float, float], None], optional_ - Request timeout in seconds; a single float
+ for both connect/read timeouts (e.g., 5.0), a tuple for separate connect/read timeouts (e.g., (3.0, 10.0)),
+ or None to disable timeout.
+- `retry` _urllib3.Retry, optional_ - Retry configuration object from the urllib3 library.
+- `token` _str, optional_ - Authorization token.
-
+
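+**Example** (a minimal sketch; the endpoint is a placeholder):
+
+```python
+from aistore.sdk import Client
+
+# Separate connect/read timeouts; pass an AuthN token via `token` if required.
+client = Client("http://localhost:8080", timeout=(3.0, 10.0))
+```
+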
### bucket
```python
-@property
-def bucket()
+def bucket(bck_name: str,
+ provider: str = PROVIDER_AIS,
+ namespace: Namespace = None)
```
-Bucket containing this object.
+Factory constructor for bucket object.
+Does not make any HTTP request, only instantiates a bucket object.
-
+**Arguments**:
-### name
+- `bck_name` _str_ - Name of bucket
+- `provider` _str_ - Provider of bucket, one of "ais", "aws", "gcp", ... (optional, defaults to ais)
+- `namespace` _Namespace_ - Namespace of bucket (optional, defaults to None)
+
-```python
-@property
-def name() -> str
-```
+**Returns**:
-Name of this object.
+ The bucket object created.
-
+
-### props
+### cluster
```python
-@property
-def props() -> ObjectProps
+def cluster()
```
-Properties of this object.
+Factory constructor for cluster object.
+Does not make any HTTP request, only instantiates a cluster object.
-
+**Returns**:
-### head
+ The cluster object created.
+
+
+
+### job
```python
-def head() -> Header
+def job(job_id: str = "", job_kind: str = "")
```
-Requests object properties and returns headers. Updates props.
+Factory constructor for job object, which contains job-related functions.
+Does not make any HTTP request, only instantiates a job object.
-**Returns**:
+**Arguments**:
- Response header with the object properties.
+- `job_id` _str, optional_ - Optional ID for interacting with a specific job
+- `job_kind` _str, optional_ - Optional specific type of job empty for all kinds
-**Raises**:
+**Returns**:
-- `requests.RequestException` - "There was an ambiguous exception that occurred while handling..."
-- `requests.ConnectionError` - Connection error
-- `requests.ConnectionTimeout` - Timed out connecting to AIStore
-- `requests.ReadTimeout` - Timed out waiting response from AIStore
-- `requests.exceptions.HTTPError(404)` - The object does not exist
+ The job object created.
-
+
-### get
+### etl
```python
-def get(archive_settings: ArchiveSettings = None,
- blob_download_settings: BlobDownloadSettings = None,
- chunk_size: int = DEFAULT_CHUNK_SIZE,
- etl_name: str = None,
- writer: BufferedWriter = None,
- latest: bool = False,
- byte_range: str = None) -> ObjectReader
+def etl(etl_name: str)
```
-Reads an object
+Factory constructor for ETL object.
+Contains APIs related to AIStore ETL operations.
+Does not make any HTTP request, only instantiates an ETL object.
**Arguments**:
-- `archive_settings` _ArchiveSettings, optional_ - Settings for archive extraction
-- `blob_download_settings` _BlobDownloadSettings, optional_ - Settings for using blob download
-- `chunk_size` _int, optional_ - chunk_size to use while reading from stream
-- `etl_name` _str, optional_ - Transforms an object based on ETL with etl_name
-- `writer` _BufferedWriter, optional_ - User-provided writer for writing content output
- User is responsible for closing the writer
-- `latest` _bool, optional_ - GET the latest object version from the associated remote bucket
-- `byte_range` _str, optional_ - Specify a specific data segment of the object for transfer, including
- both the start and end of the range (e.g. "bytes=0-499" to request the first 500 bytes)
+- `etl_name` _str_ - Name of the ETL
**Returns**:
- The stream of bytes to read an object or a file inside an archive.
-
-
-**Raises**:
-
-- `requests.RequestException` - "There was an ambiguous exception that occurred while handling..."
-- `requests.ConnectionError` - Connection error
-- `requests.ConnectionTimeout` - Timed out connecting to AIStore
-- `requests.ReadTimeout` - Timed out waiting response from AIStore
+ The ETL object created.
-
+
-### get\_semantic\_url
+### dsort
```python
-def get_semantic_url()
+def dsort(dsort_id: str = "")
```
-Get the semantic URL to the object
+Factory constructor for dSort object.
+Contains APIs related to AIStore dSort operations.
+Does not make any HTTP request, only instantiates a dSort object.
+
+**Arguments**:
+
+- `dsort_id` - ID of the dSort job
+
**Returns**:
- Semantic URL to get object
+ dSort object created
-
+
-### get\_url
+### fetch\_object\_by\_url
```python
-def get_url(archpath: str = "", etl_name: str = None)
+def fetch_object_by_url(url: str) -> Object
```
-Get the full url to the object including base url and any query parameters
+Retrieve an object based on its URL.
**Arguments**:
-- `archpath` _str, optional_ - If the object is an archive, use `archpath` to extract a single file
- from the archive
-- `etl_name` _str, optional_ - Transforms an object based on ETL with etl_name
+- `url` _str_ - Full URL of the object (e.g., "ais://bucket1/file.txt")
**Returns**:
- Full URL to get object
+- `Object` - The object retrieved from the specified URL
-
+
-### put\_content
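+**Example** (a sketch reusing the URL format shown above):
+
+```python
+obj = client.fetch_object_by_url("ais://bucket1/file.txt")
+```
+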
+## Class: Cluster
```python
-def put_content(content: bytes) -> Header
+class Cluster()
```
-Puts bytes as an object to a bucket in AIS storage.
+A class representing a cluster bound to an AIS client.
-**Arguments**:
+
-- `content` _bytes_ - Bytes to put as an object.
-
+### client
-**Raises**:
+```python
+@property
+def client()
+```
-- `requests.RequestException` - "There was an ambiguous exception that occurred while handling..."
-- `requests.ConnectionError` - Connection error
-- `requests.ConnectionTimeout` - Timed out connecting to AIStore
-- `requests.ReadTimeout` - Timed out waiting response from AIStore
+Client this cluster uses to make requests
-
+
-### put\_file
+### get\_info
```python
-def put_file(path: str = None)
+def get_info() -> Smap
```
-Puts a local file as an object to a bucket in AIS storage.
+Returns state of AIS cluster, including the detailed information about its nodes.
-**Arguments**:
+**Returns**:
-- `path` _str_ - Path to local file
+- `aistore.sdk.types.Smap` - Smap containing cluster information
**Raises**:
@@ -1355,39 +1725,36 @@ Puts a local file as an object to a bucket in AIS storage.
- `requests.ConnectionError` - Connection error
- `requests.ConnectionTimeout` - Timed out connecting to AIStore
- `requests.ReadTimeout` - Timed out waiting response from AIStore
-- `ValueError` - The path provided is not a valid file
-
+
-### promote
+### get\_primary\_url
```python
-def promote(path: str,
- target_id: str = "",
- recursive: bool = False,
- overwrite_dest: bool = False,
- delete_source: bool = False,
- src_not_file_share: bool = False) -> str
+def get_primary_url() -> str
```
-Promotes a file or folder an AIS target can access to a bucket in AIS storage.
-These files can be either on the physical disk of an AIS target itself or on a network file system
-the cluster can access.
-See more info here: https://aistore.nvidia.com/blog/2022/03/17/promote
+Returns: URL of primary proxy
+
+
+
+### list\_buckets
+
+```python
+def list_buckets(provider: str = PROVIDER_AIS)
+```
+
+Returns list of buckets in AIStore cluster.
**Arguments**:
-- `path` _str_ - Path to file or folder the AIS cluster can reach
-- `target_id` _str, optional_ - Promote files from a specific target node
-- `recursive` _bool, optional_ - Recursively promote objects from files in directories inside the path
-- `overwrite_dest` _bool, optional_ - Overwrite objects already on AIS
-- `delete_source` _bool, optional_ - Delete the source files when done promoting
-- `src_not_file_share` _bool, optional_ - Optimize if the source is guaranteed to not be on a file share
+- `provider` _str, optional_ - Name of bucket provider, one of "ais", "aws", "gcp", "az" or "ht".
+ Defaults to "ais". Empty provider returns buckets of all providers.
**Returns**:
- Job ID (as str) that can be used to check the status of the operation, or empty if job is done synchronously
+- `List[BucketModel]` - A list of buckets
**Raises**:
@@ -1396,86 +1763,453 @@ See more info here: https://aistore.nvidia.com/blog/2022/03/17/promote
- `requests.ConnectionError` - Connection error
- `requests.ConnectionTimeout` - Timed out connecting to AIStore
- `requests.ReadTimeout` - Timed out waiting response from AIStore
-- `AISError` - Path does not exist on the AIS cluster storage
-
+
-### delete
+### list\_jobs\_status
```python
-def delete()
+def list_jobs_status(job_kind="", target_id="") -> List[JobStatus]
```
-Delete an object from a bucket.
+List the status of jobs on the cluster
-**Returns**:
+**Arguments**:
- None
+- `job_kind` _str, optional_ - Only show jobs of a particular type
+- `target_id` _str, optional_ - Limit to jobs on a specific target node
-**Raises**:
+**Returns**:
-- `requests.RequestException` - "There was an ambiguous exception that occurred while handling..."
-- `requests.ConnectionError` - Connection error
-- `requests.ConnectionTimeout` - Timed out connecting to AIStore
-- `requests.ReadTimeout` - Timed out waiting response from AIStore
-- `requests.exceptions.HTTPError(404)` - The object does not exist
+ List of JobStatus objects
-
+
-### blob\_download
+### list\_running\_jobs
```python
-def blob_download(chunk_size: int = None,
- num_workers: int = None,
- latest: bool = False) -> str
+def list_running_jobs(job_kind="", target_id="") -> List[str]
```
-A special facility to download very large remote objects a.k.a. BLOBs
-Returns job ID that for the blob download operation.
+List the currently running jobs on the cluster
**Arguments**:
-- `chunk_size` _int_ - chunk size in bytes
-- `num_workers` _int_ - number of concurrent blob-downloading workers (readers)
-- `latest` _bool_ - GET the latest object version from the associated remote bucket
+- `job_kind` _str, optional_ - Only show jobs of a particular type
+- `target_id` _str, optional_ - Limit to jobs on a specific target node
**Returns**:
- Job ID (as str) that can be used to check the status of the operation
-
-
-**Raises**:
-
-- `aistore.sdk.errors.AISError` - All other types of errors with AIStore
-- `requests.ConnectionError` - Connection error
-- `requests.ConnectionTimeout` - Timed out connecting to AIStore
-- `requests.exceptions.HTTPError` - Service unavailable
-- `requests.RequestException` - "There was an ambiguous exception that occurred while handling..."
+ List of jobs in the format job_kind[job_id]
-
+
-### append\_content
+### list\_running\_etls
```python
-def append_content(content: bytes,
- handle: str = "",
- flush: bool = False) -> str
+def list_running_etls() -> List[ETLInfo]
```
-Append bytes as an object to a bucket in AIS storage.
+Lists all running ETLs.
-**Arguments**:
+Note: Does not list ETLs that have been stopped or deleted.
-- `content` _bytes_ - Bytes to append to the object.
-- `handle` _str_ - Handle string to use for subsequent appends or flush (empty for the first append).
-- `flush` _bool_ - Whether to flush and finalize the append operation, making the object accessible.
+**Returns**:
+
+- `List[ETLInfo]` - A list of details on running ETLs
+
+
+
+### is\_ready
+
+```python
+def is_ready() -> bool
+```
+
+Checks if cluster is ready or still setting up.
+
+**Returns**:
+
+- `bool` - True if cluster is ready, or false if cluster is still setting up
+
+
+
+### get\_performance
+
+```python
+def get_performance(get_throughput: bool = True,
+ get_latency: bool = True,
+ get_counters: bool = True) -> ClusterPerformance
+```
+
+Retrieves and calculates the performance metrics for each target node in the AIStore cluster.
+It compiles throughput, latency, and various operational counters from each target node,
+providing a comprehensive view of the cluster's overall performance
+
+**Arguments**:
+
+- `get_throughput` _bool, optional_ - get cluster throughput
+- `get_latency` _bool, optional_ - get cluster latency
+- `get_counters` _bool, optional_ - get cluster counters
+
+
+**Returns**:
+
+- `ClusterPerformance` - An object encapsulating the detailed performance metrics of the cluster,
+ including throughput, latency, and counters for each node
+
+
+**Raises**:
+
+- `requests.RequestException` - If there's an ambiguous exception while processing the request
+- `requests.ConnectionError` - If there's a connection error with the cluster
+- `requests.ConnectionTimeout` - If the connection to the cluster times out
+- `requests.ReadTimeout` - If the timeout is reached while awaiting a response from the cluster
+
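+**Example** (a sketch; `client` is the Client instance shown earlier):
+
+```python
+# Fetch throughput and latency only, skipping counters.
+perf = client.cluster().get_performance(get_counters=False)
+```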
+
+
+### get\_uuid
+
+```python
+def get_uuid() -> str
+```
+
+Returns: UUID of AIStore Cluster
+
+
+
+## Class: Etl
+
+```python
+class Etl()
+```
+
+A class containing ETL-related functions.
+
+
+
+### name
+
+```python
+@property
+def name() -> str
+```
+
+Name of the ETL
+
+
+
+### init\_spec
+
+```python
+def init_spec(template: str,
+ communication_type: str = DEFAULT_ETL_COMM,
+ timeout: str = DEFAULT_ETL_TIMEOUT,
+ arg_type: str = "") -> str
+```
+
+Initializes ETL based on Kubernetes pod spec template.
+
+**Arguments**:
+
+- `template` _str_ - Kubernetes pod spec template
+ Existing templates can be found at `sdk.etl_templates`
+ For more information visit: https://github.com/NVIDIA/ais-etl/tree/master/transformers
+- `communication_type` _str_ - Communication type of the ETL (options: hpull, hrev, hpush)
+- `timeout` _str_ - Timeout of the ETL job (e.g. 5m for 5 minutes)
+- `arg_type` _str, optional_ - The type of argument the runtime will provide the transform function.
+ The default value of "" will provide the raw bytes read from the object.
+
+**Returns**:
+
+ Job ID string associated with this ETL
+
+
+
+### init\_code
+
+```python
+def init_code(transform: Callable,
+ dependencies: List[str] = None,
+ preimported_modules: List[str] = None,
+ runtime: str = _get_default_runtime(),
+ communication_type: str = DEFAULT_ETL_COMM,
+ timeout: str = DEFAULT_ETL_TIMEOUT,
+ chunk_size: int = None,
+ arg_type: str = "") -> str
+```
+
+Initializes ETL based on the provided source code.
+
+**Arguments**:
+
+- `transform` _Callable_ - Transform function of the ETL
+- `dependencies` _list[str]_ - Python dependencies to install
+- `preimported_modules` _list[str]_ - Modules to import before running the transform function. This can
+ be necessary in cases where the modules used attempt to import each other circularly
+- `runtime` _str_ - [optional, default= V2 implementation of the current python version if supported, else
+ python3.8v2] Runtime environment of the ETL [choose from: python3.8v2, python3.10v2, python3.11v2]
+ (see ext/etl/runtime/all.go)
+- `communication_type` _str_ - [optional, default="hpush"] Communication type of the ETL (options: hpull, hrev,
+ hpush, io)
+- `timeout` _str_ - [optional, default="5m"] Timeout of the ETL job (e.g. 5m for 5 minutes)
+- `chunk_size` _int_ - Chunk size in bytes if the transform function processes streaming data.
+ (whole object is read by default)
+- `arg_type` _optional, str_ - The type of argument the runtime will provide the transform function.
+ The default value of "" will provide the raw bytes read from the object.
+ When used with hpull communication_type, setting this to "url" will provide the URL of the object.
+
+**Returns**:
+
+ Job ID string associated with this ETL
+
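+**Example** (a sketch; the ETL name and transform function are illustrative):
+
+```python
+def upper_case(data: bytes) -> bytes:
+    # Applied to each object's raw bytes by the ETL runtime.
+    return data.upper()
+
+etl = client.etl("etl-upper-case")
+job_id = etl.init_code(transform=upper_case)
+```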
+
+
+### view
+
+```python
+def view() -> ETLDetails
+```
+
+View ETL details
+
+**Returns**:
+
+- `ETLDetails` - details of the ETL
+
+
+
+### start
+
+```python
+def start()
+```
+
+Resumes a stopped ETL with the given ETL name.
+
+Note: Deleted ETLs cannot be started.
+
+
+
+### stop
+
+```python
+def stop()
+```
+
+Stops ETL. Stops (but does not delete) all the pods created by Kubernetes for this ETL and
+terminates any transforms.
+
+
+
+### delete
+
+```python
+def delete()
+```
+
+Delete ETL. Deletes pods created by Kubernetes for this ETL and specifications for this ETL
+in Kubernetes.
+
+Note: Running ETLs cannot be deleted.
+
+
+
+### validate\_etl\_name
+
+```python
+@staticmethod
+def validate_etl_name(name: str)
+```
+
+Validate the ETL name based on specific criteria.
+
+**Arguments**:
+
+- `name` _str_ - The name of the ETL to validate.
+
+
+**Raises**:
+
+- `ValueError` - If the name is too short (less than 6 characters),
+ too long (more than 32 characters),
+ or contains invalid characters (anything other than lowercase letters, digits, or hyphens).
+
+
+
+## Class: Job
+
+```python
+class Job()
+```
+
+A class containing job-related functions.
+
+**Arguments**:
+
+- `client` _RequestClient_ - Client for interfacing with AIS cluster
+- `job_id` _str, optional_ - ID of a specific job, empty for all jobs
+- `job_kind` _str, optional_ - Specific kind of job, empty for all kinds
+
+
+
+### job\_id
+
+```python
+@property
+def job_id()
+```
+
+Return job id
+
+
+
+### job\_kind
+
+```python
+@property
+def job_kind()
+```
+
+Return job kind
+
+
+
+### status
+
+```python
+def status() -> JobStatus
+```
+
+Return status of a job
+
+**Returns**:
+
+ The job status including id, finish time, and error info.
+
+
+**Raises**:
+
+- `requests.RequestException` - "There was an ambiguous exception that occurred while handling..."
+- `requests.ConnectionError` - Connection error
+- `requests.ConnectionTimeout` - Timed out connecting to AIStore
+- `requests.ReadTimeout` - Timed out waiting response from AIStore
+
+
+
+### wait
+
+```python
+def wait(timeout: int = DEFAULT_JOB_WAIT_TIMEOUT, verbose: bool = True)
+```
+
+Wait for a job to finish
+
+**Arguments**:
+
+- `timeout` _int, optional_ - The maximum time to wait for the job, in seconds. Default timeout is 5 minutes.
+- `verbose` _bool, optional_ - Whether to log wait status to standard output
+
+
+**Returns**:
+
+ None
+
+
+**Raises**:
+
+- `requests.RequestException` - "There was an ambiguous exception that occurred while handling..."
+- `requests.ConnectionError` - Connection error
+- `requests.ConnectionTimeout` - Timed out connecting to AIStore
+- `requests.ReadTimeout` - Timed out waiting response from AIStore
+- `errors.Timeout` - Timeout while waiting for the job to finish
+
+
+
+### wait\_for\_idle
+
+```python
+def wait_for_idle(timeout: int = DEFAULT_JOB_WAIT_TIMEOUT,
+ verbose: bool = True)
+```
+
+Wait for a job to reach an idle state
+
+**Arguments**:
+
+- `timeout` _int, optional_ - The maximum time to wait for the job, in seconds. Default timeout is 5 minutes.
+- `verbose` _bool, optional_ - Whether to log wait status to standard output
+
+
+**Returns**:
+
+ None
+
+
+**Raises**:
+
+- `requests.RequestException` - "There was an ambiguous exception that occurred while handling..."
+- `requests.ConnectionError` - Connection error
+- `requests.ConnectionTimeout` - Timed out connecting to AIStore
+- `requests.ReadTimeout` - Timed out waiting response from AIStore
+- `errors.Timeout` - Timeout while waiting for the job to finish
+- `errors.JobInfoNotFound` - Raised when information on a job's status could not be found on the AIS cluster
+
+
+
+### wait\_single\_node
+
+```python
+def wait_single_node(timeout: int = DEFAULT_JOB_WAIT_TIMEOUT,
+ verbose: bool = True)
+```
+
+Wait for a job running on a single node
+
+**Arguments**:
+
+- `timeout` _int, optional_ - The maximum time to wait for the job, in seconds. Default timeout is 5 minutes.
+- `verbose` _bool, optional_ - Whether to log wait status to standard output
+
+
+**Returns**:
+
+ None
+
+
+**Raises**:
+
+- `requests.RequestException` - "There was an ambiguous exception that occurred while handling..."
+- `requests.ConnectionError` - Connection error
+- `requests.ConnectionTimeout` - Timed out connecting to AIStore
+- `requests.ReadTimeout` - Timed out waiting response from AIStore
+- `errors.Timeout` - Timeout while waiting for the job to finish
+- `errors.JobInfoNotFound` - Raised when information on a job's status could not be found on the AIS cluster
+
+
+
+### start
+
+```python
+def start(daemon_id: str = "",
+ force: bool = False,
+ buckets: List[Bucket] = None) -> str
+```
+
+Start a job and return its ID.
+
+**Arguments**:
+
+- `daemon_id` _str, optional_ - For running a job that must run on a specific target node (e.g. resilvering).
+- `force` _bool, optional_ - Override existing restrictions for a bucket (e.g., run LRU eviction even if the
+ bucket has LRU disabled).
+- `buckets` _List[Bucket], optional_ - List of one or more buckets; applicable only for jobs that have bucket
+ scope (for details on job types, see `Table` in xact/api.go).
**Returns**:
-- `handle` _str_ - Handle string to pass for subsequent appends or flush.
+ The running job ID.
**Raises**:
@@ -1484,23 +2218,28 @@ Append bytes as an object to a bucket in AIS storage.
- `requests.ConnectionError` - Connection error
- `requests.ConnectionTimeout` - Timed out connecting to AIStore
- `requests.ReadTimeout` - Timed out waiting response from AIStore
-- `requests.exceptions.HTTPError(404)` - The object does not exist
-
+
-### set\_custom\_props
+### get\_within\_timeframe
```python
-def set_custom_props(self, custom_metadata: Dict[str, str], replace_existing: bool = False)
+def get_within_timeframe(start_time: datetime.time,
+ end_time: datetime.time) -> List[JobSnapshot]
```
-Set custom properties for the object.
+Checks for jobs that started and finished within a specified timeframe
**Arguments**:
-- `custom_metadata` _Dict[str, str]_ - Custom metadata key-value pairs.
-- `replace_existing` _bool, optional_ - Whether to replace existing metadata. Defaults to False.
+- `start_time` _datetime.time_ - The start of the timeframe for monitoring jobs
+- `end_time` _datetime.time_ - The end of the timeframe for monitoring jobs
+
+
+**Returns**:
+
+- `List[JobSnapshot]` - A list of jobs that started and finished within the specified timeframe
+
**Raises**:
@@ -1508,7 +2247,8 @@ Set custom properties for the object.
- `requests.ConnectionError` - Connection error
- `requests.ConnectionTimeout` - Timed out connecting to AIStore
- `requests.ReadTimeout` - Timed out waiting response from AIStore
-- `requests.exceptions.HTTPError(404)` - The object does not exist
+- `errors.Timeout` - Timeout while waiting for the job to finish
+- `errors.JobInfoNotFound` - Raised when information on a job's status could not be found on the AIS cluster
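+
+**Example** (a sketch; "lru" is one job kind, used here for illustration):
+
+```python
+# Start an LRU eviction job, then block until it finishes.
+job_id = client.job(job_kind="lru").start()
+client.job(job_id=job_id).wait(timeout=120)
+```
+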
@@ -1877,57 +2617,69 @@ A collection of object names specified by a template in the bash brace expansion
- `template` _str_ - A string template that defines the names of objects to include in the collection
-
+
-## Class: Job
+## Class: Object
```python
-class Job()
+class Object()
```
-A class containing job-related functions.
+A class representing an object of a bucket bound to a client.
**Arguments**:
-- `client` _RequestClient_ - Client for interfacing with AIS cluster
-- `job_id` _str, optional_ - ID of a specific job, empty for all jobs
-- `job_kind` _str, optional_ - Specific kind of job, empty for all kinds
+- `bucket` _Bucket_ - Bucket to which this object belongs
+- `name` _str_ - name of object
+- `size` _int, optional_ - size of object in bytes
+- `props` _ObjectProps, optional_ - Properties of object
-
+
-### job\_id
+### bucket
```python
@property
-def job_id()
+def bucket()
```
-Return job id
+Bucket containing this object.
-
+
-### job\_kind
+### name
```python
@property
-def job_kind()
+def name() -> str
```
-Return job kind
+Name of this object.
-
+
-### status
+### props
```python
-def status() -> JobStatus
+@property
+def props() -> ObjectProps
```
-Return status of a job
+Properties of this object.
+
+
+
+### head
+
+```python
+def head() -> Header
+```
+
+Requests object properties and returns headers. Updates props.
**Returns**:
- The job status including id, finish time, and error info.
+ Response header with the object properties.
**Raises**:
@@ -1936,26 +2688,41 @@ Return status of a job
- `requests.ConnectionError` - Connection error
- `requests.ConnectionTimeout` - Timed out connecting to AIStore
- `requests.ReadTimeout` - Timed out waiting response from AIStore
+- `requests.exceptions.HTTPError(404)` - The object does not exist
-
+
-### wait
+### get
```python
-def wait(timeout: int = DEFAULT_JOB_WAIT_TIMEOUT, verbose: bool = True)
+def get(archive_settings: ArchiveSettings = None,
+ blob_download_settings: BlobDownloadSettings = None,
+ chunk_size: int = DEFAULT_CHUNK_SIZE,
+ etl_name: str = None,
+ writer: BufferedWriter = None,
+ latest: bool = False,
+ byte_range: str = None) -> ObjectReader
```
-Wait for a job to finish
+Creates and returns an ObjectReader with access to object contents and optionally writes to a provided writer.
**Arguments**:
-- `timeout` _int, optional_ - The maximum time to wait for the job, in seconds. Default timeout is 5 minutes.
-- `verbose` _bool, optional_ - Whether to log wait status to standard output
+- `archive_settings` _ArchiveSettings, optional_ - Settings for archive extraction
+- `blob_download_settings` _BlobDownloadSettings, optional_ - Settings for using blob download
+- `chunk_size` _int, optional_ - chunk_size to use while reading from stream
+- `etl_name` _str, optional_ - Transforms an object based on ETL with etl_name
+- `writer` _BufferedWriter, optional_ - User-provided writer for writing content output
+ User is responsible for closing the writer
+- `latest` _bool, optional_ - GET the latest object version from the associated remote bucket
+- `byte_range` _str, optional_ - Specify a specific data segment of the object for transfer, including
+ both the start and end of the range (e.g. "bytes=0-499" to request the first 500 bytes)
**Returns**:
- None
+ An ObjectReader which can be iterated over to stream chunks of object content or used to read all content
+ directly.
**Raises**:
@@ -1964,25 +2731,138 @@ Wait for a job to finish
- `requests.ConnectionError` - Connection error
- `requests.ConnectionTimeout` - Timed out connecting to AIStore
- `requests.ReadTimeout` - Timed out waiting response from AIStore
-- `errors.Timeout` - Timeout while waiting for the job to finish
-
+
-### wait\_for\_idle
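+**Example** (a sketch; assumes a `bucket` handle from `client.bucket("bucket1")`):
+
+```python
+reader = bucket.object("file.txt").get()
+content = reader.read_all()
+```
+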
+### get\_semantic\_url
```python
-def wait_for_idle(timeout: int = DEFAULT_JOB_WAIT_TIMEOUT,
- verbose: bool = True)
+def get_semantic_url()
```
-Wait for a job to reach an idle state
+Get the semantic URL to the object
+
+**Returns**:
+
+ Semantic URL to get object
+
+
+
+### get\_url
+
+```python
+def get_url(archpath: str = "", etl_name: str = None)
+```
+
+Get the full url to the object including base url and any query parameters
**Arguments**:
-- `timeout` _int, optional_ - The maximum time to wait for the job, in seconds. Default timeout is 5 minutes.
-- `verbose` _bool, optional_ - Whether to log wait status to standard output
+- `archpath` _str, optional_ - If the object is an archive, use `archpath` to extract a single file
+ from the archive
+- `etl_name` _str, optional_ - Transforms an object based on ETL with etl_name
+
+
+**Returns**:
+
+ Full URL to get object
+
+
+
+### put\_content
+
+```python
+def put_content(content: bytes) -> Header
+```
+
+Puts bytes as an object to a bucket in AIS storage.
+
+**Arguments**:
+
+- `content` _bytes_ - Bytes to put as an object.
+
+
+**Raises**:
+
+- `requests.RequestException` - "There was an ambiguous exception that occurred while handling..."
+- `requests.ConnectionError` - Connection error
+- `requests.ConnectionTimeout` - Timed out connecting to AIStore
+- `requests.ReadTimeout` - Timed out waiting response from AIStore
+
+
+
+### put\_file
+
+```python
+def put_file(path: str = None)
+```
+
+Puts a local file as an object to a bucket in AIS storage.
+
+**Arguments**:
+
+- `path` _str_ - Path to local file
+
+
+**Raises**:
+
+- `requests.RequestException` - "There was an ambiguous exception that occurred while handling..."
+- `requests.ConnectionError` - Connection error
+- `requests.ConnectionTimeout` - Timed out connecting to AIStore
+- `requests.ReadTimeout` - Timed out waiting response from AIStore
+- `ValueError` - The path provided is not a valid file
+
+
+
+### promote
+
+```python
+def promote(path: str,
+ target_id: str = "",
+ recursive: bool = False,
+ overwrite_dest: bool = False,
+ delete_source: bool = False,
+ src_not_file_share: bool = False) -> str
+```
+
+Promotes a file or folder an AIS target can access to a bucket in AIS storage.
+These files can be either on the physical disk of an AIS target itself or on a network file system
+the cluster can access.
+See more info here: https://aistore.nvidia.com/blog/2022/03/17/promote
+
+**Arguments**:
+
+- `path` _str_ - Path to file or folder the AIS cluster can reach
+- `target_id` _str, optional_ - Promote files from a specific target node
+- `recursive` _bool, optional_ - Recursively promote objects from files in directories inside the path
+- `overwrite_dest` _bool, optional_ - Overwrite objects already on AIS
+- `delete_source` _bool, optional_ - Delete the source files when done promoting
+- `src_not_file_share` _bool, optional_ - Optimize if the source is guaranteed to not be on a file share
+
+
+**Returns**:
+
+ Job ID (as str) that can be used to check the status of the operation, or empty if job is done synchronously
+
+**Raises**:
+
+- `requests.RequestException` - "There was an ambiguous exception that occurred while handling..."
+- `requests.ConnectionError` - Connection error
+- `requests.ConnectionTimeout` - Timed out connecting to AIStore
+- `requests.ReadTimeout` - Timed out waiting response from AIStore
+- `AISError` - Path does not exist on the AIS cluster storage
+
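+**Example** (a sketch; the destination object name and source path are placeholders reachable by AIS targets):
+
+```python
+job_id = bucket.object("imported/").promote("/mnt/nfs/data", recursive=True)
+```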
+
+
+### delete
+
+```python
+def delete()
+```
+
+Delete an object from a bucket.
+
**Returns**:
None
@@ -1994,64 +2874,63 @@ Wait for a job to reach an idle state
- `requests.ConnectionError` - Connection error
- `requests.ConnectionTimeout` - Timed out connecting to AIStore
- `requests.ReadTimeout` - Timed out waiting response from AIStore
-- `errors.Timeout` - Timeout while waiting for the job to finish
-- `errors.JobInfoNotFound` - Raised when information on a job's status could not be found on the AIS cluster
+- `requests.exceptions.HTTPError(404)` - The object does not exist
-
+
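+For example:
+
+```python
+# Raises requests.exceptions.HTTPError (404) if the object does not exist
+obj.delete()
+```
+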
-### wait\_single\_node
+### blob\_download
```python
-def wait_single_node(timeout: int = DEFAULT_JOB_WAIT_TIMEOUT,
- verbose: bool = True)
+def blob_download(chunk_size: int = None,
+ num_workers: int = None,
+ latest: bool = False) -> str
```
-Wait for a job running on a single node
+A special facility to download very large remote objects, a.k.a. BLOBs.
+Returns the job ID for the blob download operation.
**Arguments**:
-- `timeout` _int, optional_ - The maximum time to wait for the job, in seconds. Default timeout is 5 minutes.
-- `verbose` _bool, optional_ - Whether to log wait status to standard output
+- `chunk_size` _int_ - Chunk size in bytes
+- `num_workers` _int_ - Number of concurrent blob-downloading workers (readers)
+- `latest` _bool_ - GET the latest object version from the associated remote bucket
**Returns**:
- None
+ Job ID (as str) that can be used to check the status of the operation
**Raises**:
-- `requests.RequestException` - "There was an ambiguous exception that occurred while handling..."
+- `aistore.sdk.errors.AISError` - All other types of errors with AIStore
- `requests.ConnectionError` - Connection error
- `requests.ConnectionTimeout` - Timed out connecting to AIStore
-- `requests.ReadTimeout` - Timed out waiting response from AIStore
-- `errors.Timeout` - Timeout while waiting for the job to finish
-- `errors.JobInfoNotFound` - Raised when information on a job's status could not be found on the AIS cluster
+- `requests.exceptions.HTTPError` - Service unavailable
+- `requests.RequestException` - "There was an ambiguous exception that occurred while handling..."
-
+
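+A hedged sketch (the chunk size and worker count are illustrative only):
+
+```python
+job_id = obj.blob_download(chunk_size=1024 * 1024, num_workers=4)
+client.job(job_id).wait()  # block until the blob download completes
+```
+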
-### start
+### append\_content
```python
-def start(daemon_id: str = "",
- force: bool = False,
- buckets: List[Bucket] = None) -> str
+def append_content(content: bytes,
+ handle: str = "",
+ flush: bool = False) -> str
```
-Start a job and return its ID.
+Append bytes to an object in a bucket in AIS storage.
**Arguments**:
-- `daemon_id` _str, optional_ - For running a job that must run on a specific target node (e.g. resilvering).
-- `force` _bool, optional_ - Override existing restrictions for a bucket (e.g., run LRU eviction even if the
- bucket has LRU disabled).
-- `buckets` _List[Bucket], optional_ - List of one or more buckets; applicable only for jobs that have bucket
- scope (for details on job types, see `Table` in xact/api.go).
+- `content` _bytes_ - Bytes to append to the object.
+- `handle` _str_ - Handle string to use for subsequent appends or flush (empty for the first append).
+- `flush` _bool_ - Whether to flush and finalize the append operation, making the object accessible.
**Returns**:
- The running job ID.
+- `handle` _str_ - Handle string to pass for subsequent appends or flush.
**Raises**:
@@ -2060,37 +2939,37 @@ Start a job and return its ID.
- `requests.ConnectionError` - Connection error
- `requests.ConnectionTimeout` - Timed out connecting to AIStore
- `requests.ReadTimeout` - Timed out waiting response from AIStore
+- `requests.exceptions.HTTPError(404)` - The object does not exist
-
+
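+A sketch of the handle-based protocol described above (bucket and object names are hypothetical):
+
+```python
+obj = client.bucket("my-bucket").object("combined.bin")
+handle = obj.append_content(b"part one; ")
+handle = obj.append_content(b"part two", handle=handle)
+# Flush to finalize; only then does the object become accessible
+obj.append_content(b"", handle=handle, flush=True)
+```
+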
-### get\_within\_timeframe
+### set\_custom\_props
```python
-def get_within_timeframe(start_time: datetime.time,
- end_time: datetime.time) -> List[JobSnapshot]
+def set_custom_props(custom_metadata: Dict[str, str],
+ replace_existing: bool = False)
```
-Checks for jobs that started and finished within a specified timeframe
+Set custom properties for the object.
**Arguments**:
-- `start_time` _datetime.time_ - The start of the timeframe for monitoring jobs
-- `end_time` _datetime.time_ - The end of the timeframe for monitoring jobs
-
+- `custom_metadata` _Dict[str, str]_ - Custom metadata key-value pairs.
+- `replace_existing` _bool, optional_ - Whether to replace existing metadata. Defaults to False.
-**Returns**:
+
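+For example:
+
+```python
+# Add custom metadata while keeping any existing entries
+obj.set_custom_props({"source": "ingest-pipeline", "version": "1"})
+```
+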
-- `list` - A list of jobs that have finished within the specified timeframe
-
+## Class: ObjectIterator
-**Raises**:
+```python
+class ObjectIterator()
+```
-- `requests.RequestException` - "There was an ambiguous exception that occurred while handling..."
-- `requests.ConnectionError` - Connection error
-- `requests.ConnectionTimeout` - Timed out connecting to AIStore
-- `requests.ReadTimeout` - Timed out waiting response from AIStore
-- `errors.Timeout` - Timeout while waiting for the job to finish
-- `errors.JobInfoNotFound` - Raised when information on a job's status could not be found on the AIS cluster
+Represents an iterable that will fetch all objects from a bucket, querying as needed with the specified function
+
+**Arguments**:
+
+- `list_objects` _Callable_ - Function returning a BucketList from an AIS cluster
@@ -2126,9 +3005,9 @@ Object metadata attributes.
def read_all() -> bytes
```
-Read all byte data from the object content stream.
+Read all byte data directly from the object response without using a stream.
-This uses a bytes cast which makes it slightly slower and requires all object content to fit in memory at once.
+This requires all object content to fit in memory at once and downloads all content before returning.
**Returns**:
@@ -2156,180 +3035,9 @@ Returns the raw byte stream of object content.
def __iter__() -> Iterator[bytes]
```
-Creates a generator to read the stream content in chunks.
-
-**Returns**:
-
-- `Iterator[bytes]` - An iterator to access the next chunk of bytes
-
-
-
-## Class: ObjectIterator
-
-```python
-class ObjectIterator()
-```
-
-Represents an iterable that will fetch all objects from a bucket, querying as needed with the specified function
-
-**Arguments**:
-
-- `list_objects` _Callable_ - Function returning a BucketList from an AIS cluster
-
-
-
-## Class: Etl
-
-```python
-class Etl()
-```
-
-A class containing ETL-related functions.
-
-
-
-### name
-
-```python
-@property
-def name() -> str
-```
-
-Name of the ETL
-
-
-
-### init\_spec
-
-```python
-def init_spec(template: str,
- communication_type: str = DEFAULT_ETL_COMM,
- timeout: str = DEFAULT_ETL_TIMEOUT,
- arg_type: str = "") -> str
-```
-
-Initializes ETL based on Kubernetes pod spec template.
-
-**Arguments**:
-
-- `template` _str_ - Kubernetes pod spec template
- Existing templates can be found at `sdk.etl_templates`
- For more information visit: https://github.com/NVIDIA/ais-etl/tree/master/transformers
-- `communication_type` _str_ - Communication type of the ETL (options: hpull, hrev, hpush)
-- `timeout` _str_ - Timeout of the ETL job (e.g. 5m for 5 minutes)
-
-**Returns**:
-
- Job ID string associated with this ETL
-
-
-
-### init\_code
-
-```python
-def init_code(transform: Callable,
- dependencies: List[str] = None,
- preimported_modules: List[str] = None,
- runtime: str = _get_default_runtime(),
- communication_type: str = DEFAULT_ETL_COMM,
- timeout: str = DEFAULT_ETL_TIMEOUT,
- chunk_size: int = None,
- arg_type: str = "") -> str
-```
-
-Initializes ETL based on the provided source code.
-
-**Arguments**:
-
-- `transform` _Callable_ - Transform function of the ETL
-- `dependencies` _list[str]_ - Python dependencies to install
-- `preimported_modules` _list[str]_ - Modules to import before running the transform function. This can
- be necessary in cases where the modules used both attempt to import each other circularly
-- `runtime` _str_ - [optional, default= V2 implementation of the current python version if supported, else
- python3.8v2] Runtime environment of the ETL [choose from: python3.8v2, python3.10v2, python3.11v2]
- (see ext/etl/runtime/all.go)
-- `communication_type` _str_ - [optional, default="hpush"] Communication type of the ETL (options: hpull, hrev,
- hpush, io)
-- `timeout` _str_ - [optional, default="5m"] Timeout of the ETL job (e.g. 5m for 5 minutes)
-- `chunk_size` _int_ - Chunk size in bytes if transform function in streaming data.
- (whole object is read by default)
-- `arg_type` _optional, str_ - The type of argument the runtime will provide the transform function.
- The default value of "" will provide the raw bytes read from the object.
- When used with hpull communication_type, setting this to "url" will provide the URL of the object.
-
-**Returns**:
-
- Job ID string associated with this ETL
-
-
-
-### view
-
-```python
-def view() -> ETLDetails
-```
-
-View ETL details
+Make a request to get a stream from the provided object and yield chunks of the stream content.
**Returns**:
-- `ETLDetails` - details of the ETL
-
-
-
-### start
-
-```python
-def start()
-```
-
-Resumes a stopped ETL with given ETL name.
-
-Note: Deleted ETLs cannot be started.
-
-
-
-### stop
-
-```python
-def stop()
-```
-
-Stops ETL. Stops (but does not delete) all the pods created by Kubernetes for this ETL and
-terminates any transforms.
-
-
-
-### delete
-
-```python
-def delete()
-```
-
-Delete ETL. Deletes pods created by Kubernetes for this ETL and specifications for this ETL
-in Kubernetes.
-
-Note: Running ETLs cannot be deleted.
-
-
-
-### validate\_etl\_name
-
-```python
-@staticmethod
-def validate_etl_name(name: str)
-```
-
-Validate the ETL name based on specific criteria.
-
-**Arguments**:
-
-- `name` _str_ - The name of the ETL to validate.
-
-
-**Raises**:
-
-- `ValueError` - If the name is too short (less than 6 characters),
- too long (more than 32 characters),
- or contains invalid characters (anything other than lowercase letters, digits, or hyphens).
+- `Iterator[bytes]` - An iterator over each chunk of bytes in the object
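+
+A short streaming sketch, assuming `get()` returns the object reader documented here (bucket and file names are hypothetical):
+
+```python
+reader = client.bucket("my-bucket").object("large.bin").get()
+with open("large.bin", "wb") as f:
+    for chunk in reader:
+        f.write(chunk)
+```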
diff --git a/python/Makefile b/python/Makefile
index ac01c2c028a..b5e5556cce9 100644
--- a/python/Makefile
+++ b/python/Makefile
@@ -105,7 +105,11 @@ generate-sdk-docs:
@ [[ $$(pipx --help) ]] || $(PIP) install pipx; $(PYTHON) -m pipx ensurepath --force
@ [[ $$(pydoc-markdown --help) ]] || pipx install --force pydoc-markdown[novella]
@ source doc_prefixes.sh && echo "$$SDK_PREFIX" > $(SDK_DOCFILE)
- @ pydoc-markdown -I ./aistore/sdk -m client -m cluster -m bucket -m object -m multiobj.object_group -m multiobj.object_names -m multiobj.object_range -m multiobj.object_template -m job -m object_reader -m object_iterator -m etl '{ renderer: { type: markdown, descriptive_class_title: "Class: ", render_toc: true, render_toc_title: "", render_module_header: false, classdef_with_decorators: true } }' >> $(SDK_DOCFILE)
+ @ pydoc-markdown -I ./aistore/sdk \
+ -m authn.authn_client -m authn.cluster_manager -m authn.role_manager -m authn.token_manager -m authn.user_manager -m authn.access_attr \
+ -m bucket -m client -m cluster -m etl -m job -m multiobj.object_group -m multiobj.object_names -m multiobj.object_range -m multiobj.object_template \
+ -m object -m object_iterator -m object_reader \
+ '{ renderer: { type: markdown, descriptive_class_title: "Class: ", render_toc: true, render_toc_title: "", render_module_header: false, classdef_with_decorators: true } }' >> $(SDK_DOCFILE)
@ sed -i -e 's/####/###/g' $(SDK_DOCFILE)
.PHONY: generate-pytorch-docs
diff --git a/python/aistore/sdk/README.md b/python/aistore/sdk/README.md
index 6db5d9ce51d..1ca3b4f7211 100644
--- a/python/aistore/sdk/README.md
+++ b/python/aistore/sdk/README.md
@@ -4,7 +4,9 @@ AIS Python SDK provides a (growing) set of client-side APIs to access and utiliz
The project is, essentially, a Python port of the [AIS Go APIs](https://aistore.nvidia.com/docs/http-api), with additional objectives that prioritize *utmost convenience for Python developers*.
-Note that only Python 3.x (version 3.6 or later) is currently supported.
+The SDK also includes the `authn` sub-package for managing authentication, users, roles, clusters, and tokens. For more details, refer to the [AuthN sub-package README](https://github.com/NVIDIA/aistore/blob/main/python/aistore/sdk/authn/README.md).
+
+> Only Python 3.x (version 3.6 or later) is currently supported.
---
diff --git a/python/aistore/sdk/authn/README.md b/python/aistore/sdk/authn/README.md
new file mode 100644
index 00000000000..aa542ba176c
--- /dev/null
+++ b/python/aistore/sdk/authn/README.md
@@ -0,0 +1,121 @@
+## AIS Python SDK AuthN Sub-Package
+
+The AIStore Authentication Server (AuthN) is a standalone service that provides secure, user- and role-based access to AIStore by leveraging [OAuth 2.0](https://oauth.net/2/) compliant [JSON Web Tokens (JWTs)](https://datatracker.ietf.org/doc/html/rfc7519). The `aistore.sdk.authn` sub-package in the Python SDK allows developers to interact with the AuthN server to manage authentication, users, roles, clusters, and tokens seamlessly.
+
+> For more details, please refer to the [AuthN documentation](https://github.com/NVIDIA/aistore/blob/main/docs/authn.md).
+
+### Quick Start
+
+#### Client Initialization
+
+To interact with a running AuthN instance, create an `AuthNClient` object by providing the endpoint of the AuthN server:
+
+```python
+from aistore.sdk.authn import AuthNClient
+
+authn_client = AuthNClient("http://localhost:52001")
+```
+
+This `authn_client` enables the management of roles, users, clusters, and tokens.
+
+#### Logging In
+
+Log in to the AuthN server to get an authorization token for secure interactions:
+
+```python
+from aistore.sdk.authn import AuthNClient
+from aistore.sdk import Client
+
+# Initialize AuthN client and login as admin
+authn_client = AuthNClient("http://localhost:52001")
+admin_auth_token = authn_client.login("admin", "admin")
+
+# Initialize AIStore client with the token
+aistore_client = Client("http://localhost:8080", admin_auth_token)
+```
+
+#### Registering a Cluster
+
+Register a cluster with the AuthN server for security management. This allows AuthN to ensure that security policies, such as revoked tokens, are properly managed across all clusters. During registration, the AuthN server contacts the AIStore server to verify that `authn.enabled` is set and that the secret signing key hashes match:
+
+```python
+cluster_alias = "my-cluster"
+cluster_manager = authn_client.cluster_manager()
+cluster_info = cluster_manager.register(cluster_alias, ["http://localhost:8080"])
+```
+
+#### Creating Custom Roles & Users
+
+Define custom roles and manage users to control access to AIStore resources at the bucket or cluster level. Use the `AccessAttr` class to specify permissions such as `GET`, `PUT`, and `CREATE-BUCKET`, tailored to your needs.
+
+> For more information on the specific access permissions, please refer to the permissions section of the [AuthN documentation](https://github.com/NVIDIA/aistore/blob/main/docs/authn.md#permissions) and the [`AccessAttr`](https://github.com/NVIDIA/aistore/blob/main/docs/python_sdk.md#authn.access_attr.AccessAttr) section of the [Python SDK Documentation](https://github.com/NVIDIA/aistore/blob/main/docs/python_sdk.md).
+
+##### Creating a Custom Role
+
+Use the `RoleManager` class to create roles that define access permissions:
+
+```python
+from aistore.sdk.authn import AccessAttr
+
+role_manager = authn_client.role_manager()
+
+# Custom Role w/ Object GET & PUT Access for `ais://my-bucket`
+custom_role = role_manager.create(
+ name="Custom-Role",
+ desc="Role to GET and PUT objects in specified bucket",
+ cluster_alias=cluster_alias,
+ perms=[AccessAttr.GET, AccessAttr.PUT],
+ bucket_name="my-bucket"
+)
+```
+
+The `perms` parameter uses the `AccessAttr` class to specify allowed actions for the role.
+
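+Individual access attributes can be combined. A hypothetical sketch, assuming `AccessAttr` values behave as integer bit flags and that `describe` accepts such a bitmask:
+
+```python
+from aistore.sdk.authn import AccessAttr
+
+# Assumption: attributes combine via bitwise OR into a single bitmask
+combined = AccessAttr.GET | AccessAttr.PUT
+print(AccessAttr.describe(combined))
+```
+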
+##### Creating a Custom User
+
+Use the `UserManager` class to create users and assign roles:
+
+```python
+user_manager = authn_client.user_manager()
+
+custom_user = user_manager.create(
+ username="myusername",
+ roles=["Custom-Role"],
+ password="mypassword"
+)
+
+custom_user_token = authn_client.login("myusername", "mypassword")
+aistore_client = Client("http://localhost:8080", custom_user_token)
+```
+
+#### Managing Tokens
+
+The `TokenManager` class provides methods to manage tokens, including revoking them to maintain secure access control.
+
+##### Revoking Tokens
+
+Revoke a token to prevent unauthorized access:
+
+```python
+token_manager = authn_client.token_manager()
+token_manager.revoke(custom_user_token)
+```
+
+When a token is revoked, the AuthN server updates all registered clusters to ensure the token is no longer valid.
+
+### API Documentation
+
+| Module | Summary |
+| --- | --- |
+| [authn_client.py](https://github.com/NVIDIA/aistore/blob/main/python/aistore/sdk/authn/authn_client.py) | Manages AuthN server interactions, including login, logout, and managing clusters, roles, users, and tokens. |
+| [cluster_manager.py](https://github.com/NVIDIA/aistore/blob/main/python/aistore/sdk/authn/cluster_manager.py) | Handles cluster management, including registration and updates. |
+| [role_manager.py](https://github.com/NVIDIA/aistore/blob/main/python/aistore/sdk/authn/role_manager.py) | Manages roles and their permissions within AIStore. |
+| [token_manager.py](https://github.com/NVIDIA/aistore/blob/main/python/aistore/sdk/authn/token_manager.py) | Manages tokens, including revocation and secure access control. |
+| [user_manager.py](https://github.com/NVIDIA/aistore/blob/main/python/aistore/sdk/authn/user_manager.py) | Manages users and their roles within AIStore. |
+| [access_attr.py](https://github.com/NVIDIA/aistore/blob/main/python/aistore/sdk/authn/access_attr.py) | Defines permissions for access control, including predefined sets like read-only and admin access. |
+
+For more detailed information on available methods, roles, and permissions, as well as best practices for using the AuthN sub-package and the AIStore SDK, please refer to the [Python SDK documentation](https://aistore.nvidia.com/docs/python_sdk.md).
+
+### References
+
+* [AIStore GitHub](https://github.com/NVIDIA/aistore)
+* [Documentation](https://aistore.nvidia.com/docs)
+* [AIStore `pip` Package](https://pypi.org/project/aistore/)
diff --git a/python/doc_prefixes.sh b/python/doc_prefixes.sh
index e1138ab727f..2cfbaf5231a 100755
--- a/python/doc_prefixes.sh
+++ b/python/doc_prefixes.sh
@@ -13,7 +13,7 @@ redirect_from:
AIStore Python SDK is a growing set of client-side objects and methods to access and utilize AIS clusters. This document contains API documentation
for the AIStore Python SDK.
-> For our PyTorch integration, please refer to the [Pytorch Docs](https://github.com/NVIDIA/aistore/tree/main/docs/pytorch.md).
+> For our PyTorch integration, please refer to the [PyTorch Docs](https://github.com/NVIDIA/aistore/tree/main/docs/pytorch.md).
For more information, please refer to [AIS Python SDK](https://pypi.org/project/aistore) available via Python Package Index (PyPI)
or see [https://github.com/NVIDIA/aistore/tree/main/python/aistore](https://github.com/NVIDIA/aistore/tree/main/python/aistore).