diff --git a/next-env.d.ts b/next-env.d.ts
index 52e831b43..254b73c16 100644
--- a/next-env.d.ts
+++ b/next-env.d.ts
@@ -1,5 +1,6 @@
/// <reference types="next" />
/// <reference types="next/image-types/global" />
+/// <reference types="next/navigation-types/compat/navigation" />
// NOTE: This file should not be edited
// see https://nextjs.org/docs/pages/api-reference/config/typescript for more information.
diff --git a/pages/clustering/high-availability.mdx b/pages/clustering/high-availability.mdx
index fa0d39f1f..92307249d 100644
--- a/pages/clustering/high-availability.mdx
+++ b/pages/clustering/high-availability.mdx
@@ -59,36 +59,74 @@ metrics which reveal us p50, p90 and p99 latencies of RPC messages, the duration
in the cluster. We are also counting the number of different RPC messages exchanged and the number of failed requests since this can give
us information about parts of the cluster that need further care. You can see the full list of metrics [here](/database-management/monitoring#system-metrics).
-
-
-When deploying coordinators to servers, you can use the instance of almost any size. Instances of 4GiB or 8GiB will suffice since coordinators'
-job mainly involves network communication and storing Raft metadata. Coordinators and data instances can be deployed on same servers (pairwise)
-but from the availability perspective, it is better to separate them physically.
-
-
+## Bolt+routing
+Directly connecting to the main instance isn't preferred in the HA cluster since
+the main instance changes due to various failures. Because of that, users can
+use **Bolt with routing**, which ensures that write queries are always sent to
+the current main instance. This prevents split-brain scenarios, as clients never
+write to the old main but are automatically routed to the new main after a
+failover.
+
+The routing protocol works as follows: the client sends a `ROUTE` Bolt
+message to any coordinator instance. The coordinator responds with a **routing
+table** containing three entries:
+
+1. Instances from which data can be read
+2. The instance where data can be written
+3. Instances acting as routers
+
+When a client connects directly to the cluster leader, the leader immediately
+returns the current routing table. Thanks to the Raft consensus protocol, the
+leader always has the most up-to-date cluster state. If a follower receives a
+routing request, it forwards the request to the current leader, ensuring the
+client always gets accurate routing information.
+
+This ensures:
+
+- **Consistency**: All clients receive the same routing information, regardless of
+their entry point.
+- **Reliability**: The Raft consensus protocol ensures data accuracy on the leader
+node.
+- **Transparency**: Client requests are handled seamlessly, whether connected to
+leaders or followers.
+
+In the Memgraph HA cluster:
+
+- The **main instance** is the only writable instance
+- **Replicas** are readable instances
+- **Coordinators** act as routers
+
+However, the cluster can be configured in such a way that the main can also be
+used for reading. Check this
+[paragraph](#setting-config-for-highly-available-cluster) for more info.
+
+Bolt+routing is the client-side routing protocol, meaning network endpoint
+resolution happens inside drivers. For more details about the Bolt messages
+involved in the communication, check [the following
+link](https://neo4j.com/docs/bolt/current/bolt/message/#messages-route).
+
+Users only need to change the scheme they use for connecting to coordinators.
+This means instead of using `bolt://`, you should use
+`neo4j://` to get an active connection to the current
+main instance in the cluster. You can find examples of how to use bolt+routing
+in different programming languages
+[here](https://github.com/memgraph/memgraph/tree/master/tests/drivers).
+
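+As an illustration, here is a minimal connection sketch using the Neo4j Python
+driver (the address and credentials are placeholders):
+
+```python
+from neo4j import GraphDatabase
+
+# The neo4j:// scheme makes the driver fetch the routing table from a
+# coordinator, so write queries always reach the current main instance.
+driver = GraphDatabase.driver("neo4j://localhost:7690", auth=("user", "password"))
+with driver.session() as session:
+    session.run("CREATE (:HealthCheck {at: localDateTime()})")
+driver.close()
+```
+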
+It is important to note that setting up the cluster on one coordinator
+(registration of data instances and coordinators, setting main) must be done
+using a direct bolt connection, since bolt+routing is only used for routing data-related
+queries, not coordinator-based queries.
-## Bolt+routing
+## System configuration
-Directly connecting to the main instance isn't preferred in the HA cluster since the main instance changes due to various failures. Because of that, users
-can use bolt+routing so that write queries can always be sent to the correct data instance. This will prevent a split-brain issue since clients, when writing,
-won't be routed to the old main but rather to the new main instance on which failover got performed.
-This protocol works in a way that the client
-first sends a ROUTE bolt message to any coordinator instance. The coordinator replies to the message by returning the routing table with three entries specifying
-from which instance can the data be read, to which instance data can be written to and which instances behave as routers. In the Memgraph HA cluster, the main
-data instance is the only writeable instance, replicas are readable instances, and COORDINATORs behave as routers. However, the cluster can be configured in such a way
-that main can also be used for reading. Check this [paragraph](#setting-config-for-highly-available-cluster) for more info.
-Bolt+routing is the client-side routing protocol, meaning network endpoint resolution happens inside drivers.
-For more details about the Bolt messages involved in the communication, check [the following link](https://neo4j.com/docs/bolt/current/bolt/message/#messages-route).
-Users only need to change the scheme they use for connecting to coordinators. This means instead of using `bolt://,` you should
-use `neo4j://` to get an active connection to the current main instance in the cluster. You can find examples of how to
-use bolt+routing in different programming languages [here](https://github.com/memgraph/memgraph/tree/master/tests/drivers).
+When deploying coordinators to servers, you can use instances of almost any size. Instances of 4GiB or 8GiB will suffice since coordinators'
+job mainly involves network communication and storing Raft metadata. Coordinators and data instances can be deployed on the same servers (pairwise),
+but from the availability perspective, it is better to separate them physically.
-It is important to note that setting up the cluster on one coordinator (registration of data instances and coordinators, setting main) must be done using bolt connection
-since bolt+routing is only used for routing data-related queries, not coordinator-based queries.
+When setting up disk space, you should always make sure that there is space for at least `--snapshot-retention-count+1` snapshots plus a few WAL files. That's
+because Memgraph first creates the (N+1)th snapshot and only then deletes the oldest one, so it can guarantee that the creation of the new snapshot finished successfully. This is
+especially important when using Memgraph HA in K8s, since K8s deployments usually set a limit on the disk space used.
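+
+For example, with a snapshot retention count of 3 and snapshots of roughly 2GiB
+each, plan for at least (3+1) × 2GiB = 8GiB of snapshot space, plus headroom for
+a few WAL files (the numbers here are purely illustrative).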
-## System configuration
Important note if you're using native Memgraph deployment with Red Hat.
@@ -104,35 +142,32 @@ This rule could also apply to CentOS and Fedora, but at the moment it's not test
## Authentication
-When using authentication features for high availability, there are a couple of
-things to consider first.
+User accounts exist exclusively on data instances; coordinators do not manage user authentication. Therefore, coordinator instances prohibit:
+ - Environment variables `MEMGRAPH_USER` and `MEMGRAPH_PASSWORD`.
+ - Authentication queries such as `CREATE USER`.
-Currently, it is not possible to create users with the query `CREATE USER` on
-**coordinators**. Instead, for the direct Bolt connection to coordinators to work as
-expected, create user by [setting environment
-variables](/database-management/configuration#environment-variables)
-`MEMGRAPH_USER` and `MEMGRAPH_PASSWORD`.
+When using the **bolt+routing protocol**, provide credentials for users that exist on the data instances. The authentication flow works as follows:
-On the other hand, users can be created using the `CREATE USER` query on the
-**main data instance**. Such user will be replicated to other replica data instances
-but coordinators won't know about this user.
+1. Client connects to a **coordinator**.
+2. Coordinator responds with the **routing table** (without authenticating).
+3. Client connects to the **designated data instance** using the **same credentials**.
+4. Data instance **authenticates the user and processes the request**.
+
+This architecture separates routing coordination from user management, ensuring that authentication occurs only where user data resides.
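+
+For example, a user intended for bolt+routing connections is created once on
+the current main over a direct bolt connection and is then replicated to the
+replicas (the name and password here are illustrative):
+
+```cypher
+CREATE USER alice IDENTIFIED BY 'alice_password';
+```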
-When using the **bolt+routing** type of connection, the drivers are using the same
-authentication arguments for connecting to coordinators and data instances.
-Because of that, for bolt+routing connections, the user created on data
-instances needs to exist on coordinators too.
## Starting instances
You can start the data and coordinator instances using environment flags or configuration flags.
-The main difference between data instance and coordinator is that data instances have `--management-port`, whereas coordinators must have `--coordinator-id` and `--coordinator-port`.
+The main difference between a data instance and a coordinator is that data instances have `--management-port`,
+whereas coordinators must have `--coordinator-id` and `--coordinator-port`.
### Configuration Flags
#### Data instance
Memgraph data instance must use flag `--management-port=`. This flag is tied to the high availability feature, enables the coordinator to connect to the data instance,
-and allows the Memgraph data instance to use the high availability feature.
+and allows the Memgraph data instance to use the high availability feature. The flag `--storage-wal-enabled` must be set, otherwise the data instance won't start.
```
docker run --name instance1 -p 7687:7687 -p 7444:7444 memgraph/memgraph-mage
@@ -395,7 +430,7 @@ server that coordinator instances use for communication. Flag `--instance-health
check the status of the replication instance to update its status. Flag `--instance-down-timeout-sec` gives the user the ability to control how much time should
pass before the coordinator starts considering the instance to be down.
-There is also a configuration option for specifying whether reads from the main are enabled. The configuration value is by default false but can be changed in run-time
+There is a configuration option for specifying whether reads from the main are enabled. The configuration value is `false` by default but can be changed at runtime
using the following query:
```
@@ -408,12 +443,39 @@ Users can also choose whether failover to the async replica is allowed by using
SET COORDINATOR SETTING 'sync_failover_only' TO 'true'/'false' ;
```
By default, the value is `true`, which means that only sync replicas are candidates in the election. When the value is set to `false`, the async replica is also considered, but
there is an additional risk of experiencing data loss. However, failover to an async replica may be necessary when other sync replicas are down and you want to
manually perform a failover.
+
+Users can control the maximum transaction lag allowed during failover through configuration. If a replica is behind the main instance by more than the configured threshold,
+that replica becomes ineligible for failover. This prevents data loss beyond the user's acceptable limits.
+
+To implement this functionality, we employ a caching mechanism on the cluster leader that tracks replicas' lag. The cache gets updated with each StateCheckRpc response from
+replicas. During the brief failover window on the coordinators' side, the new cluster leader may not have current lag information for all data instances, and in that case,
+any replica can become main. This trade-off is intentional: it avoids flooding Raft logs with frequently changing lag data while maintaining failover safety guarantees
+in the large majority of situations.
+
+The configuration value can be controlled using the query:
+
+```
+SET COORDINATOR SETTING 'max_failover_replica_lag' TO '10';
+```
+
+Users can control the maximum allowed replica lag to maintain read consistency. When a replica falls behind the current main by more than `max_replica_read_lag` transactions, the
+bolt+routing protocol will exclude that replica from read query routing to ensure data freshness.
+
+The configuration value can be controlled using the query:
+
+```
+SET COORDINATOR SETTING 'max_replica_read_lag' TO '10';
+```
+
All run-time configuration options can be retrieved using:
```
@@ -507,11 +569,18 @@ about other replicas to which it will replicate data. Once that request succeeds
### Choosing new main from available replicas
-When failover is happening, some replicas can also be down. From the list of alive replicas, a new main is chosen. First, the leader coordinator contacts each alive replica
-to get info about each database's last commit timestamp. In the case of enabled multi-tenancy, from each instance coordinator will get info on all databases and their last commit
-timestamp. Currently, the coordinator chooses an instance to become a new main by comparing the latest commit timestamps of all databases. The instance which is newest on most
-databases is considered the best candidate for the new main. If there are multiple instances which have the same number of newest databases, we sum timestamps of all databases
-and consider instance with a larger sum as the better candidate.
+
+During failover, the coordinator must select a new main instance from available replicas, as some may be offline. The leader coordinator queries each live replica to
+retrieve the committed transaction count for every database.
+
+The selection algorithm prioritizes data recency using a two-phase approach:
+
+1. **Database majority rule**: The coordinator identifies which replica has the highest committed transaction count for each database. The replica that leads in the most
+databases becomes the preferred candidate.
+2. **Total transaction tiebreaker**: If multiple replicas tie for leading the most databases, the coordinator sums each replica's committed transactions across all databases.
+The replica with the highest total becomes the new main.
+
+This approach ensures the new main instance has the most up-to-date data across the cluster while maintaining consistency guarantees.
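+
+A minimal sketch of this selection logic (illustrative only, not Memgraph's
+actual implementation):
+
+```python
+def pick_new_main(replicas):
+    # replicas: {replica_name: {database_name: committed_txn_count}}
+    databases = {db for counts in replicas.values() for db in counts}
+    leads = {name: 0 for name in replicas}
+    for db in databases:
+        # Phase 1: count, per replica, the databases in which it leads.
+        leads[max(replicas, key=lambda r: replicas[r].get(db, 0))] += 1
+    best = max(leads.values())
+    tied = [r for r, n in leads.items() if n == best]
+    # Phase 2: break ties by total committed transactions across databases.
+    return max(tied, key=lambda r: sum(replicas[r].values()))
+```
+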
### Old main rejoining to the cluster
@@ -685,6 +754,242 @@ distributed in any way you want between data centers. The failover time will be
We support deploying Memgraph HA as part of the Kubernetes cluster through Helm charts.
You can see example configurations [here](/getting-started/install-memgraph/kubernetes#memgraph-high-availability-helm-chart).
+## In-Service Software Upgrade (ISSU)
+
+Memgraph’s **High Availability** supports in-service software upgrades (ISSU).
+This guide explains the process when using [HA Helm
+charts](/getting-started/install-memgraph/kubernetes#memgraph-high-availability-helm-chart).
+The procedure is very similar for native deployments.
+
+
+
+**Important**: Although the upgrade process is designed to complete
+successfully, unexpected issues may occur. We strongly recommend backing up
+the `lib` directory on all of your `StatefulSets` or native instances,
+depending on the deployment type.
+
+
+
+
+
+### Prerequisites
+
+If you are using **HA Helm charts**, set the following configuration before
+doing any upgrade.
+
+ ```yaml
+ updateStrategy:
+   type: OnDelete
+ ```
+
+Depending on the infrastructure on which you have your Memgraph cluster, the
+details will differ slightly, but the overall process is the same.
+
+Prepare a backup of all data from all instances. This ensures you can safely
+downgrade the cluster to the last stable version you had.
+
+ - For **native deployments**, tools like `cp` or `rsync` are sufficient.
+ - For **Kubernetes**, create a `VolumeSnapshotClass` with a YAML file similar
+   to this:
+
+ ```yaml
+ apiVersion: snapshot.storage.k8s.io/v1
+ kind: VolumeSnapshotClass
+ metadata:
+   name: csi-azure-disk-snapclass
+ driver: disk.csi.azure.com
+ deletionPolicy: Delete
+ ```
+
+ Apply it:
+
+ ```bash
+ kubectl apply -f azure_class.yaml
+ ```
+
+ - On **Google Kubernetes Engine**, the default CSI driver is
+   `pd.csi.storage.gke.io`, so make sure to change the `driver` field (a sketch follows this list).
+ - On **AWS EKS**, refer to the [AWS snapshot controller
+ docs](https://docs.aws.amazon.com/eks/latest/userguide/csi-snapshot-controller.html).
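+
+A hypothetical GKE variant of the class above (only the illustrative `name` and
+the `driver` differ):
+
+```yaml
+apiVersion: snapshot.storage.k8s.io/v1
+kind: VolumeSnapshotClass
+metadata:
+  name: csi-gce-pd-snapclass
+driver: pd.csi.storage.gke.io
+deletionPolicy: Delete
+```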
+
+
+### Create snapshots
+
+Now you can create a `VolumeSnapshot` of the `lib` directory's volume using a YAML file like this:
+
+```yaml
+apiVersion: snapshot.storage.k8s.io/v1
+kind: VolumeSnapshot
+metadata:
+ name: coord-3-snap # Use a unique name for each instance
+ namespace: default
+spec:
+ volumeSnapshotClassName: csi-azure-disk-snapclass
+ source:
+ persistentVolumeClaimName: memgraph-coordinator-3-lib-storage-memgraph-coordinator-3-0
+```
+
+Apply it:
+
+```bash
+kubectl apply -f azure_snapshot.yaml
+```
+
+Repeat for every instance in the cluster.
+
+
+### Update configuration
+
+Next, update the `image.tag` field in the `values.yaml` configuration file
+to the version to which you want to upgrade your cluster.
+
+1. In your `values.yaml`, update the image version:
+
+ ```yaml
+ image:
+   tag: <new-version>
+ ```
+2. Apply the upgrade:
+
+ ```bash
+ helm upgrade <release-name> memgraph/memgraph-high-availability -f values.yaml
+ ```
+
+ Since we are using `updateStrategy.type=OnDelete`, this step will not restart
+ any pod; rather, it will just prepare pods for running the new version.
+ - For **native deployments**, ensure the new binary is available.
+
+
+### Upgrade procedure (zero downtime)
+
+Our procedure for achieving zero-downtime upgrades consists of restarting one
+instance at a time. Memgraph uses **primary–secondary replication**. To avoid
+downtime:
+
+1. Upgrade **replicas** first.
+2. Upgrade the **main** instance.
+3. Upgrade **coordinator followers**, then the **leader**.
+
+To find out on which pod/server the current main and the current
+cluster leader sit, run:
+
+```cypher
+SHOW INSTANCES;
+```
+
+
+### Upgrade replicas
+
+If you are using K8s, the upgrade can be performed by deleting the pod. Start by
+deleting the replica pod (in this example, the replica is running on the pod
+`memgraph-data-1-0`):
+
+```bash
+kubectl delete pod memgraph-data-1-0
+```
+
+**Native deployment:** stop the old binary and start the new one.
+
+Before starting the upgrade of the next pod, it is important to wait until all
+pods are ready. Otherwise, you may end up with data loss. On K8s you can
+easily achieve that by running:
+
+```bash
+kubectl wait --for=condition=ready pod --all
+```
+
+For native deployments, manually check that all your instances are alive.
+
+This step should be repeated for all of your replicas in the cluster.
+
+
+### Upgrade the main
+
+Before deleting the main pod, check replication lag to see whether replicas are
+behind MAIN:
+
+```cypher
+SHOW REPLICATION LAG;
+```
+
+If replicas are behind, your upgrade will be prone to data loss. To achieve a
+zero-downtime upgrade without any data loss, either:
+
+ - Use `STRICT_SYNC` mode (writes will be blocked during upgrade), or
+ - Wait until replicas are fully caught up, then pause writes. This way, you
+can use any replication mode. Read queries should, however, work without any
+issues regardless of the replica type you are using.
+
+Upgrade the main pod:
+
+```bash
+kubectl delete pod memgraph-data-0-0
+kubectl wait --for=condition=ready pod --all
+```
+
+
+### Upgrade coordinators
+
+The upgrade of coordinators is done in exactly the same way. Start by upgrading
+followers and finish by deleting the leader pod:
+
+```bash
+kubectl delete pod memgraph-coordinator-3-0
+kubectl wait --for=condition=ready pod --all
+
+kubectl delete pod memgraph-coordinator-2-0
+kubectl wait --for=condition=ready pod --all
+
+kubectl delete pod memgraph-coordinator-1-0
+kubectl wait --for=condition=ready pod --all
+```
+
+
+
+### Verify upgrade
+
+Your upgrade should now be complete. To check that everything works, run:
+
+```cypher
+SHOW VERSION;
+```
+
+It should show you the new Memgraph version.
+
+
+### Rollback
+
+If an error occurred during the upgrade, or something doesn't work even after
+upgrading all of your pods (e.g., write queries don't pass), you can safely
+downgrade your cluster to the previous version using the `VolumeSnapshots` you
+took on K8s or the file backups for native deployments.
+
+- **Kubernetes:**
+
+ ```bash
+ helm uninstall <release-name>
+ ```
+
+ In `values.yaml`, for all instances set:
+
+ ```yaml
+ restoreDataFromSnapshot: true
+ ```
+
+ Make sure to set the correct name of the snapshot you will use to recover your
+instances.
+
+- **Native deployments:** restore from your file backups.
+
+
+
+
+If you're doing an upgrade on `minikube`, it is important to make sure that the
+snapshot resides on the same node on which the `StatefulSet` is installed.
+Otherwise, it won't be able to restore the `StatefulSet`'s attached
+`PersistentVolumeClaim` from the `VolumeSnapshot`.
+
+
+
## Docker Compose
The following example shows you how to setup Memgraph cluster using Docker Compose. The cluster will use user-defined bridge network.
diff --git a/pages/database-management/authentication-and-authorization/multiple-roles.mdx b/pages/database-management/authentication-and-authorization/multiple-roles.mdx
index 648331be5..c1d19ac2c 100644
--- a/pages/database-management/authentication-and-authorization/multiple-roles.mdx
+++ b/pages/database-management/authentication-and-authorization/multiple-roles.mdx
@@ -13,6 +13,12 @@ users to have different roles assigned to them for specific databases. This
feature enables proper tenant isolation and fine-grained access control in
multi-tenant environments.
+
+
+User-role mappings are simple maps stored on the user object. Deleting or renaming a database will not update this information, so the admin needs to make sure that correct access is maintained at all times.
+
+
+
## Privileges with multiple roles
When a user has multiple roles, their privileges are combined according to the
@@ -215,7 +221,7 @@ specification, even in multi-tenant environments. It will show all roles
assigned to the user across all databases.
```cypher
--- Show all roles for a user (works in all environments)
+-- Show all roles for a user (works in all environments)
SHOW ROLE FOR user_name;
SHOW ROLES FOR user_name;
```
diff --git a/pages/database-management/authentication-and-authorization/role-based-access-control.mdx b/pages/database-management/authentication-and-authorization/role-based-access-control.mdx
index 130fa1f07..e83499792 100644
--- a/pages/database-management/authentication-and-authorization/role-based-access-control.mdx
+++ b/pages/database-management/authentication-and-authorization/role-based-access-control.mdx
@@ -114,6 +114,12 @@ SHOW ROLE FOR user_name ON CURRENT;
SHOW ROLE FOR user_name ON DATABASE database_name;
```
+
+
+User-role mappings are simple maps stored on the user object. Deleting or renaming a database will not update this information, so the admin needs to make sure that correct access is maintained at all times.
+
+
+
These commands return the aggregated roles for the user in the specified
database context. The `ON MAIN` option shows roles for the user's main database,
`ON CURRENT` shows roles for whatever database is currently active, and `ON
diff --git a/pages/database-management/multi-tenancy.md b/pages/database-management/multi-tenancy.mdx
similarity index 60%
rename from pages/database-management/multi-tenancy.md
rename to pages/database-management/multi-tenancy.mdx
index 8226ba2ef..8d47e6000 100644
--- a/pages/database-management/multi-tenancy.md
+++ b/pages/database-management/multi-tenancy.mdx
@@ -3,7 +3,9 @@ title: Multi-tenancy (Enterprise)
description: Discover the benefits of multi-tenancy for scalability, resource utilization, and performance. Also learn how to manage few isolated databases within a single instance in our detailed documentation.
---
-# Multi-tenancy (Enterprise)
+import { Callout } from 'nextra/components'
+
+# Multi-tenancy (Enterprise)
Multi-tenant support in Memgraph enables users to manage multiple isolated
databases within a single instance. The primary objective is to facilitate
@@ -22,17 +24,30 @@ database name cannot be altered.
### Default database best practices
-In multi-tenant environments, we recommend treating the default "memgraph" database as an administrative/system database rather than storing application data in it. This approach provides better security and isolation, especially given recent changes to authentication and authorization requirements.
+In multi-tenant environments, we recommend treating the default "memgraph"
+database as an administrative/system database rather than storing application
+data in it. This approach provides better security and isolation, especially
+given recent changes to authentication and authorization requirements.
#### Why treat memgraph as an admin database?
-As of Memgraph v3.5, users have to have both the `AUTH` privilege and access to the default "memgraph" database to execute authentication and authorization queries. Additionally, replication queries (such as `REGISTER REPLICA`, `SHOW REPLICAS`, etc.) and multi-database queries (such as `SHOW DATABASES`, `CREATE DATABASE`, etc.) also now target the "memgraph" database and require access to it. This requirement affects multi-tenant environments where users might have access to other databases but not the default one.
+As of Memgraph v3.5, users must have both the `AUTH` privilege and access to
+the default "memgraph" database to execute authentication and authorization
+queries. Additionally, replication queries (such as `REGISTER REPLICA`, `SHOW
+REPLICAS`, etc.) and multi-database queries (such as `SHOW DATABASES`, `CREATE
+DATABASE`, etc.) also now target the "memgraph" database and require access to
+it. This requirement affects multi-tenant environments where users might have
+access to other databases but not the default one.
#### Recommended setup
-1. **Restrict memgraph database access**: Only grant access to the "memgraph" database to privileged users who need to perform system administration tasks
-2. **Use tenant-specific databases**: Store all application data in dedicated tenant databases
-3. **Separate concerns**: Keep user management, role management, system administration, replication management, and multi-database management separate from application data
+1. **Restrict memgraph database access**: Only grant access to the "memgraph"
+ database to privileged users who need to perform system administration tasks
+2. **Use tenant-specific databases**: Store all application data in dedicated
+ tenant databases
+3. **Separate concerns**: Keep user management, role management, system
+ administration, replication management, and multi-database management
+ separate from application data
#### Example configuration
@@ -82,7 +97,8 @@ SET ROLE FOR tenant2_regular_user TO tenant2_user;
```
In this configuration:
-- `system_admin_user` can perform all authentication/authorization, replication, and multi-database operations and has access to the "memgraph" database
+- `system_admin_user` can perform all authentication/authorization, replication,
+ and multi-database operations and has access to the "memgraph" database
- Tenant users can only access their respective tenant databases
- Application data is completely isolated in tenant-specific databases
- The "memgraph" database serves purely as an administrative database
@@ -94,8 +110,8 @@ instances. Queries executed on a specific database should operate as if it were
the sole database in the system, preventing cross-database contamination. Users
interact with individual databases, and cross-database queries are prohibited.
-Every database has its own database UUID, which can be read by running the `SHOW STORAGE INFO`
-query on a particular database.
+Every database has its own database UUID, which can be read by running the `SHOW
+STORAGE INFO` query on a particular database.
## Database configuration and data directory
@@ -119,20 +135,103 @@ based on configuration.
Users interact with multi-tenant features through specialized Cypher queries:
1. `CREATE DATABASE name`: Creates a new database.
-2. `DROP DATABASE name`: Deletes a specified database.
-3. `SHOW DATABASE`: Shows the current used database. It will return `NULL` if
- no database is currently in use. You can also use `SHOW CURRENT DATABASE` for the same functionality. This command does not require any special privileges.
-
-4. `SHOW DATABASES`: Shows only the existing set of multitenant databases.
-5. `USE DATABASE name`: Switches focus to a specific database (disabled during
+2. `DROP DATABASE name [FORCE]`: Deletes a specified database.
+3. `RENAME DATABASE old_name TO new_name`: Renames a database.
+4. `SHOW DATABASE`: Shows the currently used database. It will return `NULL` if no
+ database is currently in use. You can also use `SHOW CURRENT DATABASE` for
+ the same functionality. This command does not require any special privileges.
+
+5. `SHOW DATABASES`: Shows only the existing set of multitenant databases.
+6. `USE DATABASE name`: Switches focus to a specific database (disabled during
transactions).
-6. `GRANT DATABASE name TO user`: Grants a user access to a specified database.
-7. `DENY DATABASE name FROM user`: Denies a user's access to a specified
+7. `GRANT DATABASE name TO user`: Grants a user access to a specified database.
+8. `DENY DATABASE name FROM user`: Denies a user's access to a specified
database.
-8. `REVOKE DATABASE name FROM user`: Removes database from user's authentication
+9. `REVOKE DATABASE name FROM user`: Removes database from user's authentication
context.
-9. `SET MAIN DATABASE name FOR user`: Sets a user's default (landing) database.
-10. `SHOW DATABASE PRIVILEGES FOR user`: Lists a user's database access rights.
+10. `SET MAIN DATABASE name FOR user`: Sets a user's default (landing) database.
+11. `SHOW DATABASE PRIVILEGES FOR user`: Lists a user's database access rights.
+
+### DROP DATABASE with FORCE
+
+The `DROP DATABASE` command removes an existing database. You can optionally
+include the `FORCE` parameter to delete a database even when it has active
+connections or transactions.
+
+#### Syntax
+
+```cypher
+DROP DATABASE database_name [FORCE];
+```
+
+#### Behavior
+
+- **Without `FORCE`**: The command will fail if the database is currently in use
+ by any active connections or transactions.
+- **With `FORCE`**: The database will be immediately hidden from new connections,
+ but actual deletion is deferred until it's safe to proceed. All active
+ transactions using the database will be terminated.
+
+#### Use cases for FORCE
+
+- **Emergency cleanup**: Remove a database stuck in an inconsistent or
+ long-running state.
+- **Administrative maintenance**: Perform system maintenance requiring immediate
+ database removal.
+- **Development environments**: Quickly reset test environments that might still
+ have active connections.
+
+#### Privileges required
+
+Using the `FORCE` option requires:
+- `MULTI_DATABASE_EDIT` privilege
+- Access to the `memgraph` database
+- `TRANSACTION_MANAGEMENT` privilege (to terminate active transactions)
+
+#### Important considerations
+
+- All active transactions on the target database will be forcibly terminated.
+- The database becomes immediately unavailable to new connections.
+- Actual deletion may be deferred until existing connections are properly closed.
+- **This operation cannot be undone.**
+
+### RENAME DATABASE
+
+The `RENAME DATABASE` command allows you to rename an existing database to a new
+name. This simplifies administrative workflows by eliminating the need to create
+a new database, recover from a snapshot, and drop the old database.
+
+#### Syntax
+
+```cypher
+RENAME DATABASE old_name TO new_name;
+```
+
+#### Behavior
+
+- The database is **renamed immediately** without requiring unique access.
+- If you are currently using the database being renamed, the current database
+ context is automatically updated to the new name.
+- All existing data, indexes, constraints, and other database objects are
+ preserved.
+
+
+The current implementation of `RENAME DATABASE` does not update auth data.
+User/role database access and database-specific role information is not
+updated. This can lead to unintended access to databases.
+
+
+
+#### Important considerations
+
+- The `RENAME DATABASE` command requires the `MULTI_DATABASE_EDIT` privilege and
+ access to the `memgraph` database.
+- The new database name must not already exist.
+- The old database name must exist.
+- This operation cannot be undone once completed.
+- All active connections to the database will continue to work seamlessly with
+ the new name.
+
### User's main database
@@ -148,13 +247,21 @@ unified source of truth. A single user can access multiple databases with a
global set of privileges, but currently, per-database privileges cannot be
granted.
+
+
+User-role mappings are simple maps stored on the user object. Deleting or renaming a database will not update this information, so the admin needs to make sure that correct access is maintained at all times.
+
+
+
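+For example, after renaming a database, access has to be re-granted manually
+under the new name (the database and user names here are illustrative):
+
+```cypher
+RENAME DATABASE tenant1 TO tenant1_v2;
+GRANT DATABASE tenant1_v2 TO tenant_user;
+```
+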
Access to all databases can be granted or revoked using wildcards:
`GRANT DATABASE * TO user;`, `DENY DATABASE * TO user;` or
`REVOKE DATABASE * FROM user;`.
### Multi-database queries and the memgraph database
-As of Memgraph v3.5 multi-database queries (such as `SHOW DATABASES`, `CREATE DATABASE`, `DROP DATABASE`, etc.) target the "memgraph" database and require access to it.
+As of Memgraph v3.5, multi-database queries (such as `SHOW DATABASES`, `CREATE
+DATABASE`, `DROP DATABASE`, `RENAME DATABASE`, etc.) target the "memgraph"
+database and require access to it.
To execute these queries, users must have:
- The appropriate privileges (`MULTI_DATABASE_USE`, `MULTI_DATABASE_EDIT`)
@@ -162,15 +269,21 @@ To execute these queries, users must have:
### Multi-tenant query syntax changes
-As of Memgraph v3.5 the syntax for certain queries in multi-tenant environments have changed. The `SHOW ROLE` and `SHOW PRIVILEGES` commands now require specifying the database context in some cases.
+As of Memgraph v3.5, the syntax for certain queries in multi-tenant environments
+has changed. The `SHOW ROLE` and `SHOW PRIVILEGES` commands now require
+specifying the database context in some cases.
-**SHOW ROLE FOR USER**: This command does not require database specification and will show all roles assigned to the user across all databases.
+**SHOW ROLE FOR USER**: This command does not require database specification and
+will show all roles assigned to the user across all databases.
-**SHOW PRIVILEGES FOR USER**: This command requires database specification in multi-tenant environments.
+**SHOW PRIVILEGES FOR USER**: This command requires database specification in
+multi-tenant environments.
-**SHOW PRIVILEGES FOR ROLE**: This command does not require database specification and will show all privileges for the role.
+**SHOW PRIVILEGES FOR ROLE**: This command does not require database
+specification and will show all privileges for the role.
-In multi-tenant environments, you must specify which database context to use when showing privileges for users:
+In multi-tenant environments, you must specify which database context to use
+when showing privileges for users:
1. **Show roles for the user's main database:**
```cypher
@@ -206,11 +319,18 @@ SHOW PRIVILEGES FOR user_or_role ON CURRENT;
SHOW PRIVILEGES FOR user_or_role ON DATABASE database_name;
```
-These commands return the aggregated roles and privileges for the user in the specified database context. The `ON MAIN` option shows information for the user's main database, `ON CURRENT` shows information for whatever database is currently active, and `ON DATABASE` shows information for the explicitly specified database.
+These commands return the aggregated roles and privileges for the user in the
+specified database context. The `ON MAIN` option shows information for the
+user's main database, `ON CURRENT` shows information for whatever database is
+currently active, and `ON DATABASE` shows information for the explicitly
+specified database.
#### Impact on multi-tenant environments
-In multi-tenant environments where users might not have access to the "memgraph" database, multi-database management operations will fail. This reinforces the recommendation to treat the "memgraph" database as an administrative/system database.
+In multi-tenant environments where users might not have access to the "memgraph"
+database, multi-database management operations will fail. This reinforces the
+recommendation to treat the "memgraph" database as an administrative/system
+database.
#### Example: Admin user with multi-database privileges
@@ -226,13 +346,18 @@ SET ROLE FOR db_admin TO multi_db_admin;
```
In this setup, `db_admin` can:
-- Execute all multi-database queries (`SHOW DATABASES`, `CREATE DATABASE`, etc.)
+- Execute all multi-database queries (`SHOW DATABASES`, `CREATE DATABASE`, `DROP
+ DATABASE`, `RENAME DATABASE`, etc.)
- Access the "memgraph" database for administrative operations
- Manage the multi-tenant database configuration
#### Best practice
-For multi-database management, ensure that users who need to perform multi-database operations have both the appropriate multi-database privileges and access to the "memgraph" database. This aligns with the overall recommendation to treat the "memgraph" database as an administrative database in multi-tenant environments.
+For multi-database management, ensure that users who need to perform
+multi-database operations have both the appropriate multi-database privileges
+and access to the "memgraph" database. This aligns with the overall
+recommendation to treat the "memgraph" database as an administrative database in
+multi-tenant environments.
### Additional multi-tenant privileges
diff --git a/pages/deployment/workloads/memgraph-in-cybersecurity.mdx b/pages/deployment/workloads/memgraph-in-cybersecurity.mdx
index d95328aba..38266c935 100644
--- a/pages/deployment/workloads/memgraph-in-cybersecurity.mdx
+++ b/pages/deployment/workloads/memgraph-in-cybersecurity.mdx
@@ -281,4 +281,21 @@ SET n += props;
This function **parses a JSON-formatted string into a Cypher map**, making it very useful for flexible security event ingestion pipelines
where the event structure might vary slightly or be semi-structured.
+### Setting nested properties
+
+Cybersecurity data often consists of nested objects (such as cloud security configurations) that are efficiently stored as maps. Many graph
+database vendors do not support nested JSON objects and can only store them as strings within the property store. Memgraph, however, provides *full support
+for nested objects*, including the ability to update them directly using queries such as the following:
+
+```cypher
+MATCH (n:Node {id: 1}) SET n.details.created_at = date(), n.details.ip = '127.0.0.1';
+```
+
+This approach keeps the configuration schema consistent with the original data sources powering your cybersecurity solution, eliminating the need for
+manual and time-consuming graph modeling to represent configurations. In many cases, these configurations are so tightly coupled to the underlying objects
+that there is no real need to separate them into distinct nodes and relationships. Attempting to do so can lead to *graph explosion* due to the large number
+of values contained within nested configuration objects.
+
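+A hypothetical node carrying such a nested configuration map (the values are
+illustrative):
+
+```cypher
+CREATE (:Node {id: 1, details: {region: 'eu-west-1', encryption: {at_rest: true, in_transit: true}}});
+```
+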
+For more information, read the [guide on setting nested properties](/querying/clauses/set#9-setting-nested-properties).
+
diff --git a/pages/deployment/workloads/memgraph-in-graphrag.mdx b/pages/deployment/workloads/memgraph-in-graphrag.mdx
index 086ec6fe6..23a25e634 100644
--- a/pages/deployment/workloads/memgraph-in-graphrag.mdx
+++ b/pages/deployment/workloads/memgraph-in-graphrag.mdx
@@ -172,6 +172,13 @@ This command retrieves the graph schema in **constant time**, enabling the LLM t
minimal overhead. It’s a key part of the retrieval pipeline that helps bridge natural language questions and
structured graph queries.
+Additionally, `SHOW SCHEMA INFO;` is integrated with [fine-grained access control](/querying/schema#schema-with-memgraph-enterprise),
+allowing you to restrict specific users or agents from viewing the full graph schema. This is particularly useful
+when working with LLMs, as it helps reduce schema noise in large enterprise graphs with many use cases.
+Limiting the visible schema ensures that the LLM focuses on relevant parts of the graph, improving its ability to
+generate meaningful answers or queries.
+
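+As a sketch, such a restriction might give an LLM agent's role read access only
+to selected labels (the role and label names are illustrative):
+
+```cypher
+CREATE ROLE agent_role;
+GRANT READ ON LABELS :PublicEntity TO agent_role;
+```
+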
While there's no single query pattern that fits all GraphRAG use cases. Since LLMs generate a wide variety of
questions, **multi-hop queries** (i.e., traversals across multiple relationships) stand to benefit significantly from
Memgraph’s **in-memory architecture**, offering fast and consistent response times even under complex traversal logic.
diff --git a/pages/fundamentals/constraints.mdx b/pages/fundamentals/constraints.mdx
index 8889057f8..503e74db9 100644
--- a/pages/fundamentals/constraints.mdx
+++ b/pages/fundamentals/constraints.mdx
@@ -274,13 +274,37 @@ DROP CONSTRAINT ON (n:Person) ASSERT n.age IS TYPED INTEGER;
Now, `SHOW CONSTRAINT INFO;` returns an empty set.
+## Drop all constraints
+
+The `DROP ALL CONSTRAINTS` clause allows you to delete all constraints in your
+database in a single operation. This includes all types of constraints:
+- existence constraints
+- unique constraints
+- type constraints
+
+```cypher
+DROP ALL CONSTRAINTS;
+```
+
## Schema-related procedures
-You can also modify the constraints using the [`schema.assert()` procedure](/querying/schema#assert).
+You can also modify the constraints using the [`schema.assert()`
+procedure](/querying/schema#assert).
+
+### Delete all constraints via schema.assert
-### Delete all constraints
+
+
+Introduced in version 3.6, the `DROP ALL CONSTRAINTS` command is the preferred
+way to remove all constraints, since it works across all constraint types. This
+makes it a better choice than the
+[`schema.assert()`](/querying/schema#assert) procedure.
+
+
+
-To delete all constraints, use the [`schema.assert()`](/querying/schema#assert) procedure with the following parameters:
+To delete all constraints, use the [`schema.assert()`](/querying/schema#assert)
+procedure with the following parameters:
- `indices_map` = map of key-value pairs of all indexes in the database
- `unique_constraints` = `{}`
- `existence_constraints` = `{}`
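+
+A sketch of such a call (here the first argument is an empty map, which would
+also drop all indexes; pass your actual indexes map as the first argument to
+keep them):
+
+```cypher
+CALL schema.assert({}, {}, {}, true);
+```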
diff --git a/pages/fundamentals/data-durability.mdx b/pages/fundamentals/data-durability.mdx
index ba90845d3..79679577b 100644
--- a/pages/fundamentals/data-durability.mdx
+++ b/pages/fundamentals/data-durability.mdx
@@ -18,6 +18,9 @@ These mechanisms generate **durability files** and save them in the respective
`wal` and `snapshots` folders in the **data directory**. Data directory stores
permanent data on disk.
+Memgraph **cannot be used with only WAL files enabled**. You can have either
+only snapshots, or both snapshots and WAL files.
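+
+For example, the two valid combinations sketched as startup flags (the values
+are illustrative):
+
+```bash
+# Snapshots only:
+memgraph --storage-snapshot-interval-sec=300 --storage-wal-enabled=false
+
+# Snapshots and WAL files:
+memgraph --storage-snapshot-interval-sec=300 --storage-wal-enabled=true
+```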
+
The default data directory path is `/var/lib/memgraph` but the path can be
changed by modifying the `--data-directory` configuration flag. To learn how to
modify configuration flags, head over to the
@@ -87,6 +90,11 @@ on the value of the `--storage-snapshot-on-exit` configuration flag. When a
snapshot creation is triggered, the entire data storage is written to the drive.
Nodes and relationships are divided into groups called batches.
+
+
+If both flags `--storage-snapshot-interval` and `--storage-snapshot-interval-sec` are defined, the flag `--storage-snapshot-interval` will be used.
+
+
Snapshot creation can be made faster by using **multiple threads**. See [Parallelized execution](#parallelized-execution) for more information.
On startup, the database state is recovered from the most recent snapshot file.
diff --git a/pages/fundamentals/indexes.mdx b/pages/fundamentals/indexes.mdx
index 98667c8e8..5802e394f 100644
--- a/pages/fundamentals/indexes.mdx
+++ b/pages/fundamentals/indexes.mdx
@@ -655,8 +655,17 @@ DROP GLOBAL EDGE INDEX ON :(property_name);
DROP POINT INDEX ON :Label(property);
```
-These queries instruct all active transactions to abort as soon as possible.
-Once all transactions have finished, the index will be deleted.
+### Drop all indexes
+
+The `DROP ALL INDEXES` clause allows you to delete all indexes in your database
+in a single operation. This includes all types of indexes: label, label-property,
+edge-type, edge-type property, global edge, point, text, vector, and vector edge
+indexes.
+
+```cypher
+DROP ALL INDEXES;
+```
### Delete all node indexes
@@ -665,6 +674,15 @@ The `schema.assert()` procedure will not drop edge and point indexes.
Our plan is to update it, and you can track the progress on our [GitHub](https://github.com/memgraph/memgraph/issues/2462).
+
+
+Introduced in version 3.6, the `DROP ALL INDEXES` command is the preferred way
+to remove all indexes, since it works across all index types. This makes it a
+better choice than the
+[`schema.assert()`](/querying/schema#assert) procedure.
+
+
+
To delete all indexes, use the [`schema.assert()`](/querying/schema#assert) procedure with the following parameters:
- `indices_map` = `{}`
- `unique_constraints` = map of key-value pairs of all uniqueness constraints in the database
diff --git a/pages/fundamentals/telemetry.mdx b/pages/fundamentals/telemetry.mdx
index 0fc8bec1d..2daa7a74a 100644
--- a/pages/fundamentals/telemetry.mdx
+++ b/pages/fundamentals/telemetry.mdx
@@ -52,6 +52,15 @@ available, the following data will be sent to and stored on Memgraph's servers.
- Query module calls - **Only the names** of the query module and procedure are recorded.
+**High availability cluster information:**
+ - The number of strict sync, sync and asynchronous replicas (retrieved from the current main).
+ - The number of coordinators in the cluster.
+ - Configuration options: `instance_down_timeout_sec`, `instance_health_check_frequency_sec`, `enabled_reads_on_main`, `sync_failover_only`.
+
+**Running environment:**
+ - Whether Memgraph is running in K8s or elsewhere.
+
+
No personal information is sent in the process of collecting telemetry data.
Each database generates a unique identifier by which data coming from the same
database instance is grouped. This unique identifier is in no way connected to
diff --git a/pages/getting-started/install-memgraph/kubernetes.mdx b/pages/getting-started/install-memgraph/kubernetes.mdx
index 688d2b95c..93c67e752 100644
--- a/pages/getting-started/install-memgraph/kubernetes.mdx
+++ b/pages/getting-started/install-memgraph/kubernetes.mdx
@@ -256,14 +256,17 @@ their default values.
| `serviceAccount.annotations` | Annotations to add to the service account | `{}` |
| `serviceAccount.name` | The name of the service account to use. If not set and create is true, a name is generated. | `""` |
| `container.terminationGracePeriodSeconds` | Grace period for pod termination | `1800` |
+| `container.livenessProbe.exec` | If defined, it will be used instead of the default TCP probe. | `""` |
| `container.livenessProbe.tcpSocket.port` | Port used for TCP connection. Should be the same as bolt port. | `7687` |
| `container.livenessProbe.failureThreshold` | Failure threshold for liveness probe | `20` |
| `container.livenessProbe.timeoutSeconds` | Initial delay for readiness probe | `10` |
| `container.livenessProbe.periodSeconds` | Period seconds for readiness probe | `5` |
+| `container.readinessProbe.exec` | If defined, it will be used instead of the default TCP probe. | `""` |
| `container.readinessProbe.tcpSocket.port` | Port used for TCP connection. Should be the same as bolt port. | `7687` |
| `container.readinessProbe.failureThreshold` | Failure threshold for readiness probe | `20` |
| `container.readinessProbe.timeoutSeconds` | Initial delay for readiness probe | `10` |
| `container.readinessProbe.periodSeconds` | Period seconds for readiness probe | `5` |
+| `container.startupProbe.exec` | If defined, it will be used instead of the default TCP probe. | `""` |
| `container.startupProbe.tcpSocket.port` | Port used for TCP connection. Should be the same as bolt port. | `7687` |
| `container.startupProbe.failureThreshold` | Failure threshold for startup probe | `1440` |
| `container.startupProbe.periodSeconds` | Period seconds for startup probe | `10` |
@@ -282,6 +285,10 @@ their default values.
| `sysctlInitContainer.image.repository` | Busybox image repository | `library/busybox` |
| `sysctlInitContainer.image.tag` | Specific tag for the Busybox Docker image | `latest` |
| `sysctlInitContainer.image.pullPolicy` | Image pull policy for busybox | `IfNotPresent` |
+| `lifecycleHooks` | Lifecycle hooks for the Memgraph container(s), to automate configuration before or after startup | `[]` |
+| `extraVolumes` | Optionally specify extra list of additional volumes | `[]` |
+| `extraVolumeMounts` | Optionally specify extra list of additional volumeMounts | `[]` |
+| `extraEnv` | Env variables that users can define | `[]` |
To change the default chart values, provide your own `values.yaml` file during
the installation:
@@ -322,7 +329,8 @@ reference guide](/database-management/configuration).
A Helm chart for deploying Memgraph in [high availability (HA)
setup](/clustering/high-availability). This helm chart requires [Memgraph
-Enterprise license](/database-management/enabling-memgraph-enterprise).
+Enterprise license](/database-management/enabling-memgraph-enterprise). We recommend reading
+the documentation about [high availability](https://memgraph.com/docs/clustering/high-availability) in Memgraph.
Memgraph HA cluster includes 3 coordinators and 2 data instances by default. Since
multiple Memgraph instances are used, it is advised to use multiple workers nodes in Kubernetes.
@@ -330,6 +338,7 @@ Our advice is that each Memgraph instance gets on its own node. The size of node
data pods will reside depends on the computing power and the memory you need to store data.
Coordinator nodes can be smaller and machines with basic requirements met (8-16 GB of RAM) will be enough.
+
### Installing the Memgraph HA Helm chart
To include Memgraph HA cluster as a part of your Kubernetes cluster, you need to
@@ -556,13 +565,6 @@ kubectl create secret generic memgraph-secrets --from-literal=USER=memgraph --fr
The same user will then be created on all coordinator and data instances through Memgraph's environment variables.
-### Monitoring
-
-HA chart supports monitoring through Prometheus. The basic configuration template is provided in `values.yaml` file which allows you to specify whether or not you
-want to use Prometheus and through which type of service you want your Prometheus server to be exposed. We use `prometheus-community` [chart](https://github.com/prometheus-community/helm-charts)
-as a dependency set-up through Chart.yaml file.
-
-
### Setting up the cluster
Although there are many configuration options you can use to set up HA cluster (especially for networking), the set-up process
@@ -769,12 +771,17 @@ The following table lists the configurable parameters of the Memgraph HA chart a
| `prometheus.memgraphExporter.pullFrequencySeconds` | How often will Memgraph's Prometheus exporter pull data from Memgraph instances. | `5` |
| `prometheus.memgraphExporter.repository` | The repository where Memgraph's Prometheus exporter image is available. | `memgraph/prometheus-exporter` |
| `prometheus.memgraphExporter.tag` | The tag of Memgraph's Prometheus exporter image. | `0.2.1` |
+| `prometheus.serviceMonitor.enabled` | If enabled, a `ServiceMonitor` object will be deployed. | `true` |
| `prometheus.serviceMonitor.kubePrometheusStackReleaseName` | The release name under which `kube-prometheus-stack` chart is installed. | `kube-prometheus-stack` |
| `prometheus.serviceMonitor.interval` | How often will Prometheus pull data from Memgraph's Prometheus exporter. | `15s` |
| `labels.coordinators.podLabels` | Enables you to set labels on a pod level. | `{}` |
| `labels.coordinators.statefulSetLabels` | Enables you to set labels on a stateful set level. | `{}` |
| `labels.coordinators.serviceLabels` | Enables you to set labels on a service level. | `{}` |
| `updateStrategy.type` | Update strategy for StatefulSets. Possible values are `RollingUpdate` and `OnDelete` | `RollingUpdate` |
+| `extraEnv.data` | Env variables that users can define and are applied to data instances | `[]` |
+| `extraEnv.coordinators` | Env variables that users can define and are applied to coordinators | `[]` |
+| `initContainers.data` | Init containers that users can define that will be applied to data instances. | `[]` |
+| `initContainers.coordinators` | Init containers that users can define that will be applied to coordinators. | `[]` |
For the `data` and `coordinators` sections, each item in the list has the following parameters:
diff --git a/pages/querying/clauses.mdx b/pages/querying/clauses.mdx
index 41f8a74aa..06c40b728 100644
--- a/pages/querying/clauses.mdx
+++ b/pages/querying/clauses.mdx
@@ -13,7 +13,7 @@ using the following clauses:
* [`CASE`](/querying/clauses/case) - allows the creation of conditional expressions
* [`CREATE`](/querying/clauses/create) - creates new nodes and relationships
* [`DELETE`](/querying/clauses/delete) - deletes nodes and relationships
- * [`DROP GRAPH`](/querying/clauses/drop-graph) - delete all the data, along with all the indices, constraints, triggers, and streams
+ * [`DROP`](/querying/clauses/drop) - drop all constraints, indexes, or the entire database
* [`EXPLAIN`](/querying/clauses/explain) - inspect a particular Cypher query in order to see its execution plan.
* [`FOREACH`](/querying/clauses/foreach) - iterates over a list of elements and stores each element inside a variable
* [`LOAD CSV`](/data-migration/csv) - loads data from CSV file
diff --git a/pages/querying/clauses/_meta.ts b/pages/querying/clauses/_meta.ts
index 2b61dc9c9..ebd1d59a5 100644
--- a/pages/querying/clauses/_meta.ts
+++ b/pages/querying/clauses/_meta.ts
@@ -4,7 +4,7 @@ export default {
"case": "CASE",
"create": "CREATE",
"delete": "DELETE",
- "drop-graph": "DROP GRAPH",
+ "drop": "DROP",
"explain": "EXPLAIN",
"foreach": "FOREACH",
"load-csv": "LOAD CSV",
diff --git a/pages/querying/clauses/drop-graph.mdx b/pages/querying/clauses/drop-graph.mdx
deleted file mode 100644
index 0503d0bbd..000000000
--- a/pages/querying/clauses/drop-graph.mdx
+++ /dev/null
@@ -1,24 +0,0 @@
----
-title: DROP GRAPH
-description: Learn to delete all the data, along with all the indices, constraints, triggers, and streams, in a very efficient manner using the DROP GRAPH clause in Memgraph.
----
-
-# DROP GRAPH
-
-In the [analytical
-mode](/fundamentals/storage-memory-usage#in-memory-analytical-storage-mode), the
-fastest way to delete everything in your current database is by issuing the
-following query:
-
-```cypher
-DROP GRAPH;
-```
-
-The query allows the user to delete all the data, along with all the indices,
-constraints, triggers, and streams, in a very efficient manner. For this query
-to succeed, the user must ensure that Memgraph is operating in the
-`IN_MEMORY_ANALYTICAL` storage mode. The user must also ensure that `DROP GRAPH`
-is the only transaction running at the time of issuing the command. If any other
-Cypher query is running when `DROP GRAPH` starts to execute, it will prevent
-this command from running until all the other queries are finished.
-
diff --git a/pages/querying/clauses/drop.mdx b/pages/querying/clauses/drop.mdx
new file mode 100644
index 000000000..33f2ece99
--- /dev/null
+++ b/pages/querying/clauses/drop.mdx
@@ -0,0 +1,85 @@
+---
+title: DROP
+description: Learn to delete all indexes, constraints, or the entire graph data.
+---
+
+# DROP
+
+The `DROP` commands in Memgraph provide a way to remove different types of
+schema elements and data from your database. Depending on your needs, you can
+choose to remove only specific structures such as indexes or constraints, or
+perform a complete reset of the entire database.
+
+
+Memgraph supports several `DROP` operations:
+
+- `DROP ALL INDEXES` – removes every index in the database.
+- `DROP ALL CONSTRAINTS` – removes all defined constraints.
+- `DROP GRAPH` – clears the entire database, including data, indexes, constraints,
+triggers, and streams, in one operation (analytical mode only).
+
+Each of these operations serves a different purpose, ranging from cleaning up
+schema elements to performing a complete reset. The following sections describe
+each command in detail.
+
+## DROP ALL INDEXES
+
+The `DROP ALL INDEXES` clause allows you to delete all indices in your database
+in a single operation. This includes all types of indices: label indices,
+label-property indices, edge type indices, edge type-property indices, global
+edge indices, point indices, text indices, vector indices, and vector edge
+indices.
+
+Removing all indexes can be useful if you want to redefine indexing strategies
+or if you are about to load data in bulk and recreate indexes afterward.
+
+**Syntax:**
+
+```cypher
+DROP ALL INDEXES;
+```
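+
+For example, a bulk-load workflow might look like the following sketch (the
+CSV path and the `:Person(name)` index are illustrative assumptions, not part
+of the clause itself):
+
+```cypher
+DROP ALL INDEXES;
+
+// Bulk-load the data without index-maintenance overhead.
+LOAD CSV FROM "/data/people.csv" WITH HEADER AS row
+CREATE (:Person {name: row.name});
+
+// Recreate only the indexes you still need.
+CREATE INDEX ON :Person(name);
+```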
+
+## DROP ALL CONSTRAINTS
+
+The `DROP ALL CONSTRAINTS` clause allows you to delete all constraints in your
+database in a single operation. This includes all types of constraints:
+- existence constraints
+- unique constraints
+- type constraints
+
+This operation is particularly helpful when you want to redefine your schema
+rules or prepare the database for significant restructuring.
+
+**Syntax:**
+
+```cypher
+DROP ALL CONSTRAINTS;
+```
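+
+To confirm the result, you can list the remaining constraints; a minimal
+sketch:
+
+```cypher
+DROP ALL CONSTRAINTS;
+SHOW CONSTRAINT INFO;  // should now return no rows
+```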
+
+## DROP GRAPH
+
+While dropping indexes and constraints clears only schema definitions, the `DROP
+GRAPH` command provides a much more comprehensive reset.
+
+In the [analytical
+mode](/fundamentals/storage-memory-usage#in-memory-analytical-storage-mode), the
+fastest way to delete everything in your current database is by issuing the
+following query:
+
+```cypher
+DROP GRAPH;
+```
+
+The query allows the user to delete **all data**, along with **all indices**,
+**constraints**, **triggers**, and **streams**, in a very efficient manner.
+
+To ensure successful execution:
+- Memgraph must be operating in the `IN_MEMORY_ANALYTICAL` storage mode.
+- `DROP GRAPH` must be the only transaction running at the time of execution. If
+any other Cypher queries are running, they must finish before this command can
+proceed.
+
+This command is ideal when you need a fresh start without manually removing data
+or schema elements.
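+
+If the database is currently in transactional mode, a minimal sketch of the
+full reset looks like this:
+
+```cypher
+STORAGE MODE IN_MEMORY_ANALYTICAL;
+DROP GRAPH;
+```
+
+Afterwards, you can switch back with `STORAGE MODE IN_MEMORY_TRANSACTIONAL;`
+if needed.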
+
+
diff --git a/pages/querying/clauses/set.md b/pages/querying/clauses/set.md
index 828345054..cac29fe84 100644
--- a/pages/querying/clauses/set.md
+++ b/pages/querying/clauses/set.md
@@ -14,7 +14,9 @@ The `SET` clause is used to update labels on nodes and properties on nodes and r
5. [Remove a property](#5-remove-a-property)
6. [Copy all properties](#6-copy-all-properties)
7. [Replace all properties using map](#7-replace-all-properties-using-map)
-8. [Update all properties using map](#8-update-all-properties-using-map)
+8. [Update all properties using map](#8-update-all-properties-using-map)
+9. [Setting nested properties](#9-setting-nested-properties)
+10. [Removing nested properties](#10-removing-nested-properties)
## Dataset
@@ -221,6 +223,72 @@ Output:
+-----------------------------------------------------------------------------------------------+
```
+## 9. Setting nested properties
+
+Starting from **version 3.6**, Memgraph supports **nested properties**. Nested properties allow you to define and modify values inside `Map` property types.
+Before nested property support was introduced, users could only set top-level properties using queries such as:
+```cypher
+MATCH (n:Person {name: 'Harry'}) SET n.age = 21;
+```
+
+With nested property support, you can now **set properties inside a map**, such as:
+```cypher
+MATCH (n:Person {name: 'Harry'}) SET n.details.age = 21;
+```
+
+If the `details` property does not already exist, Memgraph automatically creates it as a map and assigns the nested property within it.
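+
+For example, the following sketch (assuming a fresh `:Person` node named
+'Ron' with no `details` property) creates the map on the fly:
+
+```cypher
+CREATE (:Person {name: 'Ron'});
+MATCH (n:Person {name: 'Ron'}) SET n.details.age = 17;
+MATCH (n:Person {name: 'Ron'}) RETURN n.details;  // {age: 17}
+```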
+
+This feature is especially useful when working with configuration objects or when optimizing graph storage, since maps typically consume less memory than multiple node or relationship objects.
+
+You can query a nested property the same way you would any other:
+
+```cypher
+MATCH (n:Person {name: 'Harry'})
+RETURN n.details.age AS age;
+```
+
+Output:
+
+```nocopy
++-----+
+| age |
++-----+
+| 21 |
++-----+
+```
+
+There are a few edge cases to keep in mind when working with nested properties.
+If the parent property exists but is not of type `Map`, the query will **throw an exception**:
+```cypher
+MATCH (n:Person {name: 'Harry'}) SET n.name.surname = 'Johnson' // ERROR because n.name is a string, not a map
+```
+
+{ Appending to nested properties
}
+
+You can also append to **existing map properties** using the `+=` operator:
+
+```cypher
+MATCH (n:Person {name: 'Harry'}) SET n.details += {age: 21};
+```
+
+When using this syntax:
+- The **right-hand side** must be a `Map`.
+- The **left-hand side** must also be a `Map` (and must exist).
+
+If either side is not a map, Memgraph will throw an exception.
+This ensures that map merging is always type-safe and consistent.
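+
+A short sketch of the merge behavior, assuming `details` already holds
+`{city: 'London'}`:
+
+```cypher
+MATCH (n:Person {name: 'Harry'})
+SET n.details += {age: 21}
+RETURN n.details;  // {city: 'London', age: 21}
+```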
+
+## 10. Removing nested properties
+
+Starting from version 3.6, Memgraph also supports removing nested properties for easier manipulation of map objects
+within the node or relationship property store. The following query performs nested property removal:
+```cypher
+MATCH (n:Person {name: 'Harry'}) REMOVE n.details.age;
+```
+
+This removes only the specified nested property (`age`) while preserving all other keys in the parent map (`details`).
+
+If the property does not exist, Memgraph does not throw an exception; the behavior matches that of removing top-level properties.
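+
+For example, assuming `details` currently holds `{city: 'London', age: 21}`:
+
+```cypher
+MATCH (n:Person {name: 'Harry'})
+REMOVE n.details.age
+RETURN n.details;  // {city: 'London'}, the other keys are preserved
+```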
+
## Dataset queries
We encourage you to try out the examples by yourself.
diff --git a/pages/querying/clauses/where.md b/pages/querying/clauses/where.md
index f8459d8de..1999994c0 100644
--- a/pages/querying/clauses/where.md
+++ b/pages/querying/clauses/where.md
@@ -22,7 +22,8 @@ order to avoid problems with performance or results.
1.4. [Filter with node properties](#14-filter-with-node-properties)
1.5. [Filter with relationship properties](#15-filter-with-relationship-properties)
1.6. [Check if property is not null](#16-check-if-property-is-not-null)
- 1.7. [Filter with pattern expressions](#17-filter-with-pattern-expressions)
+ 1.7. [Filter with EXISTS expressions](#17-filter-with-exists-expressions)
+ 1.8. [Filter with pattern expressions](#18-filter-with-pattern-expressions)
2. [String matching](#2-string-matching)
3. [Regular Expressions](#3-regular-expressions)
4. [Existential subqueries](#4-existential-subqueries)
@@ -166,9 +167,9 @@ Output:
+----------------+----------------+
```
-### 1.7. Filter with pattern expressions
+### 1.7. Filter with EXISTS expressions
-Currently, we support pattern expression filters with the `exists(pattern)`
+Currently, we support `EXISTS` expression filters with the `exists(pattern)`
function, which can perform filters based on neighboring entities:
```cypher
@@ -191,6 +192,30 @@ Output:
+----------------+
```
+### 1.8. Filter with pattern expressions
+
+Currently, we support pattern expression filters inside the `WHERE` clause.
+
+```cypher
+MATCH (p:Person)
+WHERE (p)-[:LIVING_IN]->(:Country {name: 'Germany'})
+RETURN p.name
+ORDER BY p.name;
+```
+
+Output:
+
+```nocopy
++----------------+
+| p.name         |
++----------------+
+| Anna |
+| John |
++----------------+
+```
+
+Pattern expressions can't introduce new symbols; they may only use symbols already bound by the preceding `MATCH`.
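+
+For instance, the following sketch fails because `c` is a new symbol that is
+introduced only inside the pattern expression:
+
+```cypher
+MATCH (p:Person)
+WHERE (p)-[:LIVING_IN]->(c:Country)  // error: c is not previously matched
+RETURN p.name;
+```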
+
## 2. String matching
Apart from comparison and concatenation operators Cypher provides special
diff --git a/pages/querying/differences-in-cypher-implementations.mdx b/pages/querying/differences-in-cypher-implementations.mdx
index b17f92dc9..2e451799f 100644
--- a/pages/querying/differences-in-cypher-implementations.mdx
+++ b/pages/querying/differences-in-cypher-implementations.mdx
@@ -187,6 +187,7 @@ Such a construct is not supported in Memgraph, but you can use `collect()` [aggr
{Patterns in expressions
}
Patterns in expressions are supported in Memgraph in particular functions, like `exists(pattern)`.
+Memgraph also supports filtering based on patterns, like `MATCH (n) WHERE NOT (n)-->()`.
In other cases, Memgraph does not yet support patterns in functions, e.g. `size((n)-->())`.
Most of the time, the same functionalities can be expressed differently in Memgraph
using `OPTIONAL` expansions, function calls, etc.
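+
+For example, a sketch of rewriting `size((n)-->())` with an `OPTIONAL`
+expansion:
+
+```cypher
+MATCH (n)
+OPTIONAL MATCH (n)-[r]->()
+RETURN n, count(r) AS out_degree;  // count(r) ignores nulls, so sinks yield 0
+```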
diff --git a/pages/querying/read-and-modify-data.md b/pages/querying/read-and-modify-data.md
index 53ff813a6..009fbc188 100644
--- a/pages/querying/read-and-modify-data.md
+++ b/pages/querying/read-and-modify-data.md
@@ -468,6 +468,37 @@ SET p1 = p2
RETURN p1, p2;
```
+#### Setting a nested property
+
+If the property of a node or relationship is a map, Memgraph supports setting nested
+properties within the map for more granular updates. The following command updates a nested
+property of a node:
+
+```cypher
+MATCH (h:Person {name: 'Harry'})
+SET h.details.age = 21;
+```
+
+If the map property does not exist beforehand, it will be created as a map containing the given nested properties.
+There are certain schema guarantees: you cannot modify a nested property if the parent property exists and is not of type `Map`.
+
+For more information, read the [guide on setting nested properties](/querying/clauses/set#9-setting-nested-properties).
+
+#### Removing a nested property
+
+Similar to setting nested properties, Memgraph supports removing nested properties for more fine-grained manipulation of object data.
+The following Cypher query deletes a nested property from a node:
+
+```cypher
+MATCH (h:Person {name: 'Harry'})
+REMOVE h.details.age;
+```
+
+If the leaf property does not exist, the command will not raise an exception.
+However, if any parent map along the property path does not exist, the query will result in a runtime exception.
+
+For more information, read the [guide on removing nested properties](/querying/clauses/set#10-removing-nested-properties).
+
#### Bulk update
You can use `SET` clause to do a bulk update. Here is an example of how to
diff --git a/pages/querying/schema.mdx b/pages/querying/schema.mdx
index dc3bbf31d..b7f042e64 100644
--- a/pages/querying/schema.mdx
+++ b/pages/querying/schema.mdx
@@ -257,6 +257,12 @@ Enabling `--schema-info-enabled` incurs a performance cost because additional wo
* __Edge Property Type Changes__: Modifying the type of an edge's property may require scanning large portions of the graph. To avoid this, define edges and their properties in the same transaction and keep property types stable.
* __Recovery Using WALs__: Recovering edges with properties via Write-Ahead Log (WAL) files can cause a significant performance hit. To mitigate this, use snapshots instead. Starting from v2.21, this issue has been alleviated, when using WAL files created by v2.21+.
+### Schema with Memgraph Enterprise
+
+`SHOW SCHEMA INFO;` returns the schema for the entire graph. However, if the system is configured with users that have [fine-grained access control](/database-management/authentication-and-authorization/role-based-access-control#label-based-access-control),
+the returned schema is filtered according to each user's visibility settings: just as a user can only see the portion of the graph permitted by fine-grained access control,
+`SHOW SCHEMA INFO;` returns only the nodes and relationships for which the user has `READ` permission.
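+
+As an illustration (the role name `analyst` is a hypothetical example), an
+admin could grant label-based read access like this, after which a user
+holding that role sees only `:Person` entries in the `SHOW SCHEMA INFO;`
+output:
+
+```cypher
+GRANT READ ON LABELS :Person TO analyst;
+```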
+
## Schema metadata
Schema metadata is a light weight run-time schema tracking. To enable schema metadata queries, start Memgraph with the `--storage-enable-schema-metadata` [configuration flag](/configuration/configuration-settings#storage) set to `True`. The flag facilitates the utilization of a specialized cache designed to store specific metadata related to the database.
diff --git a/pages/querying/text-search.mdx b/pages/querying/text-search.mdx
index 61f183c80..4d450fede 100644
--- a/pages/querying/text-search.mdx
+++ b/pages/querying/text-search.mdx
@@ -7,26 +7,35 @@ import { Callout } from 'nextra/components'
# Text search
-
+Text search allows you to look up nodes and edges whose properties contain specific text.
+To make a node or edge searchable, you must first create a text index for it.
+
+Text indices and search are powered by the
+[Tantivy](https://github.com/quickwit-oss/tantivy) full-text search engine.
-Text search is an [experimental
-feature](/database-management/experimental-features) introduced in Memgraph
-2.15.1. To use it, start Memgraph with the `--experimental-enabled=text-search`
-flag.
+
+Text search is no longer an experimental feature as of Memgraph version 3.6. You can use text search without any special configuration flags.
-Text search allows you to look up nodes with properties that contain specific content.
-For a node to be searchable, you first need to create a text index that applies
-to it.
+## Create text index
+Before you can use **text search**, you need to create a **text index**.
+Text indices are created using the `CREATE TEXT INDEX` command.
+To create the text index, you need to:
+1. **Provide a name** for the index.
+2. **Specify the label or edge type** the index applies to.
+3. (*Optional*) **Define which properties** should be indexed.
-Text indices and search are powered by the
-[Tantivy](https://github.com/quickwit-oss/tantivy) full-text search engine.
+{Create a text index on nodes
}
-## Create text indices
+```cypher
+CREATE TEXT INDEX text_index_name ON :Label;
+```
-Text indices are created with the `CREATE TEXT INDEX` command. You need to give
-a name to the new index and specify which labels it should apply to.
+{Create a text index on edges
}
+```cypher
+CREATE TEXT EDGE INDEX text_index_name ON :EDGE_TYPE;
+```
### Index all properties
@@ -51,11 +60,25 @@ For example, to create an index only on the `title` and `content` properties of
CREATE TEXT INDEX complianceDocuments ON :Report(title, content);
```
+### Edge text indices
+
+Text indices can also be created on edges. To create a text index on edges:
+
+```cypher
+CREATE TEXT EDGE INDEX edge_index_name ON :EDGE_TYPE;
+```
+
+You can also restrict an edge index to specific properties:
+
+```cypher
+CREATE TEXT EDGE INDEX edge_index_name ON :EDGE_TYPE(prop1, prop2);
+```
+
If you attempt to create an index with an existing name, the statement will fail.
### What is indexed
-For any given node, if a text index applies to it:
+For any given node or edge, if a text index applies to it:
- When no specific properties are listed, all properties with text-indexable types (`String`, `Integer`, `Float`, or `Boolean`) are stored.
- When specific properties are listed, only those properties (if they have text-indexable types) are stored.
@@ -65,12 +88,22 @@ Changes made within the same transaction are not visible to the index. To see yo
-## Show text indices
+## Run text search
+
+To run text search, you need to call `text_search` query module procedures.
+
+
+
+Unlike other index types, text indices are currently not used by the query planner.
+
+
+
+### Show text indices
To list all text indices in Memgraph, use the `SHOW INDEX INFO`
[statement](/fundamentals/indexes#show-created-indexes).
-## Query text indices
+### Query text index
@@ -79,27 +112,27 @@ For consistent results, avoid performing multiple identical searches within the
-Querying text indices is done through query procedures.
-
-
-
-Unlike other index types, text indices are not used by the query planner.
-
-
+Use the `text_search.search()` and `text_search.search_edges()` procedures to search for text within
+a text index. These procedures allow you to find nodes or edges that match
+your search query based on their indexed properties.
+{ Input:
}
-### Search in specific properties
+- `index_name: string` ➡ The text index to search.
+- `search_query: string` ➡ The query to search for in the index.
+- `limit: int (optional, default=1000)` ➡ The maximum number of results to return.
-The `text_search.search` procedure finds text-indexed nodes matching the given query.
+{ Output:
}
-{ Input:
}
+When the index is defined on nodes:
-- `index_name: String` - The text index to be searched.
-- `search_query: String` - The query applied to the text-indexed nodes.
+- `node: Node` ➡ A node in the text index matching the given query.
+- `score: double` ➡ The relevance score of the match. Higher scores indicate more relevant results.
-{ Output:
}
+When the index is defined on edges:
-- `node: Node` - A node in `index_name` matching the given `search_query`.
+- `edge: Relationship` ➡ An edge in the text index matching the given query.
+- `score: double` ➡ The relevance score of the match. Higher scores indicate more relevant results.
{ Usage:
}
@@ -107,13 +140,13 @@ The syntax for the `search_query` parameter is available
[here](https://docs.rs/tantivy/latest/tantivy/query/struct.QueryParser.html).
If the query contains property names, attach the `data.` prefix to them.
-The following query searches the `complianceDocuments` index for nodes with the
-value of `title` property containing `Rules2024`:
+```cypher
+CALL text_search.search("index_name", "data.title:Rules2024") YIELD node, score RETURN *;
+```
-```cypher
-CALL text_search.search("complianceDocuments", "data.title:Rules2024")
-YIELD node
-RETURN node;
+To query an index on edges, use:
+```cypher
+CALL text_search.search_edges("index_name", "data.title:Rules2024") YIELD edge, score RETURN *;
```
{Example
}
@@ -178,20 +211,29 @@ Result:
### Search over all indexed properties
-The `text_search.search_all` procedure looks for text-indexed nodes where at
+The `text_search.search_all` and `text_search.search_all_edges` procedures look for text-indexed nodes or edges where at
least one property value matches the given query.
-Unlike `text_search.search`, this procedure searches over all properties, and
+Unlike `text_search.search`, these procedures search over all properties, and
there is no need to specify property names in the query.
{ Input:
}
-- `index_name: String` - The text index to be searched.
-- `search_query: String` - The query applied to the text-indexed nodes.
+- `index_name: string` ➡ The text index to be searched.
+- `search_query: string` ➡ The query applied to the text-indexed nodes or edges.
+- `limit: int (optional, default=1000)` ➡ The maximum number of results to return.
{ Output:
}
-- `node: Node` - A node in `index_name` matching the given `search_query`.
+When the index is defined on nodes:
+
+- `node: Node` ➡ A node in `index_name` matching the given `search_query`.
+- `score: double` ➡ The relevance score of the match. Higher scores indicate more relevant results.
+
+When the index is defined on edges:
+
+- `edge: Relationship` ➡ An edge in `index_name` matching the given `search_query`.
+- `score: double` ➡ The relevance score of the match. Higher scores indicate more relevant results.
{ Usage:
}
@@ -204,6 +246,13 @@ YIELD node
RETURN node;
```
+To search edges:
+```cypher
+CALL text_search.search_all_edges("complianceEdges", "Rules2024")
+YIELD edge
+RETURN edge;
+```
+
{Example
}
```cypher
@@ -230,17 +279,26 @@ Result:
### Regex search
-The `text_search.regex_search` procedure looks for text-indexed nodes where at
+The `text_search.regex_search` and `text_search.regex_search_edges` procedures look for text-indexed nodes or edges where at
least one property value matches the given regular expression (regex).
{ Input:
}
-- `index_name: String` - The text index to be searched.
-- `search_query: String` - The regex applied to the text-indexed nodes.
+- `index_name: string` ➡ The text index to be searched.
+- `search_query: string` ➡ The regex applied to the text-indexed nodes or edges.
+- `limit: int (optional, default=1000)` ➡ The maximum number of results to return.
{ Output:
}
-- `node: Node` - A node in `index_name` matching the given `search_query`.
+When the index is defined on nodes:
+
+- `node: Node` ➡ A node in `index_name` matching the given `search_query`.
+- `score: double` ➡ The relevance score of the match. Higher scores indicate more relevant results.
+
+When the index is defined on edges:
+
+- `edge: Relationship` ➡ An edge in `index_name` matching the given `search_query`.
+- `score: double` ➡ The relevance score of the match. Higher scores indicate more relevant results.
{ Usage:
}
@@ -255,6 +313,13 @@ YIELD node
RETURN node;
```
+To search edges:
+```cypher
+CALL text_search.regex_search_edges("complianceEdges", "wor.*s")
+YIELD edge
+RETURN edge;
+```
+
{Example
}
```cypher
@@ -283,26 +348,27 @@ Result:
Aggregations allow you to perform calculations on text search results. By using
them, you can efficiently summarize the results, calculate averages or totals,
-identify min/max values, and count indexed nodes that meet specific criteria.
+identify min/max values, and count indexed nodes or edges that meet specific criteria.
-The `text_search.aggregate` procedure lets you define an aggregation and apply
+The `text_search.aggregate` and `text_search.aggregate_edges` procedures let you define an aggregation and apply
it to the results of a search query.
{ Input:
}
-- `index_name: String` - The text index to be searched.
-- `search_query: String` - The query applied to the text-indexed nodes.
-- `aggregation_query: String` - The aggregation (JSON-formatted) to be applied
+- `index_name: string` ➡ The text index to be searched.
+- `search_query: string` ➡ The query applied to the text-indexed nodes or edges.
+- `aggregation_query: string` ➡ The aggregation (JSON-formatted) to be applied
to the output of `search_query`.
+- `limit: int (optional, default=1000)` ➡ The maximum number of results to return.
{ Output:
}
-- `aggregation: String` - JSON-formatted string with the output of aggregation.
+- `aggregation: string` ➡ JSON-formatted string with the output of aggregation.
{ Usage:
}
Aggregation queries and results are strings with Elasticsearch-compatible JSON
-format, where `"field"` corresponds to node properties. If the search or
+format, where `"field"` corresponds to node or edge properties. If the search or
aggregation queries contain property names, attach the `data.` prefix to them.
The following query counts all nodes in the `complianceDocuments` index:
@@ -317,6 +383,17 @@ YIELD aggregation
RETURN aggregation;
```
+To aggregate edges:
+```cypher
+CALL text_search.aggregate_edges(
+ "complianceEdges",
+ "data.title:Rules2024",
+ '{"count": {"value_count": {"field": "data.version"}}}'
+)
+YIELD aggregation
+RETURN aggregation;
+```
+
{Example
}
```cypher
@@ -343,26 +420,22 @@ Result:
+-------------------------------+
```
-## Drop text indices
-
-Text indices are dropped with the `DROP TEXT INDEX` command. You need to give
-the name of the index to be deleted.
+## Drop text index
-This statement drops the text index named `complianceDocuments`:
+Text indices are dropped with the `DROP TEXT INDEX` command. You need to give the name of the index to be deleted.
-```cypher
-DROP TEXT INDEX complianceDocuments;
+```cypher
+DROP TEXT INDEX text_index_name;
```
## Compatibility
-Even though text search is an experimental feature, it supports most usage modalities
-that are available in Memgraph from version 3.5. Refer to the table below for an overview:
+Text search supports most usage modalities that are available in Memgraph. Refer to the table below for an overview:
| Feature | Support |
|-------------------------|---------------------------------------------------------|
| Multitenancy | ✅ Yes |
| Durability | ✅ Yes |
-| Replication | ✅ Yes (from version 3.5) |
+| Replication | ✅ Yes |
| Concurrent transactions | ⚠️ Yes, but search results may vary within transactions |
| Storage modes | ❌ No (doesn't work in IN_MEMORY_ANALYTICAL) |
\ No newline at end of file
diff --git a/pages/querying/vector-search.mdx b/pages/querying/vector-search.mdx
index 50ce52e36..1e5ab1c62 100644
--- a/pages/querying/vector-search.mdx
+++ b/pages/querying/vector-search.mdx
@@ -160,6 +160,16 @@ for the metric is `l2sq` (squared Euclidean distance).
| `sorensen` | Sørensen-Dice coefficient |
| `jaccard` | Jaccard index |
+### Cosine similarity
+
+You can calculate cosine similarity directly in queries using the `vector_search.cosine_similarity()` function. This is useful when you need to compute similarity between vectors without creating a vector index.
+
+{ Usage:
}
+
+```cypher
+RETURN vector_search.cosine_similarity([1.0, 2.0], [1.0, 3.0]) AS similarity;
+```
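+
+For example, a sketch that ranks nodes by similarity to a query vector (the
+`:Document` label, `embedding` property, and `$query_vector` parameter are
+illustrative assumptions):
+
+```cypher
+MATCH (d:Document)
+RETURN d, vector_search.cosine_similarity(d.embedding, $query_vector) AS similarity
+ORDER BY similarity DESC
+LIMIT 5;
+```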
+
### Scalar type
Properties are stored as 64-bit values in the property store. However, for efficiency, vector elements in the vector index are stored using 32-bit values by default.
diff --git a/pages/release-notes.mdx b/pages/release-notes.mdx
index 7dcf04021..ad4f6dd5c 100644
--- a/pages/release-notes.mdx
+++ b/pages/release-notes.mdx
@@ -42,6 +42,163 @@ troubleshoot in production.
## 🚀 Latest release
+### Memgraph v3.6.0 - October 8th, 2025
+
+{⚠️ Breaking changes
}
+
+- Removed the `mg_import_csv` executable from Memgraph; `LOAD CSV` should be used
+ instead. [#3216](https://github.com/memgraph/memgraph/pull/3216)
+- Changed the behaviour of Memgraph when users set the
+ `--storage-wal-enabled=false`, `--storage-snapshot-interval` and
+  `--storage-snapshot-interval-sec` flags. Memgraph now won't start under
+  replication/HA if `--storage-wal-enabled=false`. Single-instance Memgraph
+  deployments keep the same behaviour as before.
+ [#3259](https://github.com/memgraph/memgraph/pull/3259)
+- Changed how MEMGRAPH_USER and MEMGRAPH_PASSWORD environment variables behave
+  under coordinators. Memgraph now ignores those two environment variables, so
+  any workaround that relied on them to authenticate over bolt+routing will
+  stop working.
+ [#3278](https://github.com/memgraph/memgraph/pull/3278)
+- Removed `text-search` from the list of experimental features. If you used
+  text search, make sure to remove the `--experimental-enabled=text-search`
+  flag, otherwise Memgraph won't be able to start.
+ [#3301](https://github.com/memgraph/memgraph/pull/3301)
+- The `SHOW INDEX INFO` output has changed. Previously, all text indexes returned
+ the hardcoded string "text" as their type. Now, the output reflects the
+ actual indexed entity more precisely: for node indexes `label_text`, for edge
+ indexes `edge-type_text`.
+ [#3178](https://github.com/memgraph/memgraph/pull/3178)
+
+{✨ New features
}
+
+- Support for RPC versioning has been added to enable ISSU (In-Service Software
+ Upgrades). The user can now expect no-downtime upgrades when following a
+ specific procedure. Before, when exchanging messages, the server could not
+ read an RPC request if it was not of the latest version. Additionally,
+ support has been added to update the protocol in future releases.
+ [#3172](https://github.com/memgraph/memgraph/pull/3172)
+- Allow update and removal of nested properties inside a node or relationship.
+ Users can now perform fine-grained updates inside the map properties of nodes
+ and relationships. [#3052](https://github.com/memgraph/memgraph/pull/3052)
+- A new `cosine_similarity` function was added to the `vector_search` module.
+ This allows users to calculate cosine similarity directly in queries, e.g.,
+  `RETURN vector_search.cosine_similarity([1.0, 2.0], [1.0, 3.0]);`.
+ [#3240](https://github.com/memgraph/memgraph/pull/3240)
+- Added `DROP ALL INDEXES` and `DROP ALL CONSTRAINTS` clauses that allow you to
+ delete all indices or all constraints in your database with a single command.
+ [#3247](https://github.com/memgraph/memgraph/pull/3247)
+- Extended the client-side bolt+routing protocol. It will now work normally
+ with multi-tenancy in HA mode. Specifying env variables MEMGRAPH_USER and
+ MEMGRAPH_PASSWORD on coordinators will no longer create a user to
+ authenticate. [#3278](https://github.com/memgraph/memgraph/pull/3278)
+- A new coordinator setting `max_failover_replica_lag` has been added, which
+ allows users to control how many transactions they are willing to risk losing
+  upon a data instance failover. If a replica is behind the main by more than
+ `max_failover_replica_lag` transactions, the coordinator will forbid failover
+ to such an instance. [#3219](https://github.com/memgraph/memgraph/pull/3219)
+- Added `max_replica_read_lag` coordinator setting. This new setting prevents
+ routing read queries to replicas that are too far behind the current main
+ instance, ensuring data consistency for read operations.
+ [#3222](https://github.com/memgraph/memgraph/pull/3222)
+- Allow database/tenant renaming. Users can now rename databases, instead of
+ having to create a new one, recover from an old snapshot and then drop the
+  old database. This simplifies the admin workflow.
+ [#3161](https://github.com/memgraph/memgraph/pull/3161)
+- Added `FORCE` specifier to `DROP DATABASE` query. Allows users to force a
+  drop even if the database is currently in use. The database will be hidden
+ immediately, but deletion is deferred until it is safe.
+ [#3033](https://github.com/memgraph/memgraph/pull/3033)
+- Updated system metrics with `WriteWriteConflicts` and `TransientErrors`
+  counters. [#3230](https://github.com/memgraph/memgraph/pull/3230)
+- Updated the list of metrics with the latency of the GC skiplist cleanup, as
+  well as the latency of the whole GC process.
+  [#3235](https://github.com/memgraph/memgraph/pull/3235)
+- Added support for text indexing on edges. Users can now create and query text
+ indexes on edges. The syntax for creating an edge-based text index is:
+ `CREATE TEXT EDGE INDEX edge_index_name ON :EDGE_TYPE(text_property);`.
+ [#3178](https://github.com/memgraph/memgraph/pull/3178)
+- Added fine-grained access control for `SHOW SCHEMA INFO;` query. Users are
+ now restricted from seeing the full graph schema if they don't have the
+ fine-grained privileges.
+ [#3304](https://github.com/memgraph/memgraph/pull/3304)
+
+{🛠️ Improvements
}
+
+- Added python3.12-venv inside Memgraph's build image. It makes it easier to
+ create environments when using Memgraph as part of the K8s cluster.
+ [#3206](https://github.com/memgraph/memgraph/pull/3206)
+- Durability files transferred between instances in an HA environment no
+  longer require a large memory allocation on the receiver's end. Before, the
+  whole file had to be read into memory to be used; now a 64 KiB buffer writes
+  the bytes received from the network to the file on disk. Users running at a
+  large scale shouldn't see any memory spikes on replicas. This should help
+  with Memgraph's stability when deployed in environments with tight memory
+  control, e.g., K8s.
+  [#3227](https://github.com/memgraph/memgraph/pull/3227)
+- Improved fine-grained access control error messages.
+ [#3287](https://github.com/memgraph/memgraph/pull/3287)
+- Enhanced routing table accuracy. Followers now forward `GetRoutingTableRpc`
+ requests to the cluster leader to guarantee the correctness of the current
+ routing table. This change improves routing accuracy at the expense of
+  additional network latency.
+  [#3222](https://github.com/memgraph/memgraph/pull/3222)
+- Users can expect a slightly faster demotion to replica on the old main when
+ the failover occurs. [#3128](https://github.com/memgraph/memgraph/pull/3182)
+- Text search now includes a relevance score in results, a limit parameter to
+ control result count, and support for aggregation queries on text edge
+ indexes. These enhancements let users rank, filter, and analyze search
+ results more effectively.
+ [#3264](https://github.com/memgraph/memgraph/pull/3264)
+- Added better log messages during user creation.
+ [#3303](https://github.com/memgraph/memgraph/pull/3303)
+- Updated error message about user not having the correct role-based privileges
+ with more information about what the database admin should do in that case.
+ [#3312](https://github.com/memgraph/memgraph/pull/3312)
+
+{🐞 Bug fixes
}
+
+- Fixed a bug where the current WAL file wasn't finalised on the replica when a
+ new epoch was received. Users can expect fewer snapshots and WAL files being
+ transferred in the case of many restarts in the cluster.
+ [#3258](https://github.com/memgraph/memgraph/pull/3258)
+- The HA cluster was never supposed to work with the flag
+ `--storage-wal-enabled=false`. We now explicitly forbid starting the data
+ instance when we detect such an incorrect state. The default value of
+ `--storage-snapshot-interval-sec` is now 300 in all packaging scenarios. If
+ both `--storage-snapshot-interval-sec` and `--storage-snapshot-interval` are
+ set, the `--storage-snapshot-interval` flag will be used.
+ [#3259](https://github.com/memgraph/memgraph/pull/3259)
+- Fixed a concurrency bug that would cause Memgraph to get stuck rarely during
+ shutdown. [#3285](https://github.com/memgraph/memgraph/pull/3285)
+- Users will now be able to authenticate using SSO and bolt+routing protocol.
+ Previously it was only working if directly connecting to data instances.
+ [#3307](https://github.com/memgraph/memgraph/pull/3307)
+- Corrupted snapshot files won't be deleted anymore. They still won't be used
+  for recovery, but we found it incorrect to delete durability files that may
+  contain valid data for a user and that are only temporarily unusable in the
+  current release. [#3300](https://github.com/memgraph/memgraph/pull/3300)
+- The replica will now delete the temporary file that is used to receive the
+  WAL file replicated from the main.
+ [#3330](https://github.com/memgraph/memgraph/pull/3300)
+
+### MAGE v3.6.0 - October 8th, 2025
+
+{✨ New features
}
+
+- Added support for migrating data from Neo4j under the migration module.
+ [#639](https://github.com/memgraph/mage/pull/639)
+- Added k-nearest neighbors (KNN) algorithm implementation.
+ [#676](https://github.com/memgraph/mage/pull/676)
+
+{🐞 Bug fixes
}
+
+- Fixed closing of database connections after migrations are completed.
+ [#665](https://github.com/memgraph/mage/pull/665)
+
+### Lab v3.6.0 - October 8th, 2025
+
+## Previous releases
+
### Memgraph v3.5.2 - September 24th, 2025
{✨ New features
}
@@ -71,8 +228,6 @@ troubleshoot in production.
- Fixed `map.merge` module to accept nullable arguments.
[#671](https://github.com/memgraph/mage/pull/671)
-## Previous releases
-
### Memgraph v3.5.1 - September 11th, 2025
{🐞 Bug fixes
}