diff --git a/build.sh b/build.sh deleted file mode 100644 index a69632987..000000000 --- a/build.sh +++ /dev/null @@ -1,2 +0,0 @@ -curl https://raw.githubusercontent.com/mongodb/docs-worker-pool/netlify-poc/scripts/build-site.sh -o build-site.sh -sh build-site.sh diff --git a/netlify.toml b/netlify.toml index d0c890406..1fca3d417 100644 --- a/netlify.toml +++ b/netlify.toml @@ -1,6 +1,2 @@ -[[integrations]] -name = "snooty-cache-plugin" - [build] publish = "snooty/public" -command = ". ./build.sh" diff --git a/source/activity.txt b/source/activity.txt index b6f8affc2..c71fd6ebb 100644 --- a/source/activity.txt +++ b/source/activity.txt @@ -71,8 +71,8 @@ Apps log the following event types: - :ref:`Schema `, including any events related to changes to an application's schema. -- :ref:`Trigger `, including Database Triggers, - Authentication Triggers, and Scheduled Triggers. +- :ref:`Trigger `, including Database Triggers and + Scheduled Triggers. Error Logs ~~~~~~~~~~ diff --git a/source/deprecation.txt b/source/deprecation.txt index 3abc2e5e8..4b4bfce2c 100644 --- a/source/deprecation.txt +++ b/source/deprecation.txt @@ -33,8 +33,8 @@ Hosting and GraphQL From App Services page `. Triggers Are Not Deprecated ---------------------------- -Triggers are not deprecated. This service will continue to be available. App -Services Functions will also continue to be available to use with Triggers. +Triggers are not deprecated. This service continues to be available in the +Atlas UI. Functions also continue to be available to use with Triggers. Atlas Data API and HTTPS Endpoints Are Deprecated ------------------------------------------------- @@ -85,7 +85,7 @@ the chosen alternative solution. Functions ~~~~~~~~~ -Functions will continue to be available within the context of Triggers. Use +Functions continue to be available within the context of Triggers. Use cases where a function was being directly accessed through a Device SDK are impacted and must migrate to a different solution. diff --git a/source/functions/handle-errors.txt b/source/functions/handle-errors.txt index 9a3a076a1..090378004 100644 --- a/source/functions/handle-errors.txt +++ b/source/functions/handle-errors.txt @@ -118,7 +118,7 @@ by using recursion in error-handling blocks. Use Database Triggers to Retry ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -You can also retry Functions by using a :ref:`Database Trigger ` to execute retries and a MongoDB collection to track previously-failed executions. +You can also retry Functions by using a `Database trigger `__ to execute retries and a MongoDB collection to track previously-failed executions. On a high-level, this process includes the following components: diff --git a/source/introduction.txt b/source/introduction.txt index 15d50c5d6..1af5f44fe 100644 --- a/source/introduction.txt +++ b/source/introduction.txt @@ -63,8 +63,7 @@ Serverless: Dynamic and responsive: - React to data changes in MongoDB Atlas, process data from HTTPS endpoints, - or run Atlas Functions on a schedule with - Atlas Triggers. + or run Atlas Functions on a schedule with Atlas Triggers. - Get up and running quickly for free, then scale according to the demands of your application. - Pay for and receive only the exact amount of compute you need at any given time with usage-based pricing. Usage under a certain amount per day is always free. @@ -214,9 +213,9 @@ services to do things like: - Use Sync to synchronize data between mobile clients and the linked MongoDB Atlas collection. 
-- Host a Todo web app using :ref:`Atlas Device SDK for Web ` -- Manage event-driven :ref:`Database Triggers ` - to update views in a separate collection +- Host a Todo web app using :ref:`Atlas Device SDK for Web `. +- Manage event-driven `Database triggers `__ to update + views in a separate collection. Template apps are working apps you can run and change to experiment with App Services. These apps are a good choice for developers who prefer to learn by diff --git a/source/mongodb.txt b/source/mongodb.txt index ad18623a3..3ec30845b 100644 --- a/source/mongodb.txt +++ b/source/mongodb.txt @@ -132,7 +132,7 @@ code and can access a change event for detailed information about the change that caused it to run. To learn more about how triggers work and how to define your own, see -:ref:`Database Triggers `. +`Database triggers `__. .. important:: diff --git a/source/mongodb/preimages.txt b/source/mongodb/preimages.txt index d41707101..24178ae86 100644 --- a/source/mongodb/preimages.txt +++ b/source/mongodb/preimages.txt @@ -18,7 +18,7 @@ Document Preimages Overview -------- -Every :ref:`database trigger ` execution has a related +Every `database trigger `__ execution has a related change event. You can configure these change events to include **document preimages**. A preimage is a snapshot of a document *before* a change. diff --git a/source/reference/service-limitations.txt b/source/reference/service-limitations.txt index 77acd373c..a1d329b53 100644 --- a/source/reference/service-limitations.txt +++ b/source/reference/service-limitations.txt @@ -139,7 +139,7 @@ the limitations for each cluster size: .. note:: App Services opens a single change stream on each collection that is - associated with a :ref:`Database Trigger ` or + associated with a `Database trigger `__ or :ref:`Device Sync ` operation. .. important:: Usage Recommendation diff --git a/source/reference/template-apps.txt b/source/reference/template-apps.txt index 540da6892..a54ad19de 100644 --- a/source/reference/template-apps.txt +++ b/source/reference/template-apps.txt @@ -298,7 +298,7 @@ to the ``--template`` flag of the :ref:`appservices-apps-create` and * - ``triggers`` - Manage Database Views - - Event-driven :ref:`Database Trigger ` template to update a view in a separate collection. + - Event-driven `Database trigger `__ template to update a view in a separate collection. - None * - ``web.mql.todo`` diff --git a/source/triggers.txt b/source/triggers.txt index 10310c2f4..8fcc59715 100644 --- a/source/triggers.txt +++ b/source/triggers.txt @@ -8,145 +8,15 @@ Atlas Triggers .. meta:: :description: Use Atlas Triggers to execute application and database logic in response to events or schedules. -.. toctree:: - :titlesonly: - :caption: Triggers - :hidden: - Database Triggers - Authentication Triggers - Scheduled Triggers - Disable a Trigger - Send Trigger Events to AWS EventBridge - Triggers Code Examples +Atlas Triggers are part of the Atlas database tools. There are 2 trigger types: -.. contents:: On this page - :local: - :backlinks: none - :depth: 2 - :class: singlecol +- `Database triggers `__, + which respond to document-level, collection-level, or database-level changes. -Atlas Triggers execute application and database logic. Triggers -can respond to events or use pre-defined schedules. +- `Scheduled triggers `__, + which execute functions according to a pre-defined schedule. -Triggers listen for events of a configured type. Each Trigger links to a -specific :doc:`Atlas Function `. 
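Both trigger types invoke an Atlas Function when they fire. As a point of reference, the following is a minimal sketch of the kind of Function a database trigger could link to; the function body and the event fields it reads are illustrative only, not a prescribed implementation.

.. code-block:: javascript

   // Minimal sketch of a Function that a database trigger could run.
   // Illustrative only: a database change event includes fields such as
   // operationType and ns (the namespace of the changed collection).
   exports = function (changeEvent) {
     const { operationType, ns } = changeEvent;
     console.log(`${operationType} on ${ns.db}.${ns.coll}`);
   };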
-When a Trigger observes an event that matches your -configuration, it *"fires"*. The Trigger passes this event object as the -argument to its linked Function. +For more information on creating and managing Triggers, see +`Triggers `__. -A Trigger might fire on: - -- A specific *operation type* in a given Collection. -- An authentication event, such as user creation or deletion. -- A scheduled time. - -App Services keeps track of the latest execution time for each -Trigger and guarantees that each event is processed at least once. - -.. _trigger-types: - -Trigger Types -------------- - -App Services supports three types of triggers: - -- :doc:`Database triggers ` - respond to document insert, changes, or deletion. You can configure - Database Triggers for each linked MongoDB collection. - -- :doc:`Authentication triggers ` - respond to user creation, login, or deletion. - -- :doc:`Scheduled triggers ` - execute functions according to a pre-defined schedule. - -.. _trigger-limitations: - -Limitations ------------ - -Atlas Function Constraints Apply -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Triggers invoke Atlas Functions. This means they have the same -constraints as all Atlas Functions. - -:ref:`Learn more about Atlas Function constraints.` - -.. _event_processing_throughput: - -Event Processing Throughput -~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Triggers process events when capacity becomes available. A Trigger's -capacity is determined by its event ordering configuration: - -- Ordered triggers process events from the change stream one at a time - in sequence. The next event begins processing only after the previous - event finishes processing. - -- Unordered triggers can process multiple events concurrently, up to - 10,000 at once by default. If your Trigger data source is an M10+ - Atlas cluster, you can configure individual unordered triggers to - exceed the 10,000 concurrent event threshold. To learn more, see - :ref:`Maximum Throughput Triggers `. - -Trigger capacity is not a direct measure of throughput or a guaranteed -execution rate. Instead, it is a threshold for the maximum number of -events that a Trigger can process at one time. In practice, the rate at -which a Trigger can process events depends on the Trigger function's run -time logic and the number of events that it receives in a given -timeframe. - -To increase the throughput of a Trigger, you can try to: - -- Optimize the Trigger function's run time behavior. For example, you - might reduce the number of network calls that you make. - -- Reduce the size of each event object with the Trigger's - :ref:`projection filter `. For the best - performance, limit the size of each change event to 2KB or less. - -- Use a match filter to reduce the number of events that the Trigger - processes. For example, you might want to do something only if a - specific field changed. Instead of matching every update event and - checking if the field changed in your Function code, you can use the - Trigger's match filter to fire only if the field is included in the - event's ``updateDescription.updatedFields`` object. - -Number of Triggers Cannot Exceed Available Change Streams -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -App Services limits the total number of Database Triggers. The size of your -Atlas cluster drives this limit. - -Each Atlas cluster tier has a maximum number of supported change -streams. A Database Trigger requires its own change stream. Other App Services -also use change streams, such as Atlas Device Sync. 
Database Triggers -may not exceed the number of available change streams. - -:ref:`Learn more about the number of supported change streams for Atlas tiers. -` - -.. _trigger-diagnose-duplicate-events: - -Diagnose Duplicate Events -------------------------- - -During normal Trigger operation, Triggers do not send duplicate events. -However, when some failure or error conditions occur, Triggers may deliver -duplicate events. You may see a duplicate Trigger event when: - -- A server responsible for processing and tracking events experiences a - failure. This failure prevents the server from recording its progress in a - durable or long-term storage system, making it "forget" it has processed - some of the latest events. -- Using unordered processing where events 1 through 10 are sent simultaneously. - If event 9 fails and leads to Trigger suspension, events like event 10 might - get processed again when the system resumes from event 9. This can lead to - duplicates, as the system doesn't strictly follow the sequence of events and - may reprocess already-handled events. - -If you notice duplicate Trigger events, check the :ref:`logs` for suspended -Triggers or server failures. diff --git a/source/triggers/authentication-triggers.txt b/source/triggers/authentication-triggers.txt deleted file mode 100644 index 81ae8500a..000000000 --- a/source/triggers/authentication-triggers.txt +++ /dev/null @@ -1,250 +0,0 @@ -.. _authentication-triggers: - -======================= -Authentication Triggers -======================= - -.. contents:: On this page - :local: - :backlinks: none - :depth: 2 - :class: singlecol - -An authentication trigger fires when a user interacts with an -:ref:`authentication provider `. You can -use authentication triggers to implement advanced user management. Some uses include: - -- Storing new user data in your linked cluster -- Maintaining data integrity upon user deletion -- Calling a service with a user's information when they log in. - -.. _create-an-authentication-trigger: - -Create an Authentication Trigger --------------------------------- - -.. tabs-realm-admin-interfaces:: - - .. tab:: - :tabid: ui - - To open the authentication trigger configuration screen in the Atlas App Services UI, - click :guilabel:`Triggers` in the left navigation menu, click - :guilabel:`Create a Trigger`, and then select the :guilabel:`Authentication` tab next - to :guilabel:`Trigger Type`. - - Configure the trigger and then click :guilabel:`Save` at the bottom of the - page to add it to your current deployment draft. - - .. figure:: /images/auth-trigger-example-config.png - :alt: An example of a configured authentication trigger in the UI - :width: 750px - :lightbox: - - .. tab:: - :tabid: cli - - To create an authentication trigger with :doc:`{+cli+} - `: - - 1. Add an authentication trigger :ref:`configuration file - ` to the ``triggers`` subdirectory of a - local application directory. - - .. note:: - - App Services does not enforce specific filenames for Atlas Trigger - configuration files. However, once imported, App Services will - rename each configuration file to match the name of the - trigger it defines, e.g. ``mytrigger.json``. - - 2. :ref:`Deploy ` the trigger: - - .. code-block:: shell - - {+cli-bin+} push - -.. _authentication-triggers-configuration: - -Configuration -------------- - -Authentication Triggers have the following configuration options: - -.. 
list-table:: - :header-rows: 1 - :widths: 15 30 - - * - Field - - Description - - * - :guilabel:`Trigger Type` - - - The type of the trigger. For authentication triggers, - set this value to ``AUTHENTICATION``. - - * - :guilabel:`Action Type` - - - The :ref:`authentication operation - type ` that causes the trigger to - fire. - - * - :guilabel:`Providers` - - - A list of one or more :ref:`authentication provider - ` types. The trigger only listens for - :ref:`authentication events ` produced by these - providers. - - * - :guilabel:`Event Type` - - - Choose what action is taken when the trigger fires. You can choose to - run a :ref:`function ` or use :ref:`AWS EventBridge `. - - * - :guilabel:`Function` - - - The name of the function that the trigger - executes when it fires. An :ref:`authentication - event object ` causes the trigger to fire. - This object is the only argument the trigger passes to the function. - - * - :guilabel:`Trigger Name` - - - The name of the trigger. - -.. _authentication-events: - -Authentication Events ---------------------- - -.. _authentication-event-operation-types: - -Authentication events represent user interactions with an authentication -provider. Each event corresponds to a single user action with one of the -following operation types: - -.. list-table:: - :header-rows: 1 - :widths: 10 30 - - * - Operation Type - - Description - - * - ``LOGIN`` - - Represents a single instance of a user logging in. - - * - ``CREATE`` - - Represents the creation of a new user. - - * - ``DELETE`` - - Represents the deletion of a user. - -Authentication event objects have the following form: - -.. code-block:: json - :copyable: false - - { - "operationType": , - "providers": , - "user": , - "time": - } - -.. list-table:: - :header-rows: 1 - :widths: 10 30 - - * - Field - - Description - * - ``operationType`` - - The :ref:`operation type ` - of the authentication event. - * - ``providers`` - - The :ref:`authentication providers ` - that emitted the event. - - One of the following names represents each authentication provider: - - .. include:: /includes/auth-provider-internal-names.rst - - .. note:: - - Generally, only one authentication provider emits each event. - However, you may need to delete a user linked to multiple providers. - In this case, the ``DELETE`` event for that user includes all linked providers. - * - ``user`` - - The :doc:`user object ` of the user that interacted with - the authentication provider. - * - ``time`` - - The time at which the event occurred. - -.. _authentication-triggers-example: - -Example -------- - -An online store wants to store custom metadata for each of its customers -in `Atlas `_. -Each customer needs a document in the ``store.customers`` collection. -Then, the store can record and query metadata in the customer's document. - -The collection must represent each customer. To guarantee this, the store -creates an Authentication Trigger. This Trigger listens for newly created users -in the :doc:`email/password ` authentication -provider. Then, it passes the -:ref:`authentication event object ` to its linked -function, ``createNewUserDocument``. The function creates a new document -which describes the user and their activity. The function then inserts the document -into the ``store.customers`` collection. - -.. tabs-realm-admin-interfaces:: - - .. tab:: - :tabid: ui - - .. figure:: /images/auth-trigger-example-config-em-pass.png - :alt: Example UI that configures the trigger - - .. tab:: - :tabid: cli - - .. 
code-block:: json - :caption: Trigger Configuration - - { - "type": "AUTHENTICATION", - "name": "newUserHandler", - "function_name": "createNewUserDocument", - "config": { - "providers": ["local-userpass"], - "operation_type": "CREATE" - }, - "disabled": false - } - -.. code-block:: javascript - :caption: createNewUserDocument - - exports = async function(authEvent) { - const mongodb = context.services.get("mongodb-atlas"); - const customers = mongodb.db("store").collection("customers"); - - const { user, time } = authEvent; - const isLinkedUser = user.identities.length > 1; - - if(isLinkedUser) { - const { identities } = user; - return users.updateOne( - { id: user.id }, - { $set: { identities } } - ) - - } else { - return users.insertOne({ _id: user.id, ...user }) - .catch(console.error) - } - await customers.insertOne(newUser); - } - -.. include:: /includes/triggers-examples-github.rst diff --git a/source/triggers/aws-eventbridge.txt b/source/triggers/aws-eventbridge.txt deleted file mode 100644 index 8ef123e95..000000000 --- a/source/triggers/aws-eventbridge.txt +++ /dev/null @@ -1,548 +0,0 @@ -.. _aws-eventbridge: - -====================================== -Send Trigger Events to AWS EventBridge -====================================== - -.. facet:: - :name: genre - :values: tutorial - -.. meta:: - :description: Learn how to set up AWS EventBridge to handle Atlas Trigger events. - -.. default-domain:: mongodb - -.. contents:: On this page - :local: - :backlinks: none - :depth: 2 - :class: singlecol - -Overview --------- - -MongoDB offers an `AWS Eventbridge -`_ partner event source that lets -you send Atlas Trigger events to an event bus instead of -calling an Atlas Function. You can configure any Trigger type to send events to -EventBridge. Database Triggers also support custom error handling, -to reduce trigger suspensions due to non-critical errors. - -All you need to send Trigger events to EventBridge is an AWS account ID. -This guide walks through finding your account ID, configuring the -Trigger, associating the Trigger event source with an event bus, and setting -up custom error handling. - -.. note:: Official AWS Partner Event Source Guide - - This guide is based on Amazon's :aws-docs:`Receiving Events from a - SaaS Partner - ` - documentation. - -Procedure ---------- - -.. note:: - - The AWS put entry for an EventBridge trigger event must be smaller than 256 KB. - - Learn how to reduce the size of your PutEvents entry in the :ref:`Performance Optimization - ` section. - -.. procedure:: - - .. _setup-eventbridge: - - .. step:: Set Up the MongoDB Partner Event Source - - To send trigger events to AWS EventBridge, you need the :guilabel:`AWS - account ID` of the account that should receive the events. - Open the `Amazon EventBridge console - `_ and click - :guilabel:`Partner event sources` in the navigation menu. Search for - the :guilabel:`MongoDB` partner event source and then click - :guilabel:`Set up`. - - On the :guilabel:`MongoDB` partner event source page, click - :guilabel:`Copy` to copy your AWS account ID to the clipboard. - - .. step:: Configure the Trigger - - Once you have the :guilabel:`AWS account ID`, you can configure a - trigger to send events to EventBridge. - - .. tabs-realm-admin-interfaces:: - - .. tab:: - :tabid: ui - - In the App Services UI, create and configure a new :doc:`database - trigger `, :doc:`authentication - trigger `, or :doc:`scheduled - trigger ` and select the - :guilabel:`EventBridge` event type. 
- - Paste in the :guilabel:`AWS Account ID` that you copied from - EventBridge and select an :guilabel:`AWS Region` to send the trigger events - to. - - Optionally, you can configure a function for handling trigger errors. - Custom error handling is only valid for database triggers. - For more details, refer to the :ref:`Custom Error Handling ` - section on this page. - - .. figure:: /images/eventbridge-trigger-config.png - :alt: The EventBridge input boxes in the trigger configuration. - - By default, triggers convert the BSON types in event objects into - standard JSON types. To preserve BSON type information, you can - serialize event objects into :manual:`Extended JSON format - ` instead. Extended JSON preserves type - information at the expense of readability and interoperability. - - To enable Extended JSON, - click the :guilabel:`Enable Extended JSON` toggle in the - :guilabel:`Advanced (Optional)` section. - - .. tab:: - :tabid: cli - - Create a :ref:`trigger configuration file ` - in the ``/triggers`` directory. Omit the ``function_name`` field - and define an ``AWS_EVENTBRIDGE`` event processor. - - Set the ``account_id`` field to the :guilabel:`AWS Account ID` - that you copied from EventBridge and set the ``region`` field to - an AWS Region. - - By default, triggers convert the BSON types in event objects into - standard JSON types. To preserve BSON type information, you can - serialize event objects into :manual:`Extended JSON format - ` instead. Extended JSON preserves type - information at the expense of readability and interoperability. - - To enable Extended JSON, set the ``extended_json_enabled`` field to ``true``. - - Optionally, you can configure a function for handling trigger errors. - Custom error handling is only valid for database triggers. - For more details, refer to the :ref:`Custom Error Handling ` - section on this page. - - The trigger configuration file should resemble the following: - - .. code-block:: json - - { - "name": "...", - "type": "...", - "event_processors": { - "AWS_EVENTBRIDGE": { - "config": { - "account_id": "", - "region": "", - "extended_json_enabled": - } - } - } - } - - .. note:: Supported AWS Regions - - For a full list of supported AWS regions, refer to Amazon's - :aws-docs:`Receiving Events from a SaaS Partner - ` - guide. - - - .. step:: Associate the Trigger Event Source with an Event Bus - - Go back to the EventBridge console and choose Partner event sources in - the navigation pane. In the :guilabel:`Partner event sources` table, - find and select the :guilabel:`Pending` trigger source and then click - :guilabel:`Associate with event bus`. - - On the :guilabel:`Associate with event bus` screen, define any - required access permissions for other accounts and organizations and - then click :guilabel:`Associate`. - - Once confirmed, the status of the trigger event source changes from - :guilabel:`Pending` to :guilabel:`Active`, and the name of the event - bus updates to match the event source name. You can now start creating - rules that trigger on events from that partner event source. For more - information, see :aws-docs:`Creating a Rule That Triggers on a SaaS Partner Event `. - -.. _eventbridge-error-handling: - -Custom Error Handling ---------------------- - -.. note:: Only Database Triggers Support Custom Error Handlers - - Currently, only database triggers support custom error handling. - Authentication triggers and scheduled triggers do not support - custom error handling at this time. 
- -You can create an error handler to be executed on a trigger failure, -when retry does not succeed. Custom error handling allows you to determine -whether an error from AWS EventBridge is critical enough to suspend the Trigger, -or if it is acceptable to ignore the error and continue processing other events. -For more information on suspended database triggers, refer to -:ref:`Suspended Triggers `. - -Create a New Custom Error Handler -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -.. tabs-realm-admin-interfaces:: - - .. tab:: - :tabid: ui - - You can create the new function directly in the Create a Trigger page, as below, - or from the Functions tab. For more information on how to define functions in - App Services, refer to :ref:`Define a Function `. - - .. figure:: /images/eventbridge-custom-function.png - :alt: The EventBridge custom error handling configuration in the UI. - - .. procedure:: - - .. step:: Create a New Error Handler - - In the :guilabel:`Configure Error Function` section, select - :guilabel:`+ New Function`. - - You can also select an existing Function, if one is already defined, - from the dropdown. - - .. step:: Name the New Function - - Enter a unique, identifying name for the function in the :guilabel:`Name` field. - This name must be distinct from all other functions in the application. - - .. step:: Write the Function Code - - In the :guilabel:`Function` section, write the JavaScript code directly in - the function editor. The function editor contains a default function that - you can edit as needed. For more information on creating functions, refer - to the :ref:`Functions ` documentation. - - .. step:: Test the Function - - In the :guilabel:`Testing Console` tab beneath the function editor, you can - test the function by passing in example values to the ``error`` and - ``changeEvent`` parameters, as shown in the comments of the testing console. - - For more information on these paramaters, refer to the - :ref:`Error Handler Parameters ` - section on this page. - - Click :guilabel:`Run` to run the test. - - .. step:: Save the Function - - Once you are satisfied with the custom error handler, click - :guilabel:`Save`. - - .. tab:: - :tabid: cli - - In order to update your trigger's configuration with an error handler, - follow these steps to :ref:`Update an App `. When you - update your configuration files in Step 3, do the following: - - .. procedure:: - - .. step:: Write the Error Handler - - Follow the steps in :ref:`Define a Function ` - to write your error handler source code and configuration file. - - For the error handler source code, see the following template error handler: - - .. code-block:: js - :caption: .js - - exports = async function(error, changeEvent) { - // This sample function will log additional details if the error is not - // a DOCUMENT_TOO_LARGE error - if (error.code === 'DOCUMENT_TOO_LARGE') { - console.log('Document too large error'); - - // Comment out the line below in order to skip this event and not suspend the Trigger - throw new Error(`Encountered error: ${error.code}`); - } - - console.log('Error sending event to EventBridge'); - console.log(`DB: ${changeEvent.ns.db}`); - console.log(`Collection: ${changeEvent.ns.coll}`); - console.log(`Operation type: ${changeEvent.operationType}`); - - // Throw an error in your function to suspend the trigger and stop processing additional events - throw new Error(`Encountered error: ${error.message}`); - }; - - .. 
step:: Add an Error Handler to Your Trigger Configuration - - Add an ``error_handler`` attribute to your trigger configuration file - in the ``Triggers`` folder. The trigger configuration file should - resemble the following: - - .. code-block:: json - :emphasize-lines: 13-18 - :caption: .json - - { - "name": "...", - "type": "DATABASE", - "event_processors": { - "AWS_EVENTBRIDGE": { - "config": { - "account_id": "", - "region": "", - "extended_json_enabled": - } - } - }, - "error_handler": { - "config": { - "enabled": , - "function_name": "" - } - } - } - - For more information on trigger configuration files, see - :ref:`Trigger Configuration Files `. - - .. tab:: - :tabid: api - - .. procedure:: - - .. step:: Authenticate a MongoDB Atlas User - - .. include:: /includes/api-authenticate-instructions.rst - - .. step:: Create a Deployment Draft (Optional) - - A draft represents a group of application changes that you - can deploy or discard as a single unit. If you don't create - a draft, updates automatically deploy individually. - - To create a draft, send a ``POST`` request with no body to - the :admin-api-endpoint:`Create a Deployment Draft - ` endpoint: - - .. code-block:: bash - - curl -X POST 'https://services.cloud.mongodb.com/api/admin/v3.0/groups/{groupId}/apps/{appId}/drafts' \ - -H 'Content-Type: application/json' \ - -H 'Authorization: Bearer ' - - .. step:: Create the Error Handler Function - - Create the function to handle errors for a failed AWS - EventBridge trigger via a ``POST`` request to the - :admin-api-endpoint:`Create a new - Function ` endpoint. - - .. code-block:: bash - - curl -X POST \ - https://services.cloud.mongodb.com/api/admin/v3.0/groups/{groupId}/apps/{appId}/functions \ - -H 'Authorization: Bearer ' \ - -d '{ - "name": "string", - "private": true, - "source": "string", - "run_as_system": true - }' - - .. step:: Create the AWS EventBridge Trigger - - Create the AWS EventBridge Trigger with error handling - enabled via a ``POST`` request to the - :admin-api-endpoint:`Create a Trigger ` endpoint. - - .. code-block:: bash - - curl -X POST \ - https://services.cloud.mongodb.com/api/admin/v3.0/groups/{groupId}/apps/{appId}/triggers \ - -H 'Authorization: Bearer ' \ - -d '{ - "name": "string", - "type": "DATABASE", - "config": { - "service_id": "string", - "database": "string", - "collection": "string", - "operation_types": { - "string" - }, - "match": , - "full_document": false, - "full_document_before_change": false, - "unordered": true - }, - "event_processors": { - "AWS_EVENTBRIDGE": { - "account_id": "string", - "region": "string", - "extended_json_enabled": false - }, - }, - "error_handler": { - "enabled": true, - "function_id": "string" - } - }' - - .. step:: Deploy the Draft - - If you created a draft, you can deploy all changes in - the draft by sending a ``POST`` request with no body to the - :admin-api-endpoint:`Deploy a deployment draft - ` endpoint. - If you did not create a draft as a first step, the - individual function and trigger requests deployed automatically. - - .. code-block:: shell - - curl -X POST \ - 'https://services.cloud.mongodb.com/api/admin/v3.0/groups/{groupId}/apps/{appId}/drafts/{draftId}/deployment' \ - --header 'Content-Type: application/json' \ - --header 'Authorization: Bearer ' \ - -.. _eventbridge-error-handler-parameters: - -Error Handler Parameters -~~~~~~~~~~~~~~~~~~~~~~~~ - -The default error handler has two parameters: ``error`` and ``changeEvent``. 
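Before looking at each parameter in detail, the following compact sketch shows how a handler might use both. It is illustrative only; a fuller template appears in the CLI steps above.

.. code-block:: javascript

   // Illustrative sketch of an error handler that inspects both parameters.
   exports = async function (error, changeEvent) {
     // error.code and error.message describe the failed EventBridge put request.
     console.log(`EventBridge error ${error.code}: ${error.message}`);
     // changeEvent describes the database change that produced the event.
     console.log(`Operation: ${changeEvent.operationType} on ${changeEvent.ns.db}.${changeEvent.ns.coll}`);
     // Throwing suspends the Trigger; returning without throwing skips this event.
     throw new Error(`Encountered error: ${error.message}`);
   };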
- -``error`` -````````` -Has the following two attributes: - -- ``code``: The code for the errored EventBridge put request. For a list of - error codes used by the error handler, see the below section. - -- ``message``: The unfiltered error message from an errored EventBridge - put request. - -``changeEvent`` -``````````````` - -The requested change to your data made by EventBridge. For more information -on types of change events and their configurations, see -:ref:`Change Event Types `. - -.. _eventbridge-error-codes: - -Error Codes -~~~~~~~~~~~ - -If an error was recevied from EventBridge, the event processor will parse the -error as either ``DOCUMENT_TOO_LARGE`` or ``OTHER``. This parsed error is passed -to the error handler function through the ``error`` parameter. - -``DOCUMENT_TOO_LARGE`` -`````````````````````` -If the put entry for an EventBridge trigger event is larger -than 256 KB, EventBridge will throw an error. The error will contain either: - -- `status code: 400 `_ and - ``total size of the entries in the request is over the limit``. - -- `status code: 413 `_, - which indicates a too large payload. - -For more information on reducing put entry size, see the below :ref:`Performance -Optimization ` section. - -``OTHER`` -````````` -The default bucket for all other errors. - -.. tip:: Optimize Error Handling for Errors with ``OTHER`` Code - - You can make special error handling cases for - your most common error messages to optimize your error handling for - errors with an ``OTHER`` code. To determine which errors need - special cases, we recommended keeping track of - the most common error messages you receive in ``error.message``. - -Error Handler Logs -~~~~~~~~~~~~~~~~~~ - -You can view :ref:`Trigger Error Handler logs ` for -your EventBridge Trigger error handler in the application logs. - -.. tabs-realm-admin-interfaces:: - - .. tab:: - :tabid: ui - - 1. Click ``Logs`` in the left navigation of the App Services UI. - - 2. Click the :guilabel:`Filter by Type` dropdown and select - :guilabel:`Triggers Error Handlers` to view all error handler - logs for the App. - - .. tab:: - :tabid: cli - - Pass the ``trigger_error_handler`` value to the ``--type`` flag to - view all error handler logs for the App. - - .. code-block:: shell - - {+cli-bin+} logs list --type=trigger_error_handler - - .. tab:: - :tabid: api - - Retrieve ``TRIGGER_ERROR_HANDLER`` type logs via a ``GET`` request to - the :admin-api-endpoint:`Retreive App Services Logs - ` endpoint: - - .. code-block:: shell - - curl -X GET 'https://services.cloud.mongodb.com/api/admin/v3.0/groups/{groupId}/apps/{appId}/logs' \ - -H 'Content-Type: application/json' \ - -H 'Authorization: Bearer ' - -d '{ - "type": "TRIGGER_ERROR_HANDLER" - }' - -To learn more about viewing application logs, see :ref:`View Application Logs `. - -.. _event_processor_example: - -Example Event -------------- - -The following object configures a trigger to send events to AWS -Eventbridge and handle errors: - -.. include:: /includes/event-processor-example.rst - -.. _send-aws-eventbridge-performance-optimization: - -Performance Optimization ------------------------- - -The AWS put entry for an EventBridge trigger event must be smaller than 256 KB. - -For more information, see the :aws:`AWS Documentation to calculate Amazon -PutEvents event entry size `. - -When using Database Triggers, the Project Expression can be useful reduce the document size -before sending messages to EventBridge. 
-This expression lets you include only specified fields, reducing document size. - -:ref:`Learn more in the Database Trigger Project Expression documentation. -` diff --git a/source/triggers/database-triggers.txt b/source/triggers/database-triggers.txt deleted file mode 100644 index 47b99fc0e..000000000 --- a/source/triggers/database-triggers.txt +++ /dev/null @@ -1,938 +0,0 @@ -.. _database-trigger: - -================= -Database Triggers -================= - -.. meta:: - :description: Use Database Triggers to execute server-side logic when database changes occur - -.. facet:: - :name: genre - :values: reference - -.. contents:: On this page - :local: - :backlinks: none - :depth: 2 - :class: singlecol - -Database Triggers allow you to execute server-side logic whenever a database -change occurs on a linked MongoDB Atlas cluster. You can configure triggers on -individual collections, entire databases, and on an entire cluster. - -Unlike SQL data triggers, which run on the database server, triggers run -on a serverless compute layer that scales independently of the database -server. Triggers automatically call :ref:`Atlas Functions ` -and can forward events to external handlers through AWS EventBridge. - -Use database triggers to implement event-driven data interactions. For -example, you can automatically update information in one document when a -related document changes or send a request to an external service -whenever a new document is inserted. - -Database triggers use MongoDB :manual:`change streams ` -to watch for real-time changes in a collection. A change stream is a -series of :ref:`database events ` that each -describe an operation on a document in the collection. Your app opens a -new change stream for each database trigger created in a collection. - -.. important:: Change Stream Limitations - - There are limits on the total number of change streams you can open - on a cluster, depending on the cluster's size. Refer to :ref:`change - stream limitations ` for - more information. - - You cannot define a database trigger on a :ref:`serverless instance - ` or :ref:`{+adf-instance+} - ` because they do not support change streams. - -You control which operations cause a trigger to fire as well as what -happens when it does. For example, you can run a function whenever a -specific field of a document is updated. The function can access the -entire change event, so you always know what changed. You can also pass -the change event to :ref:`AWS EventBridge ` to handle -the event outside of Atlas. - -Triggers support :manual:`$match ` -expressions to filter change events and :manual:`$project ` -expressions to limit the data included in each event. - -.. _trigger-recursion: - -.. warning:: - - In deployment and database level triggers, it is possible to configure triggers - in a way that causes other triggers to fire, resulting in recursion. - Examples include a database-level trigger writing to a collection within the - same database, or a cluster-level logger or log forwarder writing logs to - another database in the same cluster. - -.. _create-a-database-trigger: - -Create a Database Trigger -------------------------- - -.. tabs-realm-admin-interfaces:: - - .. tab:: - :tabid: ui - - To open the database trigger configuration screen in the App Services UI, click - :guilabel:`Triggers` in the left navigation menu, click - :guilabel:`Create a Trigger`, and then select the :guilabel:`Database` tab next - to :guilabel:`Trigger Type`. 
- - Configure the trigger and then click :guilabel:`Save` at the bottom of - the page to add it to your current deployment draft. - - .. figure:: /images/db-trigger-example-config.png - :alt: Example UI that configures the trigger - - .. tab:: - :tabid: cli - - To create a database trigger with the {+cli-ref+}: - - 1. Add a database trigger :ref:`configuration file - ` to the ``triggers`` subdirectory of a - local application directory. - - 2. :ref:`Deploy ` the trigger: - - .. code-block:: shell - - {+cli-bin+} push - - .. note:: - - Atlas App Services does not enforce specific filenames for Trigger - configuration files. However, once imported, Atlas App Services will rename - each configuration file to match the name of the Trigger it defines, - e.g. ``mytrigger.json``. - -.. _database-triggers-configuration: - -Configuration -------------- - -Database triggers have the following configuration options: - -Trigger Details -~~~~~~~~~~~~~~~ - -Within the :guilabel:`Trigger Details` section, you first select the -:guilabel:`Trigger Type`. Set this value to :guilabel:`Database` for database triggers. - -Next, you select the :guilabel:`Watch Against`, based on the level of granularity you -want. Your options are: - -- :guilabel:`Collection`, when a change occurs on a specified collection -- :guilabel:`Database`, when a change occurs on any collection in a - specified database -- :guilabel:`Deployment`, when deployment changes occur on a specified - cluster. If you select the Deployment source type, the following - databases are **not watched** for changes: - - - The admin databases ``admin``, ``local``, and ``config`` - - The sync databases ``__realm_sync`` and ``__realm_sync_`` - - .. important:: - - The deployment-level source type is only available on dedicated tiers. - -Depending on which source type you are using, the additional options differ. The -following table describes these options. - -.. list-table:: - :header-rows: 1 - :widths: 15 30 - - * - Source Type - - Options - - * - :guilabel:`Collection` - - | - - - :guilabel:`Cluster Name`. The name of the MongoDB cluster that the - Trigger is associated with. - - - :guilabel:`Database Name`. The MongoDB database that contains the watched - collection. - - - :guilabel:`Collection Name`. The MongoDB collection to watch. Optional. - If you leave this option blank, the Source Type changes to "Database." - - - :guilabel:`Operation Type`. The :ref:`operation types - ` that cause the Trigger to fire. - Select the operation types you want the trigger to respond to. Options - include: - - - Insert - - Update - - Replace - - Delete - - .. note:: - - Update operations executed from MongoDB Compass or the MongoDB Atlas - Data Explorer fully replace the previous document. As a result, - update operations from these clients will generate **Replace** - change events rather than **Update** events. - - - :guilabel:`Full Document`. If enabled, **Update** change events include - the latest :manual:`majority-committed ` - version of the modified document *after* the change was applied in - the ``fullDocument`` field. - - .. note:: - - Regardless of this setting, **Insert** and **Replace** events always - include the ``fullDocument`` field. **Delete** events never include - the ``fullDocument`` field. - - - :guilabel:`Document Preimage`. When enabled, change events include a - copy of the modified document from immediately *before* the change was - applied in the ``fullDocumentBeforeChange`` field. This has - :ref:`performance considerations `. 
All change events - except for **Insert** events include the document preimage. - - * - :guilabel:`Database` - - | - - - :guilabel:`Cluster Name`. The name of the MongoDB cluster that the - Trigger is associated with. - - - :guilabel:`Database Name`. The MongoDB database to watch. Optional. - If you leave this option blank, the Source Type changes to "Deployment," - unless you are on a shared tier, in which case App Services will not - let you save the trigger. - - - :guilabel:`Operation Type`. The :ref:`operation types - ` that cause the Trigger to fire. - Select the operation types you want the trigger to respond to. - Options include: - - - Create Collection - - Modify Collection - - Rename Collection - - Drop Collection - - Shard Collection - - Reshard Collection - - Refine Collection Shard Key - - .. note:: - - Update operations executed from MongoDB Compass or the MongoDB Atlas - Data Explorer fully replace the previous document. As a result, - update operations from these clients will generate **Replace** - change events rather than **Update** events. - - - :guilabel:`Full Document`. If enabled, **Update** change events include - the latest :manual:`majority-committed ` - version of the modified document *after* the change was applied in - the ``fullDocument`` field. - - .. note:: - - Regardless of this setting, **Insert** and **Replace** events always - include the ``fullDocument`` field. **Delete** events never include - the ``fullDocument`` field. - - - :guilabel:`Document Preimage`. When enabled, change events include a - copy of the modified document from immediately *before* the change was - applied in the ``fullDocumentBeforeChange`` field. This has - :ref:`performance considerations `. All change events - except for **Insert** events include the document preimage. Disabled - for Database and Deployment sources to limit unnecessary watches on the - cluster for a new collection being created. - - * - :guilabel:`Deployment` - - | - - - :guilabel:`Cluster Name`. The name of the MongoDB cluster that the - Trigger is associated with. - - - :guilabel:`Operation Type`. The :ref:`operation types - ` that occur in the cluster that cause - the Trigger to fire. Select the operation types you want the trigger - to respond to. Options include: - - - Drop Database - - - :guilabel:`Full Document`. If enabled, **Update** change events include - the latest :manual:`majority-committed ` - version of the modified document *after* the change was applied in - the ``fullDocument`` field. - - .. note:: - - Regardless of this setting, **Insert** and **Replace** events always - include the ``fullDocument`` field. **Delete** events never include - the ``fullDocument`` field. - - - :guilabel:`Document Preimage`. When enabled, change events include a - copy of the modified document from immediately *before* the change was - applied in the ``fullDocumentBeforeChange`` field. This has - :ref:`performance considerations `. All change events - except for **Insert** events include the document preimage. Disabled - for Database and Deployment sources to limit unnecessary watches on the - cluster for a new collection being created. - -.. _pre-image-perf: - -.. tip:: Preimages and Performance Optimization - - Preimages require additional storage overhead that may affect - performance. If you're not using preimages on a collection, - you should disable preimages. To learn more, see :ref:`Disable - Collection-Level Preimages - `. 
- - Document preimages are supported on non-sharded Atlas clusters running - MongoDB 4.4+, and on sharded Atlas clusters running MongoDB 5.3 and later. - You can upgrade a non-sharded cluster (with preimages) to a - sharded cluster, as long as the cluster is running 5.3 or later. - -Trigger Configurations -~~~~~~~~~~~~~~~~~~~~~~ - -.. list-table:: - :header-rows: 1 - :widths: 15 30 - - * - Field - - Description - - * - :guilabel:`Auto-Resume Triggers` - - - .. include:: /includes/trigger-auto-resume.rst - - * - :guilabel:`Event Ordering` - - - If enabled, trigger events are processed in the order in which they occur. - If disabled, events can be processed in parallel, which is faster when - many events occur at the same time. - - .. include:: /includes/trigger-event-ordering.rst - - * - :guilabel:`Skip Events On Re-Enable` - - - Disabled by default. If enabled, any change events that occurred while this - trigger was disabled will not be processed. - -Event Type -~~~~~~~~ - -Within the :guilabel:`Event Type` section, you choose what action is taken when -the trigger fires. You can choose to run a :ref:`function ` or use -:ref:`AWS EventBridge `. - -Advanced -~~~~~~~~ - -Within the :guilabel:`Advanced` section, the following **optional** configuration -options are available: - -.. list-table:: - :header-rows: 1 - :widths: 15 30 - - * - Field - - Description - - * - :guilabel:`Project Expression` - - - .. _trigger-project-expression: - - .. include:: /includes/trigger-project-expression.rst - - * - :guilabel:`Match Expression` - - - .. include:: /includes/trigger-match-expression.rst - - * - :guilabel:`Maximum Throughput` - - - .. _triggers-maximum-throughput: - - If the linked data source is a dedicated server (M10+ Tier), - you can increase the :ref:`maximum throughput ` - beyond the default 10,000 concurrent processes. - - .. important:: - - To enable maximum throughput, you must disable Event Ordering. - - Before increasing the maximum throughput, consider whether one or more of - your triggers are calling a rate-limited external API. Increasing the - trigger rate might result in exceeding those limits. - - Increasing the throughput may also add a larger workload, affecting - overall cluster performance. - -.. _database-events: - -Change Event Types ------------------- - -.. _database-event-operation-types: - -Database change events represent individual changes in a specific -collection of your linked MongoDB Atlas cluster. - -Every database event has the same operation type and structure as the -:manual:`change event ` object that was -emitted by the underlying change stream. Change events have the -following operation types: - -.. list-table:: - :header-rows: 1 - :widths: 30 15 - - * - Operation Type - - Description - - * - **Insert Document** (All trigger types) - - Represents a new document added to the collection. - - * - **Update Document** (All trigger types) - - Represents a change to an existing document in the collection. - - * - **Delete Document** (All trigger types) - - Represents a document deleted from the collection. - - * - **Replace Document** (All trigger types) - - Represents a new document that replaced a document in the collection. - - * - **Create Collection** (Database and Deployment trigger types only) - - Represents the creation of a new collection. - - * - **Modify Collection** (Database and Deployment trigger types only) - - Represents the modification collection. 
- - * - **Rename Collection** (Database and Deployment trigger types only) - - Represents collection being renamed. - - * - **Drop Collection** (Database and Deployment trigger types only) - - Represents a collection being dropped. - - * - **Shard Collection** (Database and Deployment trigger types only) - - Represents a collection changing from unsharded to sharded. - - * - **Reshard Collection** (Database and Deployment trigger types only) - - Represents a change to a collection's sharding. - - * - **Refine Collection Shard Key** (Database and Deployment trigger types only) - - Represents a change in the shard key of a collection. - - * - **Create Indexes** (Database and Deployment trigger types only) - - Represents the creation of a new index. - - * - **Drop Indexes** (Database and Deployment trigger types only) - - Represents an index being dropped. - - * - **Drop Database** (Deployment trigger type only) - - Represents a database being dropped. - -Database change event objects have the following general form: - -.. code-block:: json - - { - _id : , - "operationType": , - "fullDocument": , - "fullDocumentBeforeChange": , - "ns": { - "db" : , - "coll" : - }, - "documentKey": { - "_id": - }, - "updateDescription": , - "clusterTime": - } - -Database Trigger Example ------------------------- - -An online store wants to notify its customers whenever one of their -orders changes location. They record each order in the ``store.orders`` -collection as a document that resembles the following: - -.. code-block:: json - - { - _id: ObjectId("59cf1860a95168b8f685e378"), - customerId: ObjectId("59cf17e1a95168b8f685e377"), - orderDate: ISODate("2018-06-26T16:20:42.313Z"), - shipDate: ISODate("2018-06-27T08:20:23.311Z"), - orderContents: [ - { qty: 1, name: "Earl Grey Tea Bags - 100ct", price: NumberDecimal("10.99") } - ], - shippingLocation: [ - { location: "Memphis", time: ISODate("2018-06-27T18:22:33.243Z") }, - ] - } - -To automate this process, the store creates a Database Trigger that -listens for **Update** change events in the ``store.orders`` collection. -When the trigger observes an **Update** event, it passes the -:ref:`change event object ` to its associated Function, -``textShippingUpdate``. The Function checks the change event for any -changes to the ``shippingLocation`` field and, if it was updated, sends -a text message to the customer with the new location of the order. - -.. tabs-realm-admin-interfaces:: - - .. tab:: - :tabid: ui - - .. figure:: /images/db-trigger-example-config-update.png - :alt: Example UI that configures the trigger - - .. tab:: - :tabid: cli - - .. code-block:: json - :caption: Trigger Configuration - - { - "type": "DATABASE", - "name": "shippingLocationUpdater", - "function_name": "textShippingUpdate", - "config": { - "service_name": "mongodb-atlas", - "database": "store", - "collection": "orders", - "operation_types": ["UPDATE"], - "unordered": false, - "full_document": true, - "match": {} - }, - "disabled": false - } - -.. code-block:: javascript - :caption: textShippingUpdate - - exports = async function (changeEvent) { - // Destructure out fields from the change stream event object - const { updateDescription, fullDocument } = changeEvent; - - // Check if the shippingLocation field was updated - const updatedFields = Object.keys(updateDescription.updatedFields); - const isNewLocation = updatedFields.some(field => - field.match(/shippingLocation/) - ); - - // If the location changed, text the customer the updated location. 
- if (isNewLocation) { - const { customerId, shippingLocation } = fullDocument; - const mongodb = context.services.get("mongodb-atlas"); - const customers = mongodb.db("store").collection("customers"); - const { location } = shippingLocation.pop(); - const customer = await customers.findOne({ _id: customerId }); - - const twilio = require('twilio')( - // Your Account SID and Auth Token from the Twilio console: - context.values.get("TwilioAccountSID"), - context.values.get("TwilioAuthToken"), - ); - - await twilio.messages.create({ - To: customer.phoneNumber, - From: context.values.get("ourPhoneNumber"), - Body: `Your order has moved! The new location is ${location}.` - }) - } - }; - -.. _suspended_triggers: -.. _resume-a-suspended-trigger: - -Suspended Triggers ------------------- - -Database Triggers may enter a suspended state in response to an event -that prevents the Trigger's change stream from continuing. Events that -can suspend a Trigger include: - -- :manual:`invalidate events ` - such as ``dropDatabase``, ``renameCollection``, or those caused by - a network disruption. - -- the **resume token** required to resume the change stream is no longer in the - cluster :manual:`oplog `. The App logs - refer to this as a ``ChangeStreamHistoryLost`` error. - -In the event of a suspended or failed trigger, Atlas App Services sends the -project owner an email alerting them of the issue. - -.. _automatically-resume-a-suspended-trigger: - -Automatically Resume a Suspended Trigger -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -You can configure a Trigger to automatically resume if the Trigger was suspended -because the resume token is no longer in the oplog. -The Trigger does not process any missed change stream events between -when the resume token is lost and when the resume process completes. - -.. tabs-realm-admin-interfaces:: - - .. tab:: - :tabid: ui - - When :ref:`creating or updating a Database Trigger ` - in the App Services UI, navigate to the configuration page of the Trigger - you want to automatically resume if suspended. - - In the :guilabel:`Advanced (Optional)` section, select :guilabel:`Auto-Resume Triggers`. - - Save and deploy the changes. - - .. tab:: - :tabid: cli - - When :ref:`creating or updating a Database Trigger ` - with the Realm CLI, create or navigate to the configuration file for the Trigger - you want to automatically resume if suspended. - - In the :ref:`Trigger's configuration file `, - include the following: - - .. code-block:: js - :caption: triggers/.json - :emphasize-lines: 5 - - { - "name": "", - "type": "DATABASE", - "config": { - "tolerate_resume_errors": true, - // ...rest of Database Trigger configuration - }, - // ...rest of Trigger general configuration - } - - Deploy the changes with the following command: - - .. code-block:: shell - - {+cli-bin+} push --remote= - -.. _manually-resume-a-suspended-trigger: - -Manually Resume a Suspended Trigger -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -When you manually resume a suspended Trigger, your App attempts to resume the Trigger -at the next change stream event after the change stream stopped. -If the resume token is no longer in the cluster oplog, the Trigger -must be started without a resume token. This means the Trigger begins -listening to new events but does not process any missed past events. - -You can adjust the oplog size to keep the resume token for more time after -a suspension by :atlas:`scaling your Atlas cluster `. 
-Maintain an oplog size a few times greater than -your cluster's peak oplog throughput (GB/hour) to reduce the risk of a -suspended trigger's resume token dropping off the oplog -before the trigger executes. -View your cluster's oplog throughput in the **Oplog GB/Hour** graph in the -:atlas:`Atlas cluster metrics `. - -You can attempt to restart a suspended Trigger from the App Services UI or by -importing an application directory with the {+cli-ref+}. - -.. tabs-realm-admin-interfaces:: - - .. tab:: - :tabid: ui - - .. procedure:: - - .. step:: Find the Suspended Trigger - - On the :guilabel:`Database Triggers` tab of the :guilabel:`Triggers` - page, find the trigger that you want to resume in the list of - triggers. App Services marks suspended triggers - with a :guilabel:`Status` of :guilabel:`Suspended`. - - .. figure:: /images/suspended-db-trigger.png - :alt: A database trigger that is marked Suspended in the UI - - - .. step:: Restart the Trigger - - Click :guilabel:`Restart` in the trigger's :guilabel:`Actions` column. - You can choose to restart the trigger with a change stream - :manual:`resume token ` or - open a new change stream. Indicate whether or not to use a resume - token and then click :guilabel:`Resume Database Trigger`. - - .. note:: Resume Tokens - - If you use a :manual:`resume token - `, App Services - attempts to resume the trigger's underlying change - stream at the event immediately following the last - change event it processed. If successful, the trigger - processes any events that occurred while it was - suspended. If you do not use a resume token, the - trigger begins listening for new events but will not - fire for any events that occurred while it was - suspended. - - .. figure:: /images/resume-database-trigger-modal.png - :alt: The resume database trigger modal in the UI - - .. tab:: - :tabid: cli - - .. procedure:: - - .. step:: Pull Your App's Latest Configuration Files - - .. code-block:: shell - - {+cli-bin+} pull --remote= - - - .. step:: Verify that the Trigger Configuration File Exists - - If you exported a new copy of your application, it should already - include an up-to-date configuration file for the suspended trigger. - You can confirm that the configuration file exists by looking - in the ``/triggers`` directory for a :ref:`trigger configuration file - ` with the same name as the trigger. - - - .. step:: Redeploy the Trigger - - After you have verified that the trigger configuration file exists, - push the configuration back to your app. App Services - automatically attempts to resume any suspended triggers included - in the deployment. - - .. code-block:: shell - - {+cli-bin+} push - -.. _last-cluster-time-processed: - -Trigger Time Reporting ----------------------- - -The list of Triggers in the Atlas App Services UI shows three timestamps: - -**Last Modified** - -This is the time the Trigger was created or most recently changed. - -**Latest Heartbeat** - -Atlas App Services keeps track of the last time a trigger was run. If the trigger -is not sending any events, the server sends a heartbeat to ensure the trigger's -resume token stays fresh. Whichever event is most recent is shown as the -:guilabel:`Latest Heartbeat`. - -**Last Cluster Time Processed** - -Atlas App Services also keeps track of the :guilabel:`Last Cluster Time Processed`, -which is the last time the change stream backing a Trigger emitted an event. It -will be older than the :guilabel:`Latest Heartbeat` if there have been no events -since the most recent heartbeat. 
- - -Performance Optimization ------------------------- - -.. _database-triggers-disable-event-ordering: - -Disable Event Ordering for Burst Operations -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Consider disabling event ordering if your trigger fires on a collection that -receives short bursts of events (e.g. inserting data as part of a daily batch -job). - -Ordered Triggers wait to execute a Function for a particular event until -the Functions of previous events have finished executing. As a -consequence, ordered Triggers are effectively rate-limited by the run -time of each sequential Trigger function. This may cause a significant -delay between the database event appearing on the change stream and the -Trigger firing. In certain extreme cases, database events might fall off -the oplog before a long-running ordered trigger processes them. - -Unordered Triggers execute functions in parallel if possible, which can be -significantly faster (depending on your use case) but does not guarantee that -multiple executions of a Trigger Function occur in event order. - -.. _database-triggers-disable-collection-level-preimages: - -Disable Collection-Level Preimages -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Document preimages require your cluster to record additional data about -each operation on a collection. Once you enable preimages for any -trigger on a collection, your cluster stores preimages for every -operation on the collection. - -The additional storage space and compute overhead may degrade trigger -performance depending on your cluster configuration. - -To avoid the storage and compute overhead of preimages, you must disable -preimages for the entire underlying MongoDB collection. This is a -separate setting from any individual trigger's preimage setting. - -If you disable collection-level preimages, then no active trigger on -that collection can use preimages. However, if you delete or disable all -preimage triggers on a collection, then you can also disable -collection-level preimages. - -To learn how, see :ref:`Disable Preimages for a Collection -`. - -.. _database-triggers-match-expression: - -Use Match Expressions to Limit Trigger Invocations -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -You can limit the number of Trigger invocations by specifying a :manual:`$match -` expression in the :guilabel:`Match -Expression` field. App Services evaluates the match expression against the -change event document and invokes the Trigger only if the expression evaluates -to true for the given change event. - -The match expression is a JSON document that specifies the query conditions -using the :manual:`MongoDB read query syntax `. - -We recommend only using match expressions when the volume of Trigger events -measurably becomes a performance issue. Until then, receive all events and -handle them individually in the Trigger function code. - -The exact shape of the change event document depends on the event that caused -the trigger to fire. For details, see the reference for each event type: - -- :manual:`insert ` -- :manual:`update ` -- :manual:`replace ` -- :manual:`delete ` -- :manual:`create ` -- :manual:`modify ` -- :manual:`rename ` -- :manual:`drop ` -- :manual:`shardCollection ` -- :manual:`reshardCollection ` -- :manual:`refineCollectionShardKey ` -- :manual:`dropDatabase ` - -.. example:: - - The following match expression allows the Trigger to fire - only if the change event object specifies that the ``status`` field in - a document changed. 
- - ``updateDescription`` is a field of the :manual:`update Event object `. - - .. code-block:: javascript - - { - "updateDescription.updatedFields.status": { - "$exists": true - } - } - - The following match expression allows the Trigger to fire only when a - document's ``needsTriggerResponse`` field is ``true``. The ``fullDocument`` - field of the :manual:`insert `, - :manual:`update `, and :manual:`replace - ` events represents a document after the - given operation. To receive the ``fullDocument`` field, you must enable - :guilabel:`Full Document` in your Trigger configuration. - - .. code-block:: javascript - - { - "fullDocument.needsTriggerResponse": true - } - - -Testing Match Expressions -````````````````````````` - -The following procedure shows one way to test whether your match expression -works as expected: - -1. Download `the MongoDB Shell (mongosh) - `__ and use it to - :atlas:`connect to your cluster `. -#. Replacing ``DB_NAME`` with your database name, ``COLLECTION_NAME`` with your - collection name, and ``YOUR_MATCH_EXPRESSION`` with the match expression you - want to test, paste the following into mongosh to open a change stream on an - existing collection: - - .. code-block:: js - - const watchCursor = db.getSiblingDB(DB_NAME).COLLECTION_NAME.watch([{$match: YOUR_MATCH_EXPRESSION}]) - while (!watchCursor.isClosed()) { - if (watchCursor.hasNext()) { - print(tojson(watchCursor.next())); - } - } - -#. In another terminal window, use mongosh to make changes to some test - documents in the collection. -#. Observe what the change stream filters in and out. - -.. _database-triggers-project-expression: - -Use Project Expressions to Reduce Input Data Size -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -In the :guilabel:`Project Expression` field, -limit the number of fields that the Trigger processes by using a -:manual:`$project ` expression. - -.. note:: Project is inclusive only - - When using Triggers, a projection expression is inclusive *only*. - Project does not support mixing inclusions and exclusions. - The project expression must be inclusive because Triggers require you - to include ``operationType``. - - If you want to exclude a single field, the projection expression must - include every field *except* the one you want to exclude. - You can only explicitly exclude ``_id``, which is included by default. - -.. example:: - - A trigger is configured with the following :guilabel:`Project Expression`: - - .. code-block:: json - - { - "_id": 0, - "operationType": 1, - "updateDescription.updatedFields.status": 1 - } - - The change event object that App Services passes to the trigger function - only includes the fields specified in the projection, as in the following - example: - - .. code-block:: json - - { - "operationType": "update", - "updateDescription": { - "updatedFields": { - "status": "InProgress" - } - } - }
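If you define Triggers in configuration files rather than in the UI, the match and project expressions shown above live in the Trigger's ``config`` block. The following is a minimal sketch for illustration only; the field names and values here (including ``match``, ``project``, and the placeholder database, collection, and function names) are assumptions, so compare them against your app's exported ``/triggers`` configuration files before relying on them:

.. code-block:: json

   {
     "name": "onOrderStatusChange",
     "type": "DATABASE",
     "function_name": "handleStatusChange",
     "config": {
       "service_name": "mongodb-atlas",
       "database": "store",
       "collection": "orders",
       "operation_types": ["UPDATE"],
       "match": {
         "updateDescription.updatedFields.status": { "$exists": true }
       },
       "project": {
         "_id": 0,
         "operationType": 1,
         "updateDescription.updatedFields.status": 1
       }
     },
     "disabled": false
   }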
- -.. include:: /includes/triggers-examples-github.rst diff --git a/source/triggers/disable.txt b/source/triggers/disable.txt deleted file mode 100644 index d3f29d339..000000000 --- a/source/triggers/disable.txt +++ /dev/null @@ -1,138 +0,0 @@ -.. _disable-a-trigger: - -================= -Disable a Trigger -================= - -.. default-domain:: mongodb - -.. contents:: On this page - :local: - :backlinks: none - :depth: 2 - :class: singlecol - -Overview --------- -Triggers may enter a :guilabel:`suspended` state in response to -an event that prevents the Trigger's change stream from continuing, such -as a network disruption or change to the underlying cluster. When a -Trigger enters a suspended state, it does not receive change events and will not -fire. - -.. note:: - - In the event of a suspended or failed trigger, Atlas App Services sends the - project owner an email alerting them of the issue. - -You can disable a Trigger from the Atlas App Services UI or by -importing an application directory with the {+cli-ref+}. - -.. tabs-realm-admin-interfaces:: - - .. tab:: - :tabid: ui - - .. procedure:: - - .. step:: Find the Trigger - - On the :guilabel:`Database Triggers` tab of the :guilabel:`Triggers` - page, find the trigger that you want to disable in the list of - Triggers. - - .. figure:: /images/suspended-db-trigger.png - :alt: A list of Triggers in an App in the App Services UI - - - .. step:: Disable the Trigger - - Switch the :guilabel:`Enabled` toggle to the "off" setting. - - .. figure:: /images/auth-trigger-example-config.png - :alt: The "Edit Trigger" screen in the App Services UI - :width: 750px - :lightbox: - - - .. step:: Deploy Your Changes - - If Development Mode is not enabled, press the - :guilabel:`review draft & deploy` button to release your changes. - - - .. tab:: - :tabid: cli - - .. procedure:: - - .. step:: Pull Your App's Latest Configuration Files - - .. code-block:: shell - - {+cli-bin+} pull --remote= - - - .. step:: Verify that the Trigger Configuration File Exists - - If you exported a new copy of your application, it should already include an - up-to-date configuration file for the trigger you want to disable. You can confirm that - the configuration file exists by looking in the ``/triggers`` directory for a - :ref:`trigger configuration file ` with the same name - as the trigger. - - - .. step:: Disable the Trigger - - After you have verified that the trigger configuration file exists, add - a field named ``"disabled"`` with the value ``true`` to the top level - of the trigger JSON definition: - - .. code-block:: json - :emphasize-lines: 9 - - { - "id": "6142146e2f052a39d38e1605", - "name": "steve", - "type": "SCHEDULED", - "config": { - "schedule": "*/1 * * * *" - }, - "function_name": "myFunc", - "disabled": true - } - - - .. step:: Deploy Your Changes - - Finally, push the configuration back to your app: - - .. code-block:: shell - - {+cli-bin+} push - -Restoring from a Snapshot -------------------------- - -Consider the following scenario: - -1. A database trigger is disabled or suspended. - -#. New documents are added while the trigger is disabled. - -#. The database is restored from a snapshot to a time prior to the new documents - being added. - -#. The database trigger is restarted. - -In this case, the trigger picks up all of the newly-added documents and fires -for each document. It will not fire again for events that have already been -processed. - -.. note:: - - If a previously-enabled database trigger is running during snapshot restoration, - you will see an error in the Edit Trigger section of the Atlas UI because the - trigger cannot connect to the Atlas cluster during the restore process. Once - snapshot restoration completes, the error disappears and the trigger continues - to execute normally. diff --git a/source/triggers/scheduled-triggers.txt b/source/triggers/scheduled-triggers.txt deleted file mode 100644 index ae258a152..000000000 --- a/source/triggers/scheduled-triggers.txt +++ /dev/null @@ -1,472 +0,0 @@ -.. _scheduled-trigger: -.. _scheduled-triggers: - -================== -Scheduled Triggers -================== - -.. 
contents:: On this page - :local: - :backlinks: none - :depth: 2 - :class: singlecol - -Scheduled triggers allow you to execute server-side logic on a -:ref:`regular schedule that you define `. -You can use scheduled triggers to do work that happens on a periodic -basis, such as updating a document every minute, generating a nightly -report, or sending an automated weekly email newsletter. - -.. _create-a-scheduled-trigger: - -Create a Scheduled Trigger --------------------------- - -.. tabs-realm-admin-interfaces:: - - .. tab:: - :tabid: ui - - To create a scheduled Trigger in the Atlas App Services UI: - - 1. Click :guilabel:`Triggers` under :guilabel:`Build` in the - left navigation menu. - - 2. Click :guilabel:`Create a Trigger` to open the Trigger configuration page. - - 3. Select :guilabel:`Scheduled` for the :guilabel:`Trigger Type`. - - .. figure:: /images/trigger-example-scheduled.png - :figwidth: 750px - :alt: Creating a Trigger in the App Services UI. - - .. tab:: - :tabid: cli - - To create a scheduled Trigger with the {+cli-ref+}: - - 1. Add a scheduled Trigger :ref:`configuration file - ` to the ``triggers`` subdirectory of a local - application directory. - - .. note:: - - You cannot create a Trigger that runs on a :guilabel:`Basic` - schedule using {+cli+}. All imported scheduled Trigger - configurations must specify a :ref:`CRON expression - `. - - Scheduled Trigger configuration files have the following form: - - .. code-block:: none - :caption: /triggers/.json - - { - "type": "SCHEDULED", - "name": "", - "function_name": "", - "config": { - "schedule": "" - }, - "disabled": - } - - 2. :ref:`Deploy ` the trigger: - - .. code-block:: shell - - {+cli-bin+} push - -.. _scheduled-triggers-configuration: - -Configuration -------------- - -Scheduled Triggers have the following configuration options: - -.. list-table:: - :header-rows: 1 - :widths: 15 30 - - * - Field - - Description - - * - | :guilabel:`Trigger Type` - | ``type: `` - - - Select :guilabel:`Scheduled`. - - * - | :guilabel:`Schedule Type` - | ``config.schedule: `` - - - Required. You can select :guilabel:`Basic` or :guilabel:`Advanced`. A Basic - schedule executes the Trigger periodically based on the interval you set, - such as "every five minutes" or "every Monday". - - An Advanced schedule runs the Trigger based on the custom - :ref:`CRON expression ` that you define. - - * - | :guilabel:`Skip Events on Re-Enable` - | ``skip_catchup_event: `` - - - Disabled by default. If enabled, any change events that occurred while - this trigger was disabled will not be processed. - - * - | :guilabel:`Event Type` - | ``function_name: `` - - - Within the :guilabel:`Event Type` section, you choose what action is taken when - the trigger fires. You can choose to run a :ref:`function ` or use - :ref:`AWS EventBridge `. - - .. note:: - - A Scheduled Trigger does not pass any arguments to its linked - Function. - - * - | :guilabel:`Trigger Name` - | ``name: `` - - - The name of the trigger. - -.. _CRON-expressions: - -CRON Expressions ----------------- - -CRON expressions are user-defined strings that use standard -:wikipedia:`cron ` job syntax to define when a :doc:`scheduled -trigger ` should execute. -App Services executes Trigger CRON expressions based on :wikipedia:`UTC time `. -Whenever all of the fields in a CRON expression match the current date and time, -App Services fires the trigger associated with the expression. 
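For example, under the field rules described in the next section, the following expression would fire a Trigger at 2:00 PM UTC every Monday through Friday:

.. code-block:: text

   0 14 * * 1-5

Reading the fields left to right: minute ``0``, hour ``14``, any day of the month, any month, and weekdays ``1`` through ``5`` (Monday through Friday).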
- -Expression Syntax ------------------ - -Format -~~~~~~ - -CRON expressions are strings composed of five space-delimited fields. -Each field defines a granular portion of the schedule on which its -associated trigger executes: - -.. code-block:: text - - * * * * * - │ │ │ │ └── weekday...........[0 (SUN) - 6 (SAT)] - │ │ │ └──── month.............[1 (JAN) - 12 (DEC)] - │ │ └────── dayOfMonth........[1 - 31] - │ └──────── hour..............[0 - 23] - └────────── minute............[0 - 59] - -.. list-table:: - :header-rows: 1 - :widths: 15 25 60 - - * - Field - - Valid Values - - Description - - * - ``minute`` - - [0 - 59] - - Represents one or more minutes within an hour. - - .. example:: - - If the ``minute`` field of a CRON expression has a value of - ``10``, the field matches any time ten minutes after the hour - (e.g. ``9:10 AM``). - - * - ``hour`` - - [0 - 23] - - Represents one or more hours within a day on a 24-hour clock. - - .. example:: - - If the ``hour`` field of a CRON expression has a value of - ``15``, the field matches any time between ``3:00 PM`` and - ``3:59 PM``. - - * - ``dayOfMonth`` - - [1 - 31] - - Represents one or more days within a month. - - .. example:: - - If the ``dayOfMonth`` field of a CRON expression has a value - of ``3``, the field matches any time on the third day of the - month. - - * - ``month`` - - | ``1 (JAN)`` ``7 (JUL)`` - | ``2 (FEB)`` ``8 (AUG)`` - | ``3 (MAR)`` ``9 (SEP)`` - | ``4 (APR)`` ``10 (OCT)`` - | ``5 (MAY)`` ``11 (NOV)`` - | ``6 (JUN)`` ``12 (DEC)`` - - Represents one or more months within a year. - - A month can be represented by either a number (e.g. ``2`` for - February) or a three-letter string (e.g. ``APR`` for April). - - .. example:: - - If the ``month`` field of a CRON expression has a value of - ``9``, the field matches any time in the month of September. - - * - ``weekday`` - - | ``0 (SUN)`` - | ``1 (MON)`` - | ``2 (TUE)`` - | ``3 (WED)`` - | ``4 (THU)`` - | ``5 (FRI)`` - | ``6 (SAT)`` - - Represents one or more days within a week. - - A weekday can be represented by either a number (e.g. ``2`` for a - Tuesday) or a three-letter string (e.g. ``THU`` for a Thursday). - - .. example:: - - If the ``weekday`` field of a CRON expression has a value of - ``3``, the field matches any time on a Wednesday. - -Field Values -~~~~~~~~~~~~ - -Each field in a CRON expression can contain either a specific value or -an expression that evaluates to a set of values. The following table -describes valid field values and expressions: - -.. list-table:: - :header-rows: 1 - :widths: 50 50 - - * - Expression Type - - Description - - * - | **All Values** - | ``(*)`` - - Matches all possible field values. - - Available in all expression fields. - - .. example:: - - The following CRON expression schedules a trigger to execute - once every minute of every day: - - .. code-block:: text - - * * * * * - - * - | **Specific Value** - | ``()`` - - Matches a specific field value. For fields other than ``weekday`` - and ``month`` this value will always be an integer. A ``weekday`` - or ``month`` field can be either an integer or a three-letter - string (e.g. ``TUE`` or ``AUG``). - - Available in all expression fields. - - .. example:: - - The following CRON expression schedules a trigger to execute - once every day at 11:00 AM UTC: - - .. code-block:: text - - 0 11 * * * - - * - | **List of Values** - | ``(,,...)`` - - - Matches a list of two or more field expressions or specific - values. - - Available in all expression fields. - - .. 
example:: - - The following CRON expression schedules a trigger to execute - once every day in January, March, and July at 11:00 AM UTC: - - .. code-block:: text - - 0 11 * 1,3,7 * - - * - | **Range of Values** - | ``(-)`` - - Matches a continuous range of field values between and including - two specific field values. - - Available in all expression fields. - - .. example:: - - The following CRON expression schedules a trigger to execute - once every day from January 1st through the end of April at - 11:00 AM UTC: - - .. code-block:: text - - 0 11 * 1-4 * - - * - | **Modular Time Step** - | ``(/)`` - - Matches any time where the step value evenly divides the - field value with no remainder (i.e. when ``Value % Step == 0``). - - Available in the ``minute`` and ``hour`` expression fields. - - .. example:: - - The following CRON expression schedules a trigger to execute - on the 0th, 25th, and 50th minutes of every hour: - - .. code-block:: text - - */25 * * * * - -.. _scheduled-trigger-example: - -Example -------- - -An online store wants to generate a daily report of all sales from the -previous day. They record all orders in the ``store.orders`` collection -as documents that resemble the following: - -.. code-block:: json - - { - _id: ObjectId("59cf1860a95168b8f685e378"), - customerId: ObjectId("59cf17e1a95168b8f685e377"), - orderDate: ISODate("2018-06-26T16:20:42.313Z"), - shipDate: ISODate("2018-06-27T08:20:23.311Z"), - orderContents: [ - { qty: 1, name: "Earl Grey Tea Bags - 100ct", price: Decimal128("10.99") } - ], - shippingLocation: [ - { location: "Memphis", time: ISODate("2018-06-27T18:22:33.243Z") }, - ] - } - -To generate the daily report, the store creates a scheduled Trigger -that fires every day at ``7:00 AM UTC``. When the -Trigger fires, it calls its linked Atlas Function, -``generateDailyReport``, which runs an aggregation -query on the ``store.orders`` collection to generate the report. The -Function then stores the result of the aggregation in the -``store.reports`` collection. - -.. tabs-realm-admin-interfaces:: - - .. tab:: - :tabid: ui - - .. figure:: /images/trigger-example-scheduled-advanced.png - :alt: Example UI that configures the trigger - - .. tab:: - :tabid: cli - - .. code-block:: json - :caption: Trigger Configuration - - { - "type": "SCHEDULED", - "name": "reportDailyOrders", - "function_name": "generateDailyReport", - "config": { - "schedule": "0 7 * * *" - }, - "disabled": false - } - -.. 
code-block:: javascript - :caption: generateDailyReport - - exports = function() { - // Instantiate MongoDB collection handles - const mongodb = context.services.get("mongodb-atlas"); - const orders = mongodb.db("store").collection("orders"); - const reports = mongodb.db("store").collection("reports"); - - // Generate the daily report - return orders.aggregate([ - // Only report on orders placed since yesterday morning - { $match: { - orderDate: { - $gte: makeYesterdayMorningDate(), - $lt: makeThisMorningDate() - } - } }, - // Add a boolean field that indicates if the order has already shipped - { $addFields: { - orderHasShipped: { - $cond: { - if: "$shipDate", // if shipDate field exists - then: 1, - else: 0 - } - } - } }, - // Unwind individual items within each order - { $unwind: { - path: "$orderContents" - } }, - // Calculate summary metrics for yesterday's orders - { $group: { - _id: "$orderDate", - orderIds: { $addToSet: "$_id" }, - numSKUsOrdered: { $sum: 1 }, - numItemsOrdered: { $sum: "$orderContents.qty" }, - totalSales: { $sum: "$orderContents.price" }, - averageOrderSales: { $avg: "$orderContents.price" }, - numItemsShipped: { $sum: "$orderHasShipped" }, - } }, - // Add the total number of orders placed - { $addFields: { - numOrders: { $size: "$orderIds" } - } } - ]).next() - .then(dailyReport => { - reports.insertOne(dailyReport); - }) - .catch(err => console.error("Failed to generate report:", err)); - }; - - function makeThisMorningDate() { - return setTimeToMorning(new Date()); - } - - function makeYesterdayMorningDate() { - const thisMorning = makeThisMorningDate(); - const yesterdayMorning = new Date(thisMorning); - yesterdayMorning.setDate(thisMorning.getDate() - 1); - return yesterdayMorning; - } - - function setTimeToMorning(date) { - date.setHours(7); - date.setMinutes(0); - date.setSeconds(0); - date.setMilliseconds(0); - return date; - } - - -Performance Optimization ------------------------- - -Use the Query API with a :manual:`$match` -expression to reduce the number of documents your Function looks at. -This helps your Function improve performance and avoid reaching -:ref:`Function memory limits `. - -:ref:`Refer to the Example section for a Scheduled Trigger using a $match expression. ` - -.. include:: /includes/triggers-examples-github.rst diff --git a/source/tutorial/triggers-fts.txt b/source/tutorial/triggers-fts.txt index b886ec75a..3519461ba 100644 --- a/source/tutorial/triggers-fts.txt +++ b/source/tutorial/triggers-fts.txt @@ -44,7 +44,7 @@ The feature includes: - Lets you enter the terms to be alerted on. - Stores the specified terms in Atlas. -- A :ref:`Database Trigger ` with a custom +- A `Database trigger `__ with a custom :ref:`Atlas Function ` that: - Alerts you when a new task contains a term you've entered. @@ -234,7 +234,7 @@ alert terms. Triggers can execute application and database logic in response to change event. Each trigger links to an :ref:`Atlas Function ` that defines the trigger's behavior. -In this section, you create a :ref:`database trigger ` +In this section, you create a `database trigger `__ that runs whenever a user creates a new task. In the trigger's function, you define: diff --git a/source/tutorials.txt b/source/tutorials.txt index 1e759233e..e6f515c90 100644 --- a/source/tutorials.txt +++ b/source/tutorials.txt @@ -49,7 +49,7 @@ customize.
The following template apps are available: - Todo list mobile apps written with Atlas Device SDK that sync data with App Services using Device Sync -- An Event-driven :doc:`Database Trigger ` +- An Event-driven `Database Trigger `__ template that updates a view in a separate collection. To learn more: