18 changes: 9 additions & 9 deletions .github/workflows/ci.yml
Original file line number Diff line number Diff line change
@@ -12,15 +12,15 @@ jobs:
lint:
runs-on: ubuntu-24.04
steps:
- uses: percona-platform/checkout@v2
- name: Use Node.js 16.x
uses: percona-platform/setup-node@v2
with:
node-version: 16.x
- name: Install Dependencies
run: npm ci
- name: Run lint:tests
run: npm run lint:tests
- uses: percona-platform/checkout@v2
- name: Use Node.js 22.x
uses: percona-platform/setup-node@v2
with:
node-version: 22.x
- name: Install Dependencies
run: npm ci
- name: Run lint:tests
run: npm run lint:tests

e2e_fb_tests:
name: e2e FB tests
1 change: 1 addition & 0 deletions .husky/.gitignore
@@ -0,0 +1 @@
_
2 changes: 2 additions & 0 deletions .husky/pre-commit
@@ -0,0 +1,2 @@
npm run build:types
npm run lint:all
8 changes: 8 additions & 0 deletions .prettierrc
@@ -0,0 +1,8 @@
{
"semi": true,
"trailingComma": "all",
"bracketSpacing": true,
"printWidth": 130,
"arrowParens": "always",
"singleQuote": true
}
188 changes: 101 additions & 87 deletions CONTRIBUTING.md
@@ -1,88 +1,102 @@
# Contributing

### Project Licenses

- All modules use [Apache License v2.0](LICENSE.md).

## Coding Conventions

### Naming Conventions

* **Acronyms**
Whenever an acronym is included as part of a type name or method name, keep the first
letter of the acronym uppercase and use lowercase for the rest of the acronym. Otherwise,
it becomes potentially very difficult to read or reason about the element without
reading documentation (if documentation even exists).

Consider, for example, a use case that needs to support an HTTP URL. Calling the method
`getHTTPURL()` is absolutely horrible in terms of usability, whereas `getHttpUrl()` is
great. The same applies to the types `HTTPURLProvider` vs `HttpUrlProvider`.
Whenever an acronym is included as part of a field name or parameter name:
* If the acronym comes at the start of the field or parameter name, use lowercase for the entire acronym, e.g. `url`
* Otherwise, keep the first letter of the acronym uppercase and use lowercase for the rest of the acronym, e.g. `baseUrl`


* **Methods.**
Methods should be named as actions in camelCase (`changeSorting`, `changeGrouping`, etc.)
* Use "change" instead of "apply" in method names.
* Add the "Locator" postfix to each method that returns a locator.


* **Assertion methods.**
Assertion methods should start with `verify`. This makes the code more readable and assertions easier to find.


* **Test Files.**
Test files should be named in camelCase and end with `_test`. The `_test` ending is mandatory. TBD - Roman
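The naming rules above can be sketched in plain JavaScript; every name here is illustrative, not taken from the actual suite:

```javascript
// Illustrative sketch of the naming conventions (names are invented).

class HttpUrlProvider {           // acronym in a type name: first letter uppercase, rest lowercase
  constructor(baseUrl) {
    this.baseUrl = baseUrl;       // acronym mid-name: `baseUrl`, not `baseURL`
  }

  // method named as an action, in camelCase; "change", not "apply"
  changeSorting(column) {
    this.sortColumn = column;
  }

  // returns a locator, so the method name gets the "Locator" postfix
  sortButtonLocator() {
    return `button[data-qa="sort-${this.sortColumn}"]`;
  }

  // assertion helpers start with "verify"
  verifyBaseUrl(expected) {
    if (this.baseUrl !== expected) {
      throw new Error(`expected ${expected}, got ${this.baseUrl}`);
    }
  }
}
```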

### Locators

* **Locators outside of a test.**
It is bad practice to use hard-coded locators inside a test. All locators should live inside a [Page Object](https://codecept.io/pageobjects/)


* **Try to use stable locators.**
Ideally, every interactive element should have a dedicated attribute (a `data-qa` attribute), but `id`, class name, and text are also used frequently. If XPath is unavoidable, keep it short.


* **Locators preference: locate() > CSS > XPath**
Use the `locate()` builder as the first choice, then CSS. Use XPath only as a last resort.
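A minimal page-object sketch of the locator rules above; the page, element names, and `data-qa` values are all hypothetical. In real CodeceptJS code the `locate()` builder would be the first choice where a plain CSS string is not enough:

```javascript
// Hypothetical page object: locators live here, never inside tests.
const qanFiltersPage = {
  elements: {
    filterInput: 'input[data-qa="filters-search"]', // preferred: dedicated data-qa attribute
    applyButton: '#apply-filters',                  // fallback: id
    // XPath only as a last resort, and kept short:
    resultRow: '//tr[contains(@class, "filter-row")]',
  },

  // a locator-returning method gets the "Locator" postfix
  filterCheckboxLocator(name) {
    return `input[data-qa="filter-checkbox-${name}"]`;
  },
};

module.exports = qanFiltersPage;
```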

### Assertions

* **Assertions can be used directly in a test.**
CodeceptJS provides simple assertion methods that can be used directly inside a test (`I.seeElement`, `I.dontSeeElement`, `I.seeInCurrentUrl`, etc.)


* **Assertions (non-CodeceptJS)**
should be used directly in a test only when they cannot be placed anywhere else
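As a sketch of how a `verify` method can wrap the built-in CodeceptJS assertions (the page, element, and method names are invented; in a real suite `I` is the CodeceptJS actor, passed in here explicitly only so the sketch is self-contained):

```javascript
// Hypothetical page-object assertion wrapping a CodeceptJS built-in check.
const dashboardPage = {
  elements: {
    panelTitle: 'h2[data-qa="panel-title"]',
  },

  // assertion method: starts with "verify", delegates to the actor
  verifyPanelVisible(I) {
    I.seeElement(this.elements.panelTitle); // CodeceptJS built-in assertion
  },
};

module.exports = dashboardPage;
```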

### Test Data

* **Use a Data Provider where applicable.**
To reduce the amount of code and improve maintainability, use a Data Provider whenever the same scenario runs with different data, and add a comment explaining why.
(See [CodeceptJS Data](https://codecept.io/advanced/#data-driven-tests); for example, a login test with correct and incorrect credentials.)


* **Declaration of test variables** should be done at the top of the test
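Conceptually, a data provider separates the scenario from its inputs; in CodeceptJS this is done with `DataTable` and `Data(...).Scenario(...)` (see the link above). The plain-JavaScript sketch below only illustrates the idea, with invented credentials:

```javascript
// Data-provider idea: one scenario, many data rows (values are made up).
const loginCases = [
  { login: 'admin', password: 'correct-password', shouldSucceed: true },
  { login: 'admin', password: 'wrong-password', shouldSucceed: false },
];

// One scenario body is reused for every row instead of a copy per case.
function loginScenarioTitle({ login, shouldSucceed }) {
  return `login as ${login} ${shouldSucceed ? 'succeeds' : 'fails'}`;
}

const titles = loginCases.map(loginScenarioTitle);
```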

### Test Files

* **One feature per file.**
This helps CodeceptJS split tests across workers, makes parallel execution more effective, and improves maintainability of the automation suite, e.g. `qanPagination_test.js`, `qanFilters_test.js`, `qanDetails_test.js`. (Multiple features may live in one file only when some tests share common Before steps.)

### Scenario

* **Scenario title should contain the Test Case ID.**
To make searching for a test much easier, include the original Test Case ID from TMT.


* **Scenario title template.**
{TEST_CASE_ID} Title {annotation}

### Annotations

* **Add a test to a Group if needed.**
Add annotations at the end of the test title, e.g. "Open Remote Instance Page and Add MySQL instances @pmm-pre-update".
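The title and annotation rules above can be shown with a hypothetical helper (the helper and the test case ID are illustrative only, not part of the suite):

```javascript
// Builds a scenario title following the template: {TEST_CASE_ID} Title {annotation}
function buildScenarioTitle(testCaseId, title, ...tags) {
  return [testCaseId, title, ...tags].join(' ');
}

// e.g. buildScenarioTitle('PMM-T123', 'Open Remote Instance page', '@pmm-pre-update')
```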
# Contributing to PMM UI Tests

Thank you for your interest in contributing to the Percona Monitoring and Management (PMM) UI automated tests! This document outlines the guidelines, conventions, and best practices for contributing to this repository.

---

## Table of Contents
- [Project Overview](#project-overview)
- [How to Contribute](#how-to-contribute)
- [Code Style & Patterns](#code-style--patterns)
- [Test Organization](#test-organization)
- [Environment & Setup](#environment--setup)
- [Pull Requests](#pull-requests)
- [Reporting Issues](#reporting-issues)

---

## Project Overview
- This repository contains end-to-end automated tests for PMM UI using CodeceptJS and Playwright.
- Major directories:
- `tests/` — CodeceptJS test suites, helpers, and page objects
- `playwright-tests/` — Playwright test suites and configs (DEPRECATED)
- `cli/` — CLI test automation
- `docker-compose*.yml` — PMM server and test environment orchestration

---

## How to Contribute
1. **Fork the repository** and create your branch from `v3`.
2. **Install dependencies** using `npm ci` in the root and any subproject you need (`playwright-tests/`, `cli/`).
3. **Follow the code style and patterns** described below.
4. **Test your changes locally** using the recommended script or manual steps.
5. **Lint your code**: Linting is enforced via git hooks. Fix all lint errors before committing.
6. **Open a Pull Request** with a clear description of your changes and reference any related issues.

---

## Code Style & Patterns
- **Tags:** Use tags to group and select tests. Add new tags if needed, and document them in the README.
- **Page Object Pattern:** Encapsulate UI logic in page objects (`tests/PageObjects/`, `tests/pages`, `tests/ia/pages/`).
- **APIs:** Place API automation in `tests/pages/api`, `tests/ia/pages/api/`, etc.
- **Helpers:** Place helpers and setup logic in `tests/helper`, etc.
- **Data-Driven Tests:** Use CodeceptJS DataTable or Playwright test annotations for parameterized tests.
- **Naming:**
- **Acronyms:** Use `HttpUrl` not `HTTPURL` or `getHTTPURL()`. For fields, use `url` or `baseUrl` as appropriate.
- **Methods:** Use camelCase, name as actions (e.g., `changeSorting`). Use `change` instead of `apply`. Add `Locator` postfix for locator-returning methods.
- **Assertion methods:** Start with `verify` for readability and searchability.
- **Test files:** Use camelCase and end with `_test` (e.g., `qanPagination_test.js`).
- **Assertions:** Use clear, explicit assertions. Prefer built-in assertion libraries. Use `I.seeElement`, `I.dontSeeElement`, etc. for CodeceptJS.
- **Locators:**
- Avoid hardcoded locators in tests; use Page Objects.
- Prefer stable selectors (e.g., `data-qa` attributes). Use `locate()` > CSS > XPath.
- **Test Data:**
- Use Data Providers for repeated scenarios with different data. Add a comment explaining why.
- Declare test variables at the top of the test.
- **Scenario Titles:**
- Include Test Case ID for traceability (e.g., `{TEST_CASE_ID} Title {annotation}`).
- Add group/tag annotations at the end of the title (e.g., `@pmm-pre-update`).
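Putting the conventions above together, a test-file skeleton might look like the sketch below. In a real suite `Feature`, `Scenario`, and `I` are globals provided by CodeceptJS; the one-line stubs here exist only so the sketch is self-contained, and the test case ID is illustrative:

```javascript
// Stubs standing in for the CodeceptJS globals (illustration only).
const scenarios = [];
const Feature = (title) => ({ title });
const Scenario = (title, fn) => scenarios.push({ title, fn });

Feature('QAN Filters'); // one feature per file

// Title carries the Test Case ID and a trailing tag annotation.
Scenario('PMM-T123 Apply a filter and verify the results @qan', async ({ I }) => {
  // real test body: use page objects and I.* assertions here
});
```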

---

## Test Organization
- Organize tests by feature and scenario.
- Place shared logic in helpers or page objects.
- Use tags for grouping and selective execution.
- Add new test files in the appropriate directory and update the README if you introduce new tags or workflows.
- One feature per file is preferred for parallelization and maintainability.

---

## Environment & Setup
- Use Docker Compose and scripts in `testdata/` for environment orchestration.
- PMM server is required for most tests; start it locally via Docker Compose.
- For backup management and DB tests, use the setup scripts in `testdata/backup-management/`.
- Use a `.env` file for environment variables (e.g., `PMM_UI_URL`).
- See the main [README.md](README.md) for detailed setup and execution instructions.
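For instance, a minimal `.env` might contain just the variable named above (the URL is a placeholder; add any other variables your tests need in the same way):

```
PMM_UI_URL=http://localhost/
```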

---

## Pull Requests
- Ensure your branch is up to date with the `v3` branch.
- Run all tests and ensure they pass locally before opening a PR.
- Address all lint errors and warnings.
- Provide a clear, descriptive PR title and body.
- If your change affects test execution or environment setup, update the documentation accordingly.
- Do not update Playwright or MongoDB dependencies without addressing breaking changes and documenting them in the PR.

---

## Reporting Issues
- Use GitHub Issues to report bugs, request features, or suggest improvements.
- Provide as much detail as possible, including steps to reproduce, logs, and environment information.

---

## Feedback
If any section of these guidelines is unclear or missing, please open an issue or PR to help us improve the documentation.

---

Happy testing!