[docs]: Added More Glossary Terms to Testing Glossary Page #567

Open · wants to merge 3 commits into main
2 changes: 0 additions & 2 deletions docusaurus.config.js
@@ -106,7 +106,6 @@ module.exports = {
darkTheme: prismThemes.dracula,
additionalLanguages: ["java", "ruby", "php", "bash"],
},
// hideableSidebar: true,
navbar: {
hideOnScroll: false,
logo: {
@@ -257,7 +256,6 @@ module.exports = {
}
node.value = "// @ts-nocheck\n" + node.value.trim();
}

visit(tree, "code", visitor);
},
{},
19 changes: 19 additions & 0 deletions static/data/glossaryEntries.js
@@ -83,6 +83,13 @@ export const glossaryEntries = {
description: "Same input gives same result every time.",
},
],
K: [
{
name: "Keyword Driven Testing",
link: "/docs/concepts/reference/glossary/keyword-driven-testing",
description: "Test scripts use keywords to define actions, separating test design from programming work.",
},
],
M: [
{
name: "Manual Testing",
@@ -142,6 +149,18 @@ export const glossaryEntries = {
link: "/docs/concepts/reference/glossary/unit-testing",
description: "Tests specific code components in isolation.",
},
{
name: "Usability Testing",
link: "/docs/concepts/reference/glossary/usability-testing",
description: "Evaluates how user-friendly and intuitive the software interface is for end users.",
},
],
V: [
{
name: "Volume Testing",
link: "/docs/concepts/reference/glossary/volume-testing",
description: "Assesses system performance and stability when handling large volumes of data.",
},
],
W: [
{
@@ -0,0 +1,89 @@
---
id: keyword-driven-testing
title: Keyword Driven Testing
sidebar_label: Keyword Driven Testing
description: Learn how Keyword Driven Testing separates test logic from implementation using keywords.
tags:
- explanation
keywords:
- automation
- framework
---

### What is Keyword Driven Testing?

Keyword Driven Testing is a software testing methodology that separates test design from test implementation by using keywords to represent actions or operations. Each keyword corresponds to a specific action (such as "Click", "Input Text", "Verify Element") and is mapped to code that performs the action. Testers create test cases by combining these keywords, often in a tabular format, making test creation accessible to both technical and non-technical users.

### How Keyword Driven Testing Works

- **Defining Keywords:** Identify and document keywords that represent actions in the application under test. These can be low-level (e.g., "Click Button") or high-level (e.g., "Login User").
- **Implementing Keywords:** Developers implement functions or methods for each keyword, which interact with the application (see the sketch after this list).
- **Creating Test Cases:** Testers build test cases by sequencing keywords and providing test data, often in spreadsheets or tables.
- **Executing Tests:** A test execution engine reads the test cases, interprets the keywords, and runs the corresponding code.
- **Reviewing Results:** Test results are analyzed to determine which keywords (and thus which actions) passed or failed.
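
To illustrate the "Implementing Keywords" step above, here is a minimal sketch of a keyword library in Python. The `app` driver object and its methods are hypothetical stand-ins for whatever automation API (a Selenium or Appium wrapper, for example) a team actually uses.

```python
# Minimal keyword library sketch. The `app` driver object and its methods
# (open, type_into, click, is_visible) are hypothetical stand-ins for a
# real automation API.

def start_app(app, name):
    """Keyword 'StartApp': launch the application under test."""
    app.open(name)

def input_text(app, field, value):
    """Keyword 'InputText': type a value into a named field."""
    app.type_into(field, value)

def click_button(app, button):
    """Keyword 'ClickButton': click a named button."""
    app.click(button)

def verify_element(app, element, expected_state):
    """Keyword 'VerifyElement': assert an element is in the expected state."""
    assert app.is_visible(element) == (expected_state == "visible")

# The keyword library: human-readable keyword names mapped to implementations.
KEYWORDS = {
    "StartApp": start_app,
    "InputText": input_text,
    "ClickButton": click_button,
    "VerifyElement": verify_element,
}
```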

### Components of Keyword Driven Testing

- **Keyword Library:** A collection of reusable functions or methods mapped to keywords.
- **Test Data:** Data required for executing test cases, usually stored separately (e.g., spreadsheets, databases).
- **Driver Script/Test Execution Engine:** Software that reads test cases, interprets keywords, and invokes corresponding actions.

### Types of Keywords

- **Low-level keywords:** Represent basic actions (e.g., click, input, select).
- **High-level keywords:** Combine multiple low-level keywords to represent business processes (e.g., "Checkout", "Register User").

### Benefits of Keyword Driven Testing

- **Separation of Concerns:** Test design is independent of implementation, enabling non-programmers to write tests.
- **Reusability:** Keywords can be reused across multiple test cases, reducing duplication.
- **Maintainability:** Changes in application logic only require updates to keyword implementations, not all test cases.
- **Collaboration:** Enables both technical and business stakeholders to contribute to test design.

### Challenges

- **Initial Setup:** Building a robust keyword library and test execution engine requires upfront investment.
- **Keyword Management:** As the application grows, maintaining and organizing keywords can become complex.
- **Debugging:** Errors may be harder to trace if keywords are too generic or poorly documented.

### Example

A simple table-driven test case might look like:

| Keyword | Argument 1 | Argument 2 |
| -------------- | ------------ | --------------- |
| StartApp | MyApp | |
| InputText | username | user1 |
| InputText | password | pass123 |
| ClickButton | login | |
| VerifyElement | dashboard | visible |
| TerminateApp | MyApp | |
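
A minimal driver sketch for a table like this, with the rows written as plain tuples and the keyword handlers stubbed out with logging (a real execution engine would call into the keyword library instead):

```python
# Minimal driver/engine sketch: reads tabular test steps and dispatches
# each keyword to a handler. Handlers here only log; a real keyword
# library would drive the application under test.

TEST_CASE = [
    ("StartApp",      "MyApp",     ""),
    ("InputText",     "username",  "user1"),
    ("InputText",     "password",  "pass123"),
    ("ClickButton",   "login",     ""),
    ("VerifyElement", "dashboard", "visible"),
    ("TerminateApp",  "MyApp",     ""),
]

KEYWORDS = {
    "StartApp":      lambda target, _: print(f"launching {target}"),
    "InputText":     lambda field, value: print(f"typing '{value}' into {field}"),
    "ClickButton":   lambda target, _: print(f"clicking {target}"),
    "VerifyElement": lambda target, state: print(f"verifying {target} is {state}"),
    "TerminateApp":  lambda target, _: print(f"closing {target}"),
}

def run(test_case):
    for keyword, arg1, arg2 in test_case:
        KEYWORDS[keyword](arg1, arg2)  # unknown keywords raise KeyError

run(TEST_CASE)
```

Robot Framework and similar tools follow the same basic pattern at a larger scale: parse the tabular test case, look up each keyword, and invoke its implementation with the supplied arguments.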

### Tools Supporting Keyword Driven Testing

- **Selenium with TestNG/JUnit:** Allows mapping of keywords to methods, often using configuration files.
- **Robot Framework:** Provides built-in and custom keywords with a tabular syntax for test case design.
- **Squish, testRigor, and others:** Offer keyword-driven approaches with varying degrees of abstraction and automation.

### Conclusion

Keyword Driven Testing empowers teams to create maintainable, reusable, and business-friendly test cases by abstracting technical details behind human-readable keywords. It is especially valuable in complex or rapidly changing projects where collaboration between testers and developers is essential.

---

#### FAQs

**1. What is the main advantage of Keyword Driven Testing?**
It enables non-programmers to design and maintain tests, increasing collaboration and reducing maintenance effort.

**2. How does it differ from data-driven testing?**
Data-driven testing focuses on varying input data, while keyword-driven testing focuses on varying actions using keywords.

**3. Can it be used for both manual and automated testing?**
Yes, keyword-driven testing is suitable for both manual and automated testing scenarios.

**4. What are common pitfalls?**
Poorly defined or overly generic keywords can make debugging difficult and reduce test clarity.

**5. Which tools support this methodology?**
Popular tools include Selenium (with TestNG/JUnit), Robot Framework, Squish, and testRigor.
@@ -0,0 +1,91 @@
---
id: usability-testing
title: Usability Testing
sidebar_label: Usability Testing
description: Discover how Usability Testing ensures software is user-friendly and meets user expectations.
tags:
- explanation
keywords:
- user experience
- UX
- testing
---

### What is Usability Testing?

Usability Testing is a software testing technique that evaluates how easy and intuitive a software application is for end users. The goal is to identify usability problems, collect qualitative and quantitative data, and determine participants' satisfaction with the product. Usability testing focuses on the user’s experience and measures how effectively, efficiently, and satisfactorily users can achieve their goals within the system.

### How Usability Testing Works

- **Test Planning:** Define the objectives, target users, and tasks to be tested.
- **Recruiting Participants:** Select representative users who match the target audience.
- **Test Execution:** Observe users as they perform specific tasks using the software, often in a controlled environment.
- **Data Collection:** Gather feedback through observation, interviews, questionnaires, and screen recordings.
- **Analysis:** Identify usability issues, pain points, and areas for improvement based on user behavior and feedback.
- **Reporting:** Summarize findings and provide actionable recommendations to enhance the user experience.

### Components of Usability Testing

- **Test Scenarios:** Realistic tasks that users are asked to complete.
- **Test Participants:** Actual or potential end users of the application.
- **Facilitator/Moderator:** Guides the session, answers questions, and observes user behavior.
- **Usability Metrics:** Measures such as task success rate, time on task, error rate, and user satisfaction (a small calculation sketch follows this list).
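
As a concrete, entirely hypothetical illustration of the metrics bullet above, the sketch below computes task success rate and average time on task from a handful of invented session records:

```python
# Hypothetical usability-metric calculation: each record is one
# participant's attempt at a task (whether it was completed, and how long it took).
sessions = [
    {"task": "register", "completed": True,  "seconds": 95},
    {"task": "register", "completed": False, "seconds": 210},
    {"task": "register", "completed": True,  "seconds": 120},
]

success_rate = sum(s["completed"] for s in sessions) / len(sessions)
completed_times = [s["seconds"] for s in sessions if s["completed"]]
avg_time_on_task = sum(completed_times) / len(completed_times)

print(f"Task success rate: {success_rate:.0%}")           # 67% for this sample
print(f"Average time on task: {avg_time_on_task:.0f}s")   # 108s for this sample
```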

### Types of Usability Testing

- **Moderated vs. Unmoderated:** Moderated tests are conducted with a facilitator present, while unmoderated tests are completed by users on their own.
- **Remote vs. In-person:** Tests can be conducted remotely using screen-sharing tools or in person in a usability lab.
- **Explorative, Assessment, and Comparative:** Explorative tests are early-stage, assessment tests evaluate a working system, and comparative tests compare two or more products.

### Benefits of Usability Testing

- **Improved User Satisfaction:** Identifies and resolves issues that frustrate users.
- **Increased Efficiency:** Ensures users can complete tasks quickly and easily.
- **Reduced Training & Support Costs:** A more intuitive interface means less need for help and documentation.
- **Higher Adoption Rates:** User-friendly products are more likely to be adopted and recommended.

### Challenges

- **Recruitment:** Finding representative users can be time-consuming.
- **Bias:** Facilitators or users may unintentionally influence results.
- **Resource Intensive:** Planning, conducting, and analyzing usability tests require time and effort.

### Example

A usability test scenario might look like:

| Task Description | Success Criteria | Observations |
| ---------------------------------- | ---------------------------- | ------------------------- |
| Register a new account | User completes registration | User hesitated at password requirements |
| Find and purchase a product | User checks out successfully | User struggled to find cart button |
| Change account password | Password is updated | User found the option easily |

### Tools Supporting Usability Testing

- **UserTesting:** Platform for remote usability testing with real users.
- **Lookback:** Records user sessions and provides analytics.
- **Optimal Workshop:** Suite of tools for card sorting, tree testing, and surveys.
- **Morae:** Comprehensive usability testing and analysis software.

### Conclusion

Usability Testing is critical for delivering software that meets user needs and expectations. By focusing on real user interactions, teams can uncover hidden issues, improve user satisfaction, and create more successful products.

---

#### FAQs

**1. Why is usability testing important?**
It ensures the software is easy to use, reducing user frustration and increasing satisfaction.

**2. When should usability testing be performed?**
Ideally, usability testing should be conducted throughout the development lifecycle, from early prototypes to final releases.

**3. How many users are needed for usability testing?**
Testing with 5-8 users often uncovers most major usability issues.

**4. What is the difference between usability testing and user acceptance testing?**
Usability testing focuses on user experience, while user acceptance testing checks if the software meets business requirements.

**5. Can usability testing be automated?**
While some aspects can be automated (like tracking clicks or time on task), true usability insights require observation of real users.
@@ -0,0 +1,93 @@
---
id: volume-testing
title: Volume Testing
sidebar_label: Volume Testing
description: Understand how Volume Testing evaluates system performance and stability with large data volumes.
tags:
- explanation
keywords:
- performance testing
- data volume
- non-functional testing
---

### What is Volume Testing?

Volume Testing, also known as flood testing, is a type of non-functional performance testing that evaluates how a software system behaves when subjected to large volumes of data. The primary goal is to ensure the application remains stable, responsive, and accurate as the data load increases, simulating real-world scenarios where databases or files grow significantly over time.

### How Volume Testing Works

- **Test Planning:** Define objectives, data volume requirements, and success metrics.
- **Data Preparation:** Generate or import large datasets to mimic expected or extreme real-world data volumes (see the data-generation sketch after this list).
- **Test Execution:** Run the system with the loaded data, monitoring performance, stability, and resource usage.
- **Result Analysis:** Analyze metrics such as response time, throughput, error rates, and system crashes to identify bottlenecks or failures.
- **Optimization:** Address issues found, make improvements, and retest as needed.
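
As a rough sketch of the Data Preparation step, the snippet below generates a large CSV of synthetic customer records that could then be bulk-loaded into the system under test. The field names and the one-million-row target are illustrative, not prescribed by any particular tool.

```python
# Sketch: generate a large synthetic dataset for volume testing.
# Field names and the 1,000,000-row target are illustrative only.
import csv
import random
import string

def random_name(length=8):
    return "".join(random.choices(string.ascii_lowercase, k=length))

with open("customers.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "name", "email", "orders"])
    for i in range(1_000_000):
        name = random_name()
        writer.writerow([i, name, f"{name}@example.com", random.randint(0, 50)])
```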

### Key Features of Volume Testing

- **Realistic Data Simulation:** Uses large, realistic datasets to mimic production environments.
- **Performance and Stability Focus:** Evaluates how well the system performs and maintains integrity under heavy data loads.
- **Resource Monitoring:** Tracks CPU, memory, disk, and network usage during tests.
- **Data Integrity Verification:** Ensures data is neither lost nor corrupted during high-volume processing.

### Benefits of Volume Testing

- **Early Detection of Bottlenecks:** Identifies performance issues before they impact users.
- **Improved Scalability:** Helps plan for future growth and scalability needs.
- **Reduced Maintenance Costs:** Prevents costly fixes by addressing issues early.
- **Assurance of Real-World Readiness:** Confirms the system can handle expected and peak data volumes.

### Challenges

- **Test Data Generation:** Creating and managing large, realistic datasets can be complex.
- **Resource Constraints:** High data volumes may require significant hardware and time.
- **Data Integrity:** Ensuring no loss or corruption during testing is critical.
- **Complex Analysis:** Interpreting results and pinpointing root causes can be challenging.

### Example

Suppose an e-commerce platform expects to store millions of customer records. In volume testing, millions of records are generated and loaded into the database. Testers then monitor:

| Metric | Expected Outcome |
|-----------------------|---------------------------------|
| Response Time | Remains within acceptable range |
| Data Integrity | No loss or corruption |
| Resource Utilization | Within hardware limits |
| Error Handling | No unexpected failures |

### Tools Supporting Volume Testing

- **Apache JMeter**
- **LoadRunner**
- **Custom scripts for data generation**
- **Database management tools**

### Best Practices

- **Incremental Data Loading:** Gradually increase data volume to observe system thresholds (see the sketch after this list).
- **Automate Where Possible:** Use scripts and CI/CD pipelines for repeatable, efficient testing.
- **Monitor All Resources:** Track CPU, memory, disk, and network for comprehensive analysis.
- **Test in Production-like Environments:** Mimic real-world settings for accurate results.
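
A sketch of the Incremental Data Loading practice described above: load data in growing increments and record a timing measurement after each step, so the point where performance starts to degrade becomes visible. The `load_batch` and `measure_response_time` helpers are placeholders for whatever loading and monitoring mechanism a project actually uses.

```python
# Sketch of incremental data loading: grow the dataset step by step and
# record a timing measurement after each increment. `load_batch` and
# `measure_response_time` are hypothetical placeholders.
import time

def load_batch(count):
    # Placeholder: insert `count` synthetic records into the system under test.
    time.sleep(0.01)

def measure_response_time():
    # Placeholder: time a representative query against the system under test.
    start = time.perf_counter()
    time.sleep(0.005)  # stands in for the real query
    return time.perf_counter() - start

results = []
total_records = 0
for batch in [100_000, 400_000, 500_000, 4_000_000]:
    load_batch(batch)
    total_records += batch
    results.append((total_records, measure_response_time()))

for total, seconds in results:
    print(f"after {total:>9,} records: {seconds * 1000:.1f} ms")
```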

### Conclusion

Volume Testing is essential for systems expected to manage significant data growth. By simulating high data volumes, teams can ensure their applications remain robust, performant, and reliable as they scale.

---

#### FAQs

**1. How is volume testing different from load testing?**
Volume testing focuses on the system’s ability to handle large data sets, while load testing measures performance under a high number of simultaneous users or transactions.

**2. When should volume testing be performed?**
Ideally, during development and before deployment, especially for systems expected to process large or growing datasets.

**3. What issues can volume testing uncover?**
Performance degradation, data loss, corruption, slow response times, and resource exhaustion.

**4. Can volume testing be automated?**
Yes, using scripts and automation tools to generate data and execute tests efficiently.

**5. What types of systems benefit most from volume testing?**
Applications with large databases, such as e-commerce, social media, banking, and enterprise software.