
Commit 5aff937

Adds videos to testing chapter

1 parent 05cb253 commit 5aff937

4 files changed (+42, -37 lines)

reader/content/_index.md

Lines changed: 6 additions & 0 deletions
@@ -12,6 +12,12 @@ This is a high-level overview of what the course will be about.
The materials roughly break down into 6 high-level modules that are spread across the 13 weeks of a standard academic semester at UBC.
Readings and videos are available for most course concepts.

+Videos throughout the reader provide additional explanations of the course material. The video below introduces the codebase that is used throughout the course reader.
+
+{{< expand title="Introduction to running example" >}}
+{{% youtube S4GEa6JMo4I %}}
+{{< /expand >}}
+
## License

The readings for this course are licensed using [CC-by-SA](https://creativecommons.org/licenses/by-sa/3.0/). However, it is important to note that the deliverable descriptions, code implementing the deliverables, exams, and exam solutions are considered private materials. We go to considerable lengths to make the project an interesting and useful learning experience for this course. This is a great deal of work, and while future students may be tempted by your solutions, posting them does not do them any real favours. Please be considerate with these private materials: do not pass them along to others, make your repos public, or post them to other sites online.

reader/content/testing/_index.md

Lines changed: 32 additions & 25 deletions
@@ -5,32 +5,23 @@ title: "Software Testing"
---


-{{% notice default "Learning Outcomes" "graduation-cap" %}}
-By the end of this chapter you should:
+[//]: # ({{% notice default "Learning Outcomes" "graduation-cap" %}})

-* [x] Identify
-* [x] [Interpreted vs Compiled Languages]({{< ref "languages#interpreted-vs-compiled" >}})
-* [x] Static vs Dynamic Representations of the System
-* [x] [Static vs Dynamic Types]({{< ref "languages#static-vs-dynamic-types" >}})
-{{% /notice %}}
+[//]: # (By the end of this chapter you should:)

+[//]: # ()
+[//]: # (* [x] Identify)

-{{< youtube Uamo4Ej0tWk >}}
-
-
-{{< youtube ll1k3Pks3ZA >}}
+[//]: # (* [x] [Interpreted vs Compiled Languages]&#40;{{< ref "languages#interpreted-vs-compiled" >}}&#41;)

-* [Properties of Tests]&#40;http://www.youtube.com/watch?v=ll1k3Pks3ZA)
+[//]: # (* [x] Static vs Dynamic Representations of the System)

-[//]: # ( * [Kinds of Tests]&#40;http://www.youtube.com/watch?v=_Th3f9vks_w&#41;)
+[//]: # (* [x] [Static vs Dynamic Types]&#40;{{< ref "languages#static-vs-dynamic-types" >}}&#41;)

-[//]: # ( * [Unit and System Properties]&#40;http://www.youtube.com/watch?v=x2DWjxDiOQo&#41;)
+[//]: # ( {{% /notice %}})

-[//]: # ( * [Red Green Refactor]&#40;http://www.youtube.com/watch?v=v0q1MKhSQVM&#41;)

-{{< youtube x2DWjxDiOQo >}}
-
-{{< youtube v0q1MKhSQVM >}}
+{{< youtube Uamo4Ej0tWk >}}

Software systems are only useful if they do what they are supposed to do. This is increasingly important given the vital safety-critical roles modern software systems play. Unfortunately, proving that a system is correct is exceedingly difficult and expensive for large software systems. This leads to the fundamental tradeoff at the heart of most commonly applied testing approaches:
@@ -43,6 +34,10 @@ Given the constraints above, the most prevalent quality validation approach in u

This is because what we are evaluating is whether the probability that bugs remain in a system, given that the tests pass, is not actually 0. Since tests themselves are programs (and can have bugs) and specifications are often incomplete or imprecise, we must therefore admit that the chance of a defect slipping through a set of tests is > 0 [(online discussion)](http://tonyxzt.blogspot.ca/2010/01/tests-can-show-presence-of-bugs-not.html).
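Stated in conditional-probability terms (an informal restatement, not a formula from the original text):

$$P(\text{defect present} \mid \text{all tests pass}) > 0$$

A passing suite reduces, but never eliminates, the chance that a fault remains.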

+## Testing Process
+
+{{< youtube v0q1MKhSQVM >}}
+
The modern test cycle is summarized below. The cycle between steps 1-7 can occur many times before the code is ready to commit.
1. Develop some new code.
@@ -60,20 +55,28 @@ Steps 2/7 may not happen on small teams or when testing happens solely on a sing

{{< figure src="test-cycle.png" >}}
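To make the inner loop concrete, here is a minimal sketch of developing a small function and running a test against it locally before committing; this assumes a Mocha/Chai TypeScript setup, and ```clamp``` is invented for illustration rather than taken from the course codebase:

```typescript
import { expect } from "chai";

// Step 1: develop some new code (a hypothetical function for this sketch).
function clamp(value: number, min: number, max: number): number {
    return Math.min(Math.max(value, min), max);
}

// Write and run tests locally; the suite must pass before the code is committed.
describe("clamp", () => {
    it("returns the value when it is within range", () => {
        expect(clamp(5, 0, 10)).to.equal(5);
    });

    it("clamps values below the minimum", () => {
        expect(clamp(-3, 0, 10)).to.equal(0);
    });
});
```

Keeping this loop fast is what makes the 1-7 cycle above cheap to repeat.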
-### Terminology
+## Terminology

{{< youtube WKrvx7qCUDI >}}

A number of different terms are commonly used in the testing space:

* ```SUT/CUT```: System / code under test. This is the thing that you are actually trying to validate.
-* ```Glass box testing```: When testing in a glass box manner one typically carefully examines the program source code in order to identify potentially problematic sets of inputs or control flow paths. More details can be found in the [Glass box testing](GlassBoxTesting.md) reading.
-* ```Black box testing```: Black box testing validates programs without any knowledge of how the system is implemented. This form of testing relies heavily on predicting problematic inputs by examining public API signatures and any available documentation for the CUT. More details can be found in the [Black box testing](BlackBoxTesting.md) reading.
+* ```Glass box testing```: When testing in a glass box manner one typically carefully examines the program source code in order to identify potentially problematic sets of inputs or control flow paths. More details can be found in the [Glass box testing]({{% ref "glassbox" %}}) reading.
+* ```Black box testing```: Black box testing validates programs without any knowledge of how the system is implemented. This form of testing relies heavily on predicting problematic inputs by examining public API signatures and any available documentation for the CUT. More details can be found in the [Black box testing]({{% ref "blackbox" %}}) reading.
* ```Effectiveness```: The simplest way to reason about the effectiveness of a test or test suite is to measure the probability the test will find a real fault (per unit of effort, which can be something like developer creation / maintenance time or number of test executions).
-* ```Higher/lower testability```: Some systems are significantly easier to test than others due to the way they are constructed. A highly testable system will enable more effective tests for the same cost than a system whose tests are largely ineffective (or require an outsized amount of creation and maintenance effort). More details can be found in the [Testability](TestabilityAssertions.md) reading.
+* ```Testability```: Some systems are significantly easier to test than others due to the way they are constructed. A highly testable system will enable more effective tests for the same cost than a system whose tests are largely ineffective (or require an outsized amount of creation and maintenance effort). More details can be found in the [Testability]({{% ref "testability" %}}) reading.
* ```Repeatability```: The likelihood that running the same test twice under the same conditions will yield the same result.
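
As a small illustration of ```CUT``` and ```Repeatability``` (a sketch assuming a Mocha/Chai TypeScript setup; ```isLeapYear``` is invented for illustration):

```typescript
import { expect } from "chai";

// The code under test (CUT): deterministic, so tests against it are repeatable.
function isLeapYear(year: number): boolean {
    return (year % 4 === 0 && year % 100 !== 0) || year % 400 === 0;
}

describe("repeatability", () => {
    it("repeatable: the same inputs always produce the same result", () => {
        expect(isLeapYear(2000)).to.equal(true);
        expect(isLeapYear(1900)).to.equal(false);
    });

    it("NOT repeatable: the outcome varies across runs", () => {
        // Deliberately flaky, to show what a non-repeatable test looks like.
        expect(Math.random() < 0.5).to.equal(true);
    });
});
```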
+## Properties of tests
+
+{{< youtube ll1k3Pks3ZA >}}
+
+## Kinds of tests
+
+{{< youtube _Th3f9vks_w >}}
+
There are a number of different *levels* of test; these range in size, complexity, execution duration, and repeatability, as well as in how easy they are to write, maintain, and debug.

* ```Unit```: Unit tests exercise individual components, usually methods or functions, in isolation. This kind of testing is usually quick to write and the tests incur low maintenance effort since they touch such small parts of the system. They typically ensure that the unit fulfills its contract, making test failures more straightforward to understand.
@@ -84,17 +87,21 @@ There are a number of different *levels* of test; these range in size, complexit

{{< figure src="test-levels.png" >}}

+{{< expand title="Comparing unit and system tests" >}}
+{{% youtube x2DWjxDiOQo %}}
+{{< /expand >}}
+
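As a complement to the video, a rough sketch of the unit/system distinction (assuming a Mocha/Chai TypeScript setup; both functions are invented for illustration):

```typescript
import { expect } from "chai";

// A tiny two-layer "system": a parsing unit and an evaluator that composes it.
function parseNumber(token: string): number {
    const n = Number(token);
    if (Number.isNaN(n)) { throw new Error(`bad token: ${token}`); }
    return n;
}

function evaluateSum(expression: string): number {
    return expression.split("+").map((t) => parseNumber(t.trim())).reduce((a, b) => a + b, 0);
}

describe("test levels", () => {
    // Unit: one component in isolation; a failure points directly at parseNumber.
    it("parses a single number (unit)", () => {
        expect(parseNumber("42")).to.equal(42);
    });

    // System-style: exercises the components together through the public entry
    // point; a failure could be in splitting, parsing, or summing.
    it("evaluates a full expression (system)", () => {
        expect(evaluateSum("1 + 40 + 1")).to.equal(42);
    });
});
```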
For additional reference, take a look at this in-depth talk about how [Google tests](https://www.infoq.com/presentations/Continuous-Testing-Build-Cloud) their systems.

-### Why not test?
+## Why not test?

There are a number of reasons why software systems are not tested with automated suites. These range from "bad design" to "slow", "boring", "doesn't catch bugs", and "that's QA's job". Ultimately, testing does have a cost: tests are programs too and take time to write, debug, and execute, and they must also be evolved along with the system.

One core paradox of automated test suites is how they are used. If you think that a test only provides value when it fails by catching a bug that would have otherwise made it to production, then all passing tests are just a waste of computational cycles. At the same time, passing tests _do_ give us the warm feeling that we have not introduced unexpected regression bugs into our systems. These feelings only hold, though, if we believe our test suite is capable of finding faults and _could_ fail. Yes: this contradicts the 'we should only run failing tests' implication above.

An area of interesting future research would be to figure out how often a suite needs to fail to impart trust, while also figuring out how few passing tests you could run to have a sense of confidence in your automated suite.

-### Common testing assumptions
+## Common testing assumptions

It has been long held that the cost of fixing a fault rises exponentially with how late in the development process (e.g., requirements, design, implementation, deployment) the fault is detected. This statement arises from several influential studies Barry Boehm performed in the 60s and 70s. A more in-depth description of these costs can be found [here](http://www.agilemodeling.com/essays/costOfChange.htm) or [here](http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20100036670.pdf).

@@ -106,7 +113,7 @@ It is also important to accurately consider the costs of automated testing. Writ

Testing as an engineering discipline notes
-->

-### References
+## References

* Another great talk about how [Google stores](https://www.youtube.com/watch?v=W71BTkUbdqE) its source code.


reader/content/testing/blackbox/_index.md

Lines changed: 2 additions & 10 deletions
@@ -32,8 +32,7 @@ There are a variety of black box testing approaches that are used in practice:

Use case testing is similar to a customer validating a user story by evaluating the definitions of done for a story in a sprint review. In this way use case testing and user story testing form a kind of user acceptance test.

-<a name="input"></a>
-## Input partitioning
+## Input partitioning {id="input"}

{{< youtube 5PtxXnwyU3Y >}}

@@ -109,8 +108,7 @@ number). The final step is combining the sets to arrive at four test cases, as e

* Sometimes the inputs to a function are less interesting than the outputs; in these cases, output partitioning may be a better choice.
* Defects often arise at the boundaries of partitions. Input partitioning is often combined with boundary value analysis to cover these cases more comprehensively (see the sketch below).
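
A minimal sketch of pairing input partitions with boundary values (Mocha/Chai TypeScript assumed; ```isValidPercentage``` is a hypothetical CUT, not from the reader):

```typescript
import { expect } from "chai";

// Hypothetical CUT: valid inputs are integers in [0, 100].
function isValidPercentage(value: number): boolean {
    return Number.isInteger(value) && value >= 0 && value <= 100;
}

describe("percentage boundaries", () => {
    // One representative value from inside the valid partition...
    it("accepts a value inside the valid partition", () => {
        expect(isValidPercentage(50)).to.equal(true);
    });

    // ...plus values at and just beyond each boundary, where defects cluster.
    it("handles the lower boundary", () => {
        expect(isValidPercentage(0)).to.equal(true);
        expect(isValidPercentage(-1)).to.equal(false);
    });

    it("handles the upper boundary", () => {
        expect(isValidPercentage(100)).to.equal(true);
        expect(isValidPercentage(101)).to.equal(false);
    });
});
```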
-<a name="output"></a>
-## Output partitioning
+## Output partitioning {id="output"}

{{< youtube 0yvHDKI-DSA >}}

@@ -166,9 +164,3 @@ The inputs to these functions are just both non-negative numbers. But the output

* Functions usually have fewer outputs than inputs (simplistically, a function has a single output value and can have many input parameters). This usually drives testers to explore the 'easier' input space more thoroughly than the output space (although the complexity of the output space is often underestimated, and the 'single' return value could in fact be a complex object or a side-effecting operation).
* Deriving inputs that achieve a given output is often challenging (see the sketch below). This can be especially observed in erroneous situations (e.g., an output behaviour dictates how a system should behave when disk space is exhausted, but actually testing this can be hard).
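
A minimal sketch of working backwards from output classes to inputs (Mocha/Chai TypeScript assumed; ```sign``` is a hypothetical CUT):

```typescript
import { expect } from "chai";

// Hypothetical CUT with three output classes: "negative", "zero", "positive".
function sign(n: number): string {
    if (n < 0) { return "negative"; }
    if (n === 0) { return "zero"; }
    return "positive";
}

// Output partitioning: choose one input that forces each output class.
describe("sign output partitions", () => {
    it("covers the 'negative' class", () => expect(sign(-7)).to.equal("negative"));
    it("covers the 'zero' class", () => expect(sign(0)).to.equal("zero"));
    it("covers the 'positive' class", () => expect(sign(3)).to.equal("positive"));
});
```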
-
-
----
-[![](figures/CCSA.png "Creative Commons: Attribution-ShareAlike")](https://creativecommons.org/licenses/by-sa/3.0/) [Reid Holmes](https://www.cs.ubc.ca/~rtholmes/)
-
-
reader/content/testing/testability/_index.md

Lines changed: 2 additions & 2 deletions
@@ -27,12 +27,12 @@ Sometimes the CUT can be invoked by a test but its outcome cannot be observed. F

{{< youtube Z93-c4ngxGw >}}

-{{< youtube NMuhE-XnFe8 >}}
-
Being able to isolate a fault within the code under test is crucial to quickly determining what caused a failure so it can be resolved. This is challenging in large modern systems due to the number of (often third-party) dependencies software systems have. For example, if a data access routine fails, is it the logic in the routine or a failure in the underlying database? At its simplest level, isolateability can be increased by decomposing larger functions into smaller, more self-contained functions that can be tested independently.

Sometimes code will have complex dependencies, requiring isolation through simulation. In simulation-based environments code dependencies are _mocked_ or _stubbed_, whereby they are replaced with developer-created fake components that take known inputs and return known values (e.g., a ```MockLoginRejectController``` would always return false for ```login(user, pass)``` without needing to check a user store, database, or external system). In this way the developer can test their code that uses ```login(..)``` and ensure it handles the false case correctly without fear that a bug in the real login controller may return an incorrect or inconsistent value. In addition to isolation, mocking also greatly increases performance and makes components less prone to non-determinism as the result being returned is usually fixed and not dependent on some external complex computation. Mocking can also make it possible to test program states that would otherwise be hard to trigger in practice (for instance, if you want to test a situation where a remote service is down, you can have a ```MockTimeoutService``` that just does not respond to requests). A sketch of this idea appears below.

+{{< expand title="Code Example" >}} {{% youtube NMuhE-XnFe8 %}} {{< /expand >}}
+
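The idea in the paragraph above, sketched in TypeScript (the ```LoginController``` interface and ```greet``` function are invented for this sketch; ```MockLoginRejectController``` mirrors the prose):

```typescript
import { expect } from "chai";

interface LoginController {
    login(user: string, pass: string): boolean;
}

// Developer-created fake: always rejects, with no user store, database, or external system.
class MockLoginRejectController implements LoginController {
    public login(user: string, pass: string): boolean {
        return false;
    }
}

// Code under test: depends on a LoginController, injected so it can be mocked.
function greet(controller: LoginController, user: string, pass: string): string {
    return controller.login(user, pass) ? `Welcome, ${user}!` : "Access denied";
}

describe("greet with a mocked dependency", () => {
    it("handles the rejected-login case deterministically", () => {
        const mock = new MockLoginRejectController();
        expect(greet(mock, "alice", "hunter2")).to.equal("Access denied");
    });
});
```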
### Automatability

{{< youtube Q83W5zH8LUY >}}
