File: reader/content/_index.md (6 additions, 0 deletions)

@@ -12,6 +12,12 @@ This is a high-level overview of what the course will be about.

The materials roughly break down into six high-level modules that are spread across the 13 weeks of a standard academic semester at UBC.

Readings and videos are available for most course concepts.

Videos throughout the reader provide additional explanations of the course material. The video below introduces the codebase that is used throughout the course reader.

{{< expand title="Introduction to running example" >}}
{{% youtube S4GEa6JMo4I %}}
{{< /expand >}}

## License

The readings for this course are licensed under [CC-BY-SA](https://creativecommons.org/licenses/by-sa/3.0/). However, it is important to note that the deliverable descriptions, the code implementing the deliverables, the exams, and the exam solutions are considered private materials. We go to considerable lengths to make the project an interesting and useful learning experience for this course. This is a great deal of work, and while future students may be tempted by your solutions, posting them does not do them any real favours. Please be considerate with these private materials: do not pass them along to others, do not make your repos public, and do not post them to other sites online.

---

[//]: #(By the end of this chapter you should:)
[//]: #()
[//]: #(* [x] Identify)
[//]: #(* [x] [Interpreted vs Compiled Languages]({{< ref "languages#interpreted-vs-compiled" >}}))
[//]: #(* [x] Static vs Dynamic Representations of the System)
[//]: #(* [x] [Static vs Dynamic Types]({{< ref "languages#static-vs-dynamic-types" >}}))
[//]: #({{% /notice %}})

{{< youtube Uamo4Ej0tWk >}}

Software systems are only useful if they do what they are supposed to do. This is increasingly important given the vital safety-critical roles modern software systems play. Unfortunately, proving that a system is correct is exceedingly difficult and expensive for large software systems. This leads to the fundamental tradeoff at the heart of most commonly applied testing approaches:

@@ -43,6 +34,10 @@ Given the constraints above, the most prevalent quality validation approach in u

This is because what we are actually evaluating is the probability that bugs remain in a system even when all of its tests pass, and this probability is never 0. Since tests themselves are programs (and can have bugs), and specifications are often incomplete or imprecise, we must admit that the chance of a defect slipping through a set of tests is > 0 [(online discussion)](http://tonyxzt.blogspot.ca/2010/01/tests-can-show-presence-of-bugs-not.html).
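
Stated in conditional-probability terms (a sketch; this notation is not in the original reading):

```latex
% Tests can show the presence of defects, never their absence:
% even after a full suite passes, the residual defect probability is nonzero.
P(\text{defect present} \mid \text{all tests pass}) > 0
```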

## Testing Process

{{< youtube v0q1MKhSQVM >}}

The modern test cycle is summarized below. The cycle between steps 1-7 can occur many times before the code is ready to commit.

1. Develop some new code.

@@ -60,20 +55,28 @@ Steps 2/7 may not happen on small teams or when testing happens solely on a sing

{{< figure src="test-cycle.png" >}}

## Terminology

{{< youtube WKrvx7qCUDI >}}

A number of different terms are commonly used in the testing space:

* ```SUT/CUT```: System / code under test. This is the thing that you are actually trying to validate.
* ```Glass box testing```: When testing in a glass box manner, one typically examines the program source code carefully to identify potentially problematic sets of inputs or control flow paths. More details can be found in the [Glass box testing]({{% ref "glassbox" %}}) reading.
* ```Black box testing```: Black box testing validates programs without any knowledge of how the system is implemented. This form of testing relies heavily on predicting problematic inputs by examining public API signatures and any available documentation for the CUT. More details can be found in the [Black box testing]({{% ref "blackbox" %}}) reading.
* ```Effectiveness```: The simplest way to reason about the effectiveness of a test or test suite is to measure the probability that the test will find a real fault, per unit of effort (where effort can be something like developer creation and maintenance time or the number of test executions). A rough formalization follows this list.
* ```Testability```: Some systems are significantly easier to test than others due to the way they are constructed. A highly testable system enables more effective tests for the same cost than a system whose tests are largely ineffective (or require an outsized amount of creation and maintenance effort). More details can be found in the [Testability]({{% ref "testability" %}}) reading.
* ```Repeatability```: The likelihood that running the same test twice under the same conditions will yield the same result.
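
As a rough formalization of the ```Effectiveness``` definition above (the notation here is an assumption, not part of the reader):

```latex
% Effectiveness of a test (or suite) T, per unit of effort;
% effort may be creation/maintenance time or number of test executions.
\text{effectiveness}(T) \;\approx\; \frac{P(T \text{ reveals a real fault})}{\text{effort}(T)}
```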

## Properties of tests

{{< youtube ll1k3Pks3ZA >}}

## Kinds of tests

{{< youtube _Th3f9vks_w >}}

There are a number of different *levels* of test; these vary in size, complexity, execution duration, and repeatability, as well as in how easy they are to write, maintain, and debug.

* ```Unit```: Unit tests exercise individual components, usually methods or functions, in isolation. This kind of testing is usually quick to write, and the tests incur low maintenance effort since they touch such small parts of the system. They typically ensure that the unit fulfills its contract, which makes test failures more straightforward to understand; a minimal sketch is shown below.
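
As a minimal sketch of a unit test (the ```add``` function and the Mocha/Chai harness are illustrative assumptions, not the reader's codebase):

```typescript
import { expect } from "chai";

// Hypothetical unit under test: a small, pure function with a clear contract.
function add(a: number, b: number): number {
  return a + b;
}

// Each test exercises the unit in isolation; a failure points directly at add().
describe("add (unit test)", () => {
  it("sums two positive numbers", () => {
    expect(add(2, 3)).to.equal(5);
  });

  it("handles a negative operand", () => {
    expect(add(2, -3)).to.equal(-1);
  });
});
```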
@@ -84,17 +87,21 @@ There are a number of different *levels* of test; these range in size, complexit

{{< figure src="test-levels.png" >}}

{{< expand title="Comparing unit and system tests" >}}
{{% youtube x2DWjxDiOQo %}}
{{< /expand >}}

For additional reference, take a look at this in-depth talk about how [Google tests](https://www.infoq.com/presentations/Continuous-Testing-Build-Cloud) their systems.

## Why not test?

There are a number of reasons why software systems are not tested with automated suites. These range from "bad design" to "slow", "boring", "doesn't catch bugs", and "that's QA's job". Ultimately, testing does have a cost: tests are programs too, and they take time to write, debug, and execute, and must also be evolved along with the system.

One core paradox of automated test suites is how they are used. If you think that a test only provides value when it fails by catching a bug that would have otherwise made it to production, then all passing tests are just a waste of computational cycles. At the same time though, passing tests _do_ give us the warm feeling that we have not introduced unexpected regression bugs into our systems. However, these feelings only work if we believe our test suite is capable of finding faults and _could_ fail. Yes: this contradicts the 'we should only run failing tests' implication above.

An area of interesting future research would be to figure out how often a suite needs to fail to impart trust, while also figuring out how few passing tests you could run to have a sense of confidence in your automated suite.

## Common testing assumptions

It has long been held that the cost of fixing a fault rises exponentially with how late in the development process (e.g., requirements, design, implementation, deployment) the fault is detected. This statement arises from several influential studies Barry Boehm performed in the 1960s and 70s. A more in-depth description of these costs can be found [here](http://www.agilemodeling.com/essays/costOfChange.htm) or [here](http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20100036670.pdf).

@@ -106,7 +113,7 @@ It is also important to accurately consider the costs of automated testing. Writ

Testing as an engineering discipline notes
-->

## References

* Another great talk about how [Google stores](https://www.youtube.com/watch?v=W71BTkUbdqE) its source code.

File: reader/content/testing/blackbox/_index.md (2 additions, 10 deletions)

@@ -32,8 +32,7 @@ There are a variety of black box testing approaches that are used in practice:

Use case testing is similar to a customer validating a user story by evaluating the definitions of done for a story in a sprint review. In this way, use case testing and user story testing form a kind of user acceptance test.

## Input partitioning {id="input"}

{{< youtube 5PtxXnwyU3Y >}}

@@ -109,8 +108,7 @@ number). The final step is combining the sets to arrive at four test cases, as e

* Sometimes the inputs to a function are less interesting than the outputs; in these cases, output partitioning may be a better choice.
* Defects often arise at the boundaries of partitions. Input partitioning is often combined with boundary value analysis to cover these cases more comprehensively; a small sketch follows this list.
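
To make these ideas concrete, here is a small sketch (the ```classifyAge``` function, its partitions, and the Mocha/Chai harness are all illustrative assumptions):

```typescript
import { expect } from "chai";

// Hypothetical CUT: maps an age to a category.
function classifyAge(age: number): string {
  if (age < 0) throw new Error("invalid age");
  if (age < 18) return "minor";
  if (age < 65) return "adult";
  return "senior";
}

describe("classifyAge (input partitioning + boundary value analysis)", () => {
  // One representative value per input partition.
  it("rejects the invalid partition", () => {
    expect(() => classifyAge(-5)).to.throw();
  });
  it("covers each valid partition", () => {
    expect(classifyAge(10)).to.equal("minor");
    expect(classifyAge(40)).to.equal("adult");
    expect(classifyAge(80)).to.equal("senior");
  });
  // Boundary values sit at the edges of adjacent partitions,
  // where off-by-one defects are most likely to hide.
  it("covers partition boundaries", () => {
    expect(classifyAge(0)).to.equal("minor");
    expect(classifyAge(17)).to.equal("minor");
    expect(classifyAge(18)).to.equal("adult");
    expect(classifyAge(64)).to.equal("adult");
    expect(classifyAge(65)).to.equal("senior");
  });
});
```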

## Output partitioning {id="output"}

{{< youtube 0yvHDKI-DSA >}}

@@ -166,9 +164,3 @@ The inputs to these functions are just both non-negative numbers. But the output

* Functions usually have fewer outputs than inputs (simplistically, a function has a single output value but can have many input parameters). This usually drives testers to explore the 'easier' input space more thoroughly than the output space (although the complexity of the output space is often underestimated, and the 'single' return value could in fact be a complex object or a side-effecting operation).
* Deriving inputs that produce a given output is often challenging. This is especially visible in erroneous situations (e.g., an output behaviour may dictate how a system should behave when disk space is exhausted, but actually triggering this in a test can be hard). A small sketch follows this list.
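
A small sketch of output partitioning, under the same assumptions as the earlier example (the ```safeDivide``` function and Mocha/Chai harness are hypothetical): inputs are chosen so that each *output* class is produced at least once.

```typescript
import { expect } from "chai";

// Hypothetical CUT: divides two non-negative numbers, signalling failure with NaN.
function safeDivide(numerator: number, denominator: number): number {
  if (denominator === 0) return Number.NaN; // error output class
  return numerator / denominator;
}

describe("safeDivide (output partitioning)", () => {
  // Each test targets one class of output, not one class of input.
  it("produces a positive quotient (normal output class)", () => {
    expect(safeDivide(6, 3)).to.equal(2);
  });
  it("produces zero (zero output class)", () => {
    expect(safeDivide(0, 7)).to.equal(0);
  });
  it("produces NaN (error output class)", () => {
    expect(Number.isNaN(safeDivide(5, 0))).to.equal(true);
  });
});
```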

File: reader/content/testing/testability/_index.md (2 additions, 2 deletions)

@@ -27,12 +27,12 @@ Sometimes the CUT can be invoked by a test but its outcome cannot be observed. F

{{< youtube Z93-c4ngxGw >}}

Being able to isolate a fault within the code under test is crucial for quickly determining what has caused a failure so it can be resolved. This is challenging in large modern systems due to the number of (often third-party) dependencies software systems have. For example, if a data access routine fails, is it the logic in the routine or is it a failure in the underlying database? At its simplest level, isolateability can be increased by decomposing larger functions into smaller, more self-contained functions that can be tested independently.

Sometimes code will have complex dependencies, requiring isolation through simulation. In simulation-based environments, code dependencies are _mocked_ or _stubbed_: they are replaced with developer-created fake components that take known inputs and return known values. For example, a ```MockLoginRejectController``` would always return false for ```login(user, pass)``` without needing to check a user store, database, or external system. In this way the developer can test their code that uses ```login(..)``` and ensure it handles the false case correctly, without fear that a bug in the real login controller may return an incorrect or inconsistent value. In addition to isolation, mocking also greatly increases performance and makes components less prone to non-determinism, as the result being returned is usually fixed and not dependent on some external complex computation. Mocking can also make it possible to test program states that would otherwise be hard to trigger in practice (for instance, if you want to test a situation where a remote service is down, you can have a ```MockTimeoutService``` that simply does not respond to requests). A minimal sketch is shown below.
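
A minimal sketch of this idea (the ```LoginController``` interface and ```greet``` function are hypothetical; only ```MockLoginRejectController``` is named in the text above, and the mock here is hand-rolled rather than generated by a mocking library):

```typescript
import { expect } from "chai";

// Hypothetical interface that the code under test depends on.
interface LoginController {
  login(user: string, pass: string): boolean;
}

// Hand-rolled mock: always rejects, with no user store, database, or external system.
class MockLoginRejectController implements LoginController {
  login(user: string, pass: string): boolean {
    return false;
  }
}

// Hypothetical code under test: its handling of a rejected login is what we verify.
function greet(controller: LoginController, user: string, pass: string): string {
  return controller.login(user, pass) ? `Welcome, ${user}` : "Access denied";
}

describe("greet with a rejecting login controller", () => {
  it("handles the false case without a real login implementation", () => {
    const mock = new MockLoginRejectController();
    expect(greet(mock, "alice", "hunter2")).to.equal("Access denied");
  });
});
```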