Types and Programming Languages, Chapter 5: The Untyped Lambda Calculus


t ::=
x // variable
λx.t // abstraction
t t // application
Abstraction is analogous to a function definition:
function (n) { t }
λn. t
Application is analogous to a function call:
(function (n) { t }(x))
(λn. t) x
s t u = (s t) u // application associates to the left
λx. λy. x y x = λx. (λy. ((x y) x)) // abstraction bodies extend to the right
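As a small JavaScript sketch of the same conventions (the name pair is just a made-up example): a multi-argument function is written as nested one-argument abstractions, and applying it associates to the left, mirroring s t u = (s t) u.

```javascript
// λx. λy. [x, y] — nested one-argument abstractions (currying)
const pair = x => y => [x, y];

// Application associates to the left: pair(1)(2) parses as (pair(1))(2)
pair(1)(2); // => [1, 2]
```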

x is free when it is not bound by an enclosing abstraction on x, e.g.
x y
λy. x y
x is bound:
λx. x
λz. λx. λy. x (y z)
The first x is bound and the second is free:
(λx.x) x
A term with no free variables is closed (also called a combinator):
id = λx.x

Tom led us in a discussion of the notion of "free" and "bound" variables in terms of a graph, where a "bound" variable can be thought of as a pointer to an ancestor that declares an abstraction with that variable. In these cases, the name of a variable (e.g. x or y) is irrelevant: it is only really useful for writing the terms down, and the variables are better modelled as pointers to their binding abstractions.
"Free" variables, on the other hand, have no ancestor abstraction in the graph and might be considered as pointers to some unseen encapsulating abstraction (depicted by "Globals" in the photograph above).
We discussed beta reduction and several different strategies in some detail by working through the given example term (without the potentially confusing id shorthand given in the chapter):
(λx.x) ((λx.x) (λz. (λx.x) z))

Under the normal order strategy, the leftmost, outermost reducible expression is always reduced first.
(λx.x) ((λx.x) (λz. (λx.x) z))
------------------------------
→ (λx.x) (λz. (λx.x) z)
  ---------------------
→ λz. (λx.x) z
      --------
→ λz.z
The call by name strategy is yet more restrictive, allowing no reductions inside abstractions.
(λx.x) ((λx.x) (λz. (λx.x) z))
------------------------------
→ (λx.x) (λz. (λx.x) z)
  ---------------------
→ λz. (λx.x) z
Most languages use a call by value strategy, in which only outermost reducible expressions are reduced, and a reducible expression is reduced only when its right-hand side has already been reduced to a value: a closed term that is finished computing and cannot be reduced any further.
(λx.x) ((λx.x) (λz. (λx.x) z))
        ---------------------
→ (λx.x) (λz. (λx.x) z)
  ---------------------
→ λz. (λx.x) z
A subtlety here is the notion of "value", which currently describes only abstractions and no other kind of term (that is, neither variables nor applications).
tru = λt. λf. t;
fls = λt. λf. f;
We briefly discussed defining the booleans true and false as functions that take two arguments: true returns the first and false returns the second. This matches how we typically define the conditional if, e.g.
function tru(t, f) { return t; }
function fls(t, f) { return f; }
if true then t else f
if false then t else f
We discussed the lack of any kind of input value in the Lambda Calculus and how that might explain the design of the booleans: Tom made the point that the notions of true and false typically only exist in a program to eventually be used in some sort of function application, e.g. in an if statement to choose between two branches of a program.
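As a quick sketch of that idea in JavaScript (using curried functions; test is just a helper name we are inventing here), choosing a branch really is nothing more than function application:

```javascript
const tru = t => f => t; // λt. λf. t — return the first branch
const fls = t => f => f; // λt. λf. f — return the second branch

// "if b then thn else els" becomes plain application of b to the branches
const test = b => thn => els => b(thn)(els);

test(tru)("then branch")("else branch"); // => "then branch"
test(fls)("then branch")("else branch"); // => "else branch"
```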

We walked through the initially confusing substitution notation in the book and touched on how it adds significant power to our evaluation rules.
Taking the following definition:
(λx.t12)t2 ⟶ [x ↦ t2]t12
We applied it to a JavaScript example:
function (x) { x * 2 }(21)
------------
λx.

function (x) { x * 2 }(21)
             ---------
             t12

function (x) { x * 2 }(21)
                      ----
                      t2

This means that:
[x ↦ t2]t12
Is equivalent to the following in our example:
[x ↦ 21]x * 2
Giving us:
21 * 2
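Here is a naive sketch of that substitution in JavaScript, over a made-up term representation (the constructors vari/abs/app and the function substitute are our own names, and this version ignores the variable-capture problem that the chapter deals with separately):

```javascript
const vari = name => ({ kind: "var", name });
const abs = (param, body) => ({ kind: "abs", param, body });
const app = (fn, arg) => ({ kind: "app", fn, arg });

// [x ↦ s]t — replace free occurrences of x in t with s
function substitute(x, s, t) {
  switch (t.kind) {
    case "var":
      return t.name === x ? s : t;
    case "abs":
      // a nested λx re-binds x, so substitution stops there
      return t.param === x ? t : abs(t.param, substitute(x, s, t.body));
    case "app":
      return app(substitute(x, s, t.fn), substitute(x, s, t.arg));
  }
}

// E-AppAbs in action: (λx. x x) y ⟶ [x ↦ y](x x) = y y
const redex = app(abs("x", app(vari("x"), vari("x"))), vari("y"));
substitute("x", redex.arg, redex.fn.body); // => the term (y y)
```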

We finished the meeting by talking through the inference rules presented in the book that describe the call-by-value strategy for evaluating our programs.
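For reference, the call-by-value evaluation rules in the book look roughly like this, written in the same style as the E-ABS rule further down (v stands for a value, i.e. an abstraction):

t1 → t1ʹ
--------------    (E-APP1)
t1 t2 → t1ʹ t2

t2 → t2ʹ
--------------    (E-APP2)
v1 t2 → v1 t2ʹ

(λx. t12) v2 → [x ↦ v2] t12    (E-APPABS)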
We worked through an example of identifying the values and terms in an expression:
(λx. x) ((λx. x) (λz. (λx. x) z))

We then applied the inference rules to reduce the expression. We realised that it wasn't obvious on first reading, but the inference rules for call-by-value are mutually exclusive: for any given expression, at most one rule applies.
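As a sketch of that mutual exclusivity, here is a single-step call-by-value evaluator over the same made-up term representation as above (isValue and eval1 are our own names); the three branches correspond to E-APP1, E-APP2 and E-APPABS, and their conditions are disjoint by construction:

```javascript
// Only abstractions count as values in this chapter
const isValue = t => t.kind === "abs";

// Naive substitution as sketched earlier (ignores variable capture)
function substitute(x, s, t) {
  switch (t.kind) {
    case "var": return t.name === x ? s : t;
    case "abs": return t.param === x ? t : { ...t, body: substitute(x, s, t.body) };
    case "app": return { ...t, fn: substitute(x, s, t.fn), arg: substitute(x, s, t.arg) };
  }
}

// One step of call-by-value evaluation, or null if no rule applies
function eval1(t) {
  if (t.kind !== "app") return null;               // the rules only mention applications
  if (!isValue(t.fn)) {                            // E-APP1: reduce the left side first
    const fn = eval1(t.fn);
    return fn && { ...t, fn };
  }
  if (!isValue(t.arg)) {                           // E-APP2: then reduce the argument
    const arg = eval1(t.arg);
    return arg && { ...t, arg };
  }
  return substitute(t.fn.param, t.arg, t.fn.body); // E-APPABS: finally substitute
}
```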
We then tried the exercise to re-write the inference rules to describe full beta reduction.

Mostly this required us to replace references to values in the rules with references to terms, which showed us that we now had a choice of which rule to apply at any given moment. The final rule we added was E-ABS:
t1 → t1ʹ
----------------
λx. t1 → λx. t1ʹ
There was some confusion in the room about why this was necessary, and we explained that the other rules only apply to applications (two terms next to each other), so for a term like λz. (λx.x) z none of them can fire: the abstraction is not an application on its own, and we need E-ABS to let us "go inside" it and reduce its body.
Thanks to Leo and Geckoboard for hosting and lending use of their bread knife. Thanks to all those who provided snacks, especially James for his homemade Lambda Focaccia.