
Multi-Paradigm Programming

Combining Functional, Object-Oriented, and Lisp Paradigms for Software Design and Implementation

Marty Yoo

This book is available at https://leanpub.com/multi-paradigm-programming

This version was published on 2025-04-28

* * * * *

This is a Leanpub book. Leanpub empowers authors and publishers with the Lean Publishing process. Lean Publishing is the act of publishing an in-progress ebook using lightweight tools and many iterations to get reader feedback, pivot until you have the right book and build traction once you do.

* * * * *

© 2025 Marty Yoo

Table of Contents
Multi-Paradigm Programming
Combining Functional, Object-Oriented, and Lisp Paradigms for Software Design and Implementation

From the Author


Environment Setup and Example Code

1. How Multiparadigm Is Expanding Modern Languages


a. The Iterator Pattern in OOP and First-Class Functions
i. GoF’s Iterator Pattern
ii. Creating an Iterator from an ArrayLike Object
iii. Creating a Reverse Iterator Function from an ArrayLike
Object
iv. A Lazy-Evaluating map Function
v. The Crossroads of Multiparadigm: The Iterator Pattern and
First-Class Functions
b. Generators: Building Iterators with Imperative Programming
i. Basic Generator Syntax
ii. Rewriting reverse with Generators
c. The Iterator Pattern in TypeScript: The Iteration Protocol
i. Iterators and Iterables
ii. Interaction Between the Language and Iterables
iii. Generator-Created Iterators Are Also Iterables
d. Functional Programming with Iterables
i. forEach Function
ii. map Function
iii. filter Function
iv. Composing Higher-Order Functions ((()))
v. A Fun Twist on filter
e. Why the Iteration Protocol Is Designed as an Interface Rather
Than Inheritance
i. Even NodeList in the Web API Is Iterable
ii. Why Use an Interface Instead of Inheritance for This?
iii. Interfaces vs. Class Inheritance
f. Summary
2. Functional Programming, Type Systems, and Lisp
a. Type Inference, Function Types, and Generics
i. Type Inference
ii. Function Types and Generics
b. Functional Type Systems in a Multi-Paradigm Language
i. Revisiting the Iteration Protocol and Types
ii. Functional Higher-Order Functions with a Type System
iii. Function Signatures and Nested Type Inference
c. Multiparadigm Languages and Metaprogramming – From LISP
i. Pipe Operator
ii. Combining Classes, Higher-Order Functions, Iterators, and
the Type System
iii. Learning from LISP (Clojure) – Code is Data, Data is Code
iv. When map Executes in Clojure
v. Making User-Created Code and Classes into Lists in a
Multiparadigm Language
vi. LISP’s Extensibility – Macros and Metaprogramming
vii. Dynamically Extending Functionality at Runtime
viii. The Joy of Extending a Language
d. Summary
3. Code:Object:Function = Generator:Iterator:LISP = IP:OOP:FP
a. Code Is Data – A List Containing Logic
i. [for, i++, if, break] – Thinking of Code as a List
ii. List Processing in Modern Languages – Clojure, Kotlin,
Swift, Scala, C#, Java
iii. Concepts and Paradigms That Can Be Applied Across
Languages
b. Learning from Haskell
i. Functions and Function Signatures in Haskell
ii. Elegant Syntax with Language-Level Support for Currying
iii. The main Function and IO
iv. head, map, filter, foldl Function Signatures
v. Function Composition – The . Operator and the $ Operator
vi. sumOfSquaresOfOddNumbers
vii. Pipeline Style – The & Operator
viii. Error Handling via Either
ix. Pattern Matching
c. Taking a Closer Look at Lazy Evaluation
i. The Execution Order of Nested Iterators — Verifying with
Generators
ii. Looking in More Detail
iii. Examining It with Direct Iterators
iv. Simplifying Our View
d. Generator:Iterator:LISP – Lazy Evaluation and Safe Composition
i. The find Function Signature
ii. find in Haskell and Safe Composition
iii. Rethinking Lazy Evaluation and List Processing via find
iv. Safe Composition in TypeScript: ? ?? !
v. every
vi. some
vii. Insert a “Break” Logic with Lazy Evaluation
viii. Abstracting Shared Logic in every and some Functionally
ix. Adding with concat
e. Summary
4. Asynchronous Programming
a. Asynchronous as a Value
i. Promise
ii. Have You Used new Promise() Directly?
iii. Promise.race
iv. Setting a Timeout for IO Operations
v. Rendering UI Depending on Response Speed
vi. Promise.all
vii. Promise.allSettled
viii. Promise.any
b. Asynchronous Handling with Laziness
i. How to Defer a Promise’s Execution?
ii. ChatGPT’s Imperative Implementation of Concurrency –
executeWithLimit
iii. A Functional Implementation of Concurrency –
executeWithLimit
iv. A “Step” Toward Effective Async Handling – Laziness
c. Treating Asynchronous as a Type
i. AsyncIterator, AsyncIterable, AsyncGenerator Protocols
ii. Higher-Order Functions for AsyncIterable
iii. A Convention for Functions Supporting Both Sync and
Async – toAsync
iv. Type System + Asynchronous Functional Functions + Class
d. Asynchronous Error Handling
i. Loading images and getting their heights
ii. Improved Asynchronous Logic
iii. Ensuring that errors occur properly is key
iv. Stable Software and Asynchronous Programming
e. Summary
5. Practical Functional Programming
a. Working With Real World Data
i. Working With Numbers in a 2D Array
ii. Handling Basketball Team Data
iii. Handling Commerce Data
iv. Handling Commerce Data 2
v. Solving Problems With a Consistent Approach
b. Applying to More Problems
i. pipe
ii. Composing Asynchronous Functions With pipe
iii. zip
iv. When You Need the Index as a Value
v. Collatz Conjecture: Counting Until It Becomes 1
vi. take, takeUntilInclusive, takeWhile Instead of break
vii. Logic Created by Combining Functions
c. Backend Asynchronous Programming
i. The Payment Process Problem in a Commerce Platform
ii. Creating a Payment History Synchronization Scheduler
iii. Optimizing Time Complexity Using Hashing
iv. Maintaining a Stable Interval for Asynchronous Work
v. Handling Maximum Request Size Limits Efficiently
vi. Counting Up Front for Efficiency
vii. Improving Efficiency With Parallelism
viii. List-Processing-Based Async/Concurrent Programming
d. Patternizing List Processing
i. Transform-Accumulate (map-reduce)
ii. Nested-Transform (nested-map)
iii. Iterator-Effect (Iterator-forEach)
iv. Filter-Terminate (filter-take)
v. Infinite-Terminate (range-take)
vi. Split-Flatten (chunk-flat)
vii. Transform-Flatten (map-flat)
viii. Combine-Accumulate (zip-reduce)
ix. Hash-Match (reduce-map)
x. Conceptual Summary of Each Type of List-Processing
Function
e. Summary
6. Multi-Paradigm Programming
a. Building an HTML Template Engine
i. Tagged Templates
ii. Implementing with List Processing
iii. Replacing push with concat
iv. Preventing XSS (Cross Site Scripting)
v. Enabling Component-Style Development with Nested Data
vi. Solve Structural Issues with OOP, Solve Logic Issues with
FP
vii. Creating HTML Strings from Arrays
viii. Adding Objects Functionally via +
ix. Moving Array Handling into the Class for Convenience
x. Abstracting into a Higher-Order Function
xi. Building a Small Front-End Development Library
xii. The Opportunities Offered by Multi-Paradigm Languages
b. Handling Concurrency with a Multi-Paradigm Approach
i. Revisiting executeWithLimit
ii. runTasksWithPool Implemented Imperatively by ChatGPT
iii. runTasksWithPool Implemented via a Multi-Paradigm
Approach
iv. Making the Concurrency (Load) Size Dynamically
Adjustable
v. Handling Infinite, Ongoing Tasks
vi. runAllSettled
c. Summary
7. Object-Oriented Front-End Development and Multi-Paradigm
Approaches in Practice
a. Building a “Settings” App
i. SwitchView
ii. SettingItemView
iii. SettingListView
iv. SettingPage
v. Adding a Global Toggle Feature
vi. Inter-Component Communication via Custom Events
vii. Why Do Our Events Keep Looping, Causing Side Effects?
viii. A Type-Safe Pattern for Custom Event Communication
ix. A Reusable SwitchView Component
x. How Paradigms Create Reactive Code
b. Building a Todo App
i. CheckView
ii. Abstract Class and Inheritance
iii. Headless UI
iv. TodoItemView
v. Domain–UI Naming Mismatch Management Through
Hierarchical Encapsulation
vi. TodoListView
vii. Generalizing with a Generic Class
viii. TodoPage
ix. Removing Duplication via Inter-Object Communication
Based on Interfaces
x. From a GoF Design Pattern Perspective – The Strategy
Pattern
xi. Inter-Object Communication with First-Class Functions – A
“Callback Injection” Pattern
xii. A Multiparadigm Code Design
c. Building a Todo App, Part 2
i. Simplifying Code with Decorators
ii. TextSubmitView
iii. Adding a Helper Method to ListView
iv. Creating a New Todo
v. SegmentControlView
vi. Optimizing DOM Updates via a Heuristic Diff
d. Building a Todo App, Part 3
i. Making It Flexible with the State Pattern
ii. Benefits of Applying the State Pattern
iii. Dynamically Changing Code and Software Behavior at
Runtime
iv. Further Expanding with State Objects
v. A Multiparadigm Object-Oriented Design
e. Asynchronous Flows with UI, Promise, and Class
i. Using Promises for Custom Alerts and Confirms
ii. Promises as Return Values, Mediating Communication
iii. Selecting Friends for a Group Chat

From the Author


Programming languages used to be divided into functional, object-
oriented, and procedural categories. However, this trend is shifting. With
the emergence of hybrid programming languages that incorporate both
functional and object-oriented features, most major programming languages
today—such as Java, C#, Scala, TypeScript, JavaScript, Kotlin, and Swift—
have become multiparadigm languages. This shift has become a shared
characteristic of many languages widely used in the industry.

Multiparadigm languages provide not only classic object-oriented features like classes, inheritance, and interfaces, but also a wide range of functional programming capabilities, such as first-class functions, list processing, lazy evaluation, and asynchronous/concurrent processing. They also offer flexible type systems, type inference, generics, and utility types. In some cases, they even mix static and dynamic typing. Why have most languages evolved into these multiparadigm forms? There must be clear advantages driving this change.

Mike Loukides, Vice President at O’Reilly Media and author of numerous technical books, recently stated in his article, “We need to learn how to use multiparadigm languages that provide functional, object-oriented, and imperative paradigms more effectively,” adding that “many tools have improved, and now it’s our turn to learn how to use them well. It’s time to blend paradigms that were once distinct—and even in conflict—for a long time.”

In this current environment, using only a single paradigm could be a missed opportunity. Some problems are best solved with a functional approach, while others are better suited for object-oriented programming. Programmers no longer need to argue over whether functional or object-oriented programming is superior, or whether static or dynamic typing is better. Organizations and committees that develop languages and define standards already support multiple paradigms within a single language and platform, enabling us to combine these paradigms within a single project—even within a single function.

We might be missing out on the full range of language-level features and paradigms if we rely only on specific libraries or frameworks. In fact, many technologies rooted in functional and object-oriented programming boast a long and established history. They have already been verified through solving numerous problems over the years. Programmers who master a variety of language-level skills can solve more problems quickly, safely, and flexibly. They can also write code in a more fundamental way, free from overreliance on any particular library.

This book primarily uses TypeScript, but the techniques and concepts
covered are also applicable to Java, C#, Scala, Kotlin, Swift, and other
languages. It aims to help you apply functional programming in practical
settings, while continuing to use your existing environment and language.
By providing solutions that combine object-oriented and functional
programming, this book shows how to effectively leverage the power of
multiparadigm languages, thus expanding your problem-solving capabilities
and improving your technical proficiency.

For frontend developers, this book focuses on the language features and the latest Web APIs, demonstrating examples of high-performance, smooth UI implementations. For backend developers, it explains how to work effectively with asynchronous programming and how to combine data efficiently and safely. It also discusses patterns and ideas for writing easily understandable, readable code when working with team members—ultimately increasing productivity and reducing errors. I hope this book helps you experience the wide-ranging possibilities of multiparadigm languages and deepens your enjoyment of programming.

Environment Setup and Example Code


Official GitHub Page

All the TypeScript examples from this book can be found in the
following GitHub repository:

GitHub: https://github.com/marpple/multi-paradigm-programming

The repository includes everything needed to run the examples, such as installation instructions, tsconfig.json settings, and required packages. By simply following the setup process, you can quickly run the examples provided in the book.

Tips for Learning with Example Code

While the example code is fully prepared, we encourage you to create new files in the same environment and write the code yourself. By writing and running the code while referring to the book, you can achieve a deeper and more effective learning experience.

Additional Resources and Information

The official GitHub page also provides updates on various resources, such as video lectures that align with the book’s content, links to online communities for collaborative learning, and a YouTube channel. If you’re interested, feel free to visit the GitHub repository for more information.

1. How Multiparadigm Is Expanding Modern Languages

In the past, if you wanted to use the functional paradigm, you had to
choose a functional language; if you wanted to use object-oriented
programming, you had to choose an object-oriented language. Today,
however, most programming languages have evolved into multiparadigm
languages. This shift began in earnest around 2010, and especially after
2020, nearly all languages commonly used in the industry now support
imperative, object-oriented, and functional paradigms simultaneously.

Many languages started with object-oriented foundations but have grown in various ways—introducing first-class functions, iteration helper methods that leverage lazy evaluation, built-in features for handling asynchrony, and more. As a result, developers can now leverage multiple paradigms at a mature level within a single language.

I myself have only recently come to fully embrace this change. Even
though today’s languages make it possible to combine multiple paradigms
in a single function, I sometimes found myself leaning toward just one
paradigm—perhaps the one I preferred or had recently learned. In the past, I
was a big fan of object-oriented programming, yet I assumed these
paradigms could not be mixed. It seems I wasn’t alone; many developers, as
well as open-source library and framework designers, often structure an
entire project around a single paradigm or concept. Ironically, this happens
even though the very languages we use—the core of programming—have
already become multiparadigm.

Every programming paradigm is a tool for building successful software. We often tend to design and integrate our projects and code so that only a single paradigm or concept is in use, but doing so can actually limit our opportunities for better programming. Sometimes we might even do this knowingly. Some problems are best solved using an imperative style, others using a functional approach, and still others using object-oriented techniques. Yet we may have forced our software or code into an ill-fitting paradigm—potentially sacrificing performance or user experience along the way.

Today, we live in an era when we can combine multiple paradigms effectively to write better code. In this book, we’ll explore the beautiful code and problem-solving methods that emerge when different paradigms come together. This first chapter tells the story of how these paradigms began to meet in harmony, setting the stage for the journey that lies ahead.

1.1 The Iterator Pattern in OOP and First-Class Functions

One day, while working on functional programming in multiparadigm
languages, I realized that the Iterator pattern—one of the classic GoF design
patterns I had learned about over a decade ago—lies at the heart of the way
modern multiparadigm languages implement functional programming.

Object-oriented languages initially adopted the Iterator pattern to implement lazy iteration protocols, and later introduced first-class functions. These additions led to the creation of various iterator helper methods such as map, filter, reduce, and take. Over time, these helper functions became core features, provided as built-ins in many languages.
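As a quick illustration (my own example, not one from the book), JavaScript and TypeScript ship map, filter, and reduce as built-in array methods, and they compose naturally with first-class functions:

```typescript
// map, filter, and reduce are built-in array methods in
// JavaScript/TypeScript; each takes a function as a value.
const sumOfOddSquares = [1, 2, 3, 4, 5]
  .filter(n => n % 2 === 1)        // keep odd numbers: [1, 3, 5]
  .map(n => n * n)                 // square each one:  [1, 9, 25]
  .reduce((acc, n) => acc + n, 0); // sum them up

console.log(sumOfOddSquares); // 35
```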

So, a design pattern originating from object-oriented programming—the Iterator pattern—has come together with functional programming’s first-class functions to implement lazy evaluation and list processing1. Multiple paradigms, working in tandem, further advanced the functional capabilities of languages, ultimately paving the way for more robust multiparadigm languages.

I think the Iterator pattern was the starting point for this
transformation. More than just a single design pattern, it has played an
important role in enabling us to utilize multiple paradigms in programming
today.

GoF’s Iterator Pattern

The Iterator pattern is a classic object-oriented design pattern that proposes a protocol for sequentially traversing the elements of a collection. It was introduced in 1994 in the seminal book Design Patterns: Elements of Reusable Object-Oriented Software, authored by the Gang of Four (GoF)2.

Below is a TypeScript interface representation of the Iterator and related types.

[Code 1-1] Iterator Interfaces

interface IteratorYieldResult<T> {
  done?: false;
  value: T;
}

interface IteratorReturnResult {
  done: true;
  value: undefined;
}

interface Iterator<T> {
  next(): IteratorYieldResult<T> | IteratorReturnResult;
}

1. IteratorYieldResult<T>
   Represents a result where done is false and value is of type T. This indicates the iterator is not yet finished.

2. IteratorReturnResult
   Represents a result where done is true and value is undefined. This indicates the iterator has completed.

3. Iterator<T>
   An interface with a next method that returns either an IteratorYieldResult<T> or an IteratorReturnResult.

The Iterator pattern allows you to traverse collection elements via a
public method like next() without exposing the internal structure of the
collection. This means you can iterate through any style of collection in a
uniform way, regardless of how it’s actually implemented under the hood.

Note: [Code 1-1] has been simplified to convey the main idea. If you
paste it directly into a TypeScript editor, you may encounter duplicate
type declarations or inference errors. Check out
lib.es2015.iterable.d.ts for TypeScript’s official Iterator
interfaces.

Creating an Iterator from an ArrayLike Object

Below is a class that creates an Iterator from an ArrayLike object. While you could implement this more succinctly using functions and object literals, we’re taking a more traditional OOP approach here. We also aren’t using the JavaScript iteration protocol; in this chapter, we want to focus on how OOP languages themselves adopted functional paradigms, beginning with the Iterator pattern.

[Code 1-2] A Class That Creates an Iterator from ArrayLike

/* lib.es5.ts
interface ArrayLike<T> {
  readonly length: number;
  readonly [n: number]: T;
}
*/

class ArrayLikeIterator<T> implements Iterator<T> {
  private index = 0;
  constructor(private arrayLike: ArrayLike<T>) {}

  next(): IteratorResult<T> {
    if (this.index < this.arrayLike.length) {
      return {
        value: this.arrayLike[this.index++],
        done: false
      };
    } else {
      return {
        value: undefined,
        done: true
      };
    }
  }
}

const arrayLike: ArrayLike<number> = {
  0: 10,
  1: 20,
  2: 30,
  length: 3
};

const iterator: Iterator<number> = new ArrayLikeIterator(arrayLike);

console.log(iterator.next()); // { value: 10, done: false }
console.log(iterator.next()); // { value: 20, done: false }
console.log(iterator.next()); // { value: 30, done: false }
console.log(iterator.next()); // { value: undefined, done: true }

ArrayLikeIterator follows the GoF’s Iterator pattern. It supports a collection of type ArrayLike, meaning that it has a length property and zero-based numeric keys. In JavaScript, many types fit this description—such as Array, arguments, and NodeList. Thus, ArrayLikeIterator doesn’t just work with arrays, but with any collection that meets these conditions.

[Code 1-3] Using ArrayLikeIterator with an Array

const array: Array<string> = ["a", "b", "c"];
const iterator2: Iterator<string> = new ArrayLikeIterator(array);

console.log(iterator2.next()); // { value: 'a', done: false }
console.log(iterator2.next()); // { value: 'b', done: false }
console.log(iterator2.next()); // { value: 'c', done: false }
console.log(iterator2.next()); // { value: undefined, done: true }

By calling iterator.next(), we traverse the elements of both arrayLike and array. Notice that an element is accessed only when next() is called. This property allows us to implement lazy evaluation3 with an iterator.

Creating a Reverse Iterator Function from an ArrayLike Object

Next, we’ll create an iterator that traverses array elements in reverse order. Let’s start by reviewing the reverse method of arrays.

The reverse Method in Arrays

When you call array.reverse(), the original array is immediately reversed in place. In [Code 1-4], the order of elements is already changed by the time we try to access the array with an index.

[Code 1-4] array.reverse()

const array = ['A', 'B'];
array.reverse(); // Reverses the array in place
console.log(array[0], array[1]); // B A

While this behavior might seem natural, for large data sets or
performance-critical applications, the overhead of rearranging the entire
array can be costly.

Using Iterator’s Lazy Properties to Create a reverse Function

By leveraging an iterator, you can traverse an array in reverse without actually reversing it. An iterator performs “lazy evaluation,” retrieving one element at a time only when needed. This avoids unnecessary operations and memory usage by deferring work until it’s actually required.

[Code 1-5] A reverse Function That Returns an Iterator

function reverse<T>(arrayLike: ArrayLike<T>): Iterator<T> {
  let idx = arrayLike.length;
  return {
    next() {
      if (idx === 0) {
        return { value: undefined, done: true };
      } else {
        return { value: arrayLike[--idx], done: false };
      }
    }
  };
}

const array = ['A', 'B'];
const reversed = reverse(array);
console.log(array); // ['A', 'B'] (the original array remains unchanged)

console.log(reversed.next().value, reversed.next().value);
// B A

In [Code 1-5], the reverse function takes an ArrayLike object and returns an iterator that can traverse it in reverse order—without modifying the original. While it’s nice that the original array remains unchanged, the bigger benefit is that nothing happens at the moment of calling reverse. Each element is retrieved in reverse order only when reversed.next().value is called.

The Efficiency of Lazy Evaluation

Comparing [Code 1-4] and [Code 1-5] alone may not fully illustrate
the benefits of lazy evaluation, but consider the scenario in [Code 1-6].

[Code 1-6] When Lazy Evaluation Is More Efficient

const array = ['A', 'B', 'C', 'D', 'E', 'F'];
array.reverse(); // Immediately reverses the entire array
console.log(array); // ['F', 'E', 'D', 'C', 'B', 'A']
console.log(array[0], array[1]); // F E

const array2 = ['A', 'B', 'C', 'D', 'E', 'F'];
const reversed = reverse(array2);
console.log(array2); // ['A', 'B', 'C', 'D', 'E', 'F']
console.log(reversed.next().value, reversed.next().value); // F E

By using reverse(array2), only the elements you actually need (F, E) are accessed in reverse order. This can reduce operations and memory usage, especially for large collections.

[Code 1-7] If You Need the Original Array, Too

const array = ['A', 'B', 'C', 'D', 'E', 'F'];
const reversed = [...array].reverse(); // Copy the entire array before reversing
console.log(reversed[0], reversed[1], array[0], array[1]);
// F E A B

const array2 = ['A', 'B', 'C', 'D', 'E', 'F'];
const reversed2 = reverse(array2);
console.log(reversed2.next().value, reversed2.next().value, array2[0], array2[1]);
// F E A B

In the first case, you copy the entire array to keep the original, then
reverse the copy. In the second, you never need to copy because the original
is never changed in the first place.

A Lazy-Evaluating map Function

[Code 1-8] shows a lazy map function that takes an Iterator<A> and a
function that transforms A to B, returning a new Iterator<B>.

1. First-Class Functions
   The map function takes transform as an argument, treating functions as values that can be passed around. This ability is a defining characteristic of first-class functions and enables higher-order functions.

2. Higher-Order Functions
   A higher-order function accepts one or more functions as arguments or returns a function. Because map accepts a function as its argument, it’s a textbook example of a higher-order function.

Such first-class and higher-order functions are core components of the functional programming paradigm, making logic easier to modularize and combine.

[Code 1-8] A map Function That Takes Iterator<A> and Returns Iterator<B>

function map<A, B>(transform: (value: A) => B, iterator: Iterator<A>): Iterator<B> {
  return {
    next(): IteratorResult<B> {
      const { value, done } = iterator.next();
      return done
        ? { value, done }
        : { value: transform(value), done };
    }
  };
}

Just like reverse, the map function does nothing until its next method
is called. When called, it in turn calls iterator.next(). If done is true, it
simply returns that result; if done is false, it applies transform to value
and returns the transformed result.

[Code 1-9] map(f, reverse(array))

const array = ['A', 'B', 'C', 'D', 'E', 'F'];
const iterator = map(str => str.toLowerCase(), reverse(array));
console.log(iterator.next().value, iterator.next().value); // f e

reverse(array) creates an iterator that yields items in reverse order, while map applies the transformation str => str.toLowerCase() to each item. In [Code 1-9], only the first two items are processed because next() is called only twice.

When the Iterator pattern offers lazy traversal and first-class functions enable higher-order functions, combining them yields advanced list processing methods like map, filter, take, and reduce—all supporting lazy evaluation.
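To sketch what a lazy filter could look like in the same hand-rolled Iterator style as the map above (this filter is my own illustration, not code from the book):

```typescript
// A lazy filter: it takes a predicate and an Iterator<A> and returns a
// new Iterator<A> that pulls from the source only when next() is
// called, yielding just the values that satisfy the predicate.
function filter<A>(
  predicate: (value: A) => boolean,
  iterator: Iterator<A>
): Iterator<A> {
  return {
    next(): IteratorResult<A> {
      while (true) {
        const result = iterator.next();
        // Pass the terminal { done: true } result through unchanged;
        // otherwise keep pulling until a value passes the predicate.
        if (result.done || predicate(result.value)) {
          return result;
        }
      }
    }
  };
}

const letters = ['a', 'B', 'c', 'D'];
const uppercase = filter(s => s === s.toUpperCase(), letters[Symbol.iterator]());
console.log(uppercase.next().value, uppercase.next().value); // B D
```

Like the map in [Code 1-8], nothing runs until next() is called, so this filter composes lazily with reverse and map.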

In summary, we’ve seen how we can use the lazy nature of the Iterator pattern to defer function execution. We’ll continue to explore how the Iterator pattern, first-class functions, generators, and the iteration protocol come together, shedding light on how different paradigms interrelate within multiparadigm languages.

In the following sections, I’ll offer a brief look at this book’s overall storyline on multiparadigm. If some concepts or technologies feel unfamiliar, feel free to return to this text after finishing Chapter 1.

The Crossroads of Multiparadigm: The Iterator Pattern and First-Class Functions

Starting with ES6, JavaScript standardized the Iterator protocol across all core collection types—such as Map, Set, Array, and NodeList—ensuring a unified iteration framework. The iteration protocol works seamlessly with for...of loops, the spread operator, and destructuring, and ES6 also introduced generator functions that produce IterableIterators. Iteration and generators played a big role in JavaScript’s ES6 overhaul, and they continue to evolve: features like AsyncGenerator, Array.fromAsync, and the future Iterator Helpers show that the protocol is still advancing.
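As a small illustration of that unified framework (my own example, not one from the book), a generator’s result plugs directly into for...of, the spread operator, and destructuring:

```typescript
// A generator function returns an IterableIterator, so every language
// feature built on the iteration protocol accepts its result.
function* naturals(limit: number): IterableIterator<number> {
  for (let i = 1; i <= limit; i++) yield i;
}

for (const n of naturals(3)) {
  console.log(n); // 1, then 2, then 3
}

const spread = [...naturals(3)];     // spread:        [1, 2, 3]
const [first, second] = naturals(3); // destructuring: 1, 2
console.log(spread, first, second);
```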

ES6 also introduced class syntax, significantly expanding JavaScript’s object-oriented capabilities. At the time, I was deeply interested in functional programming and paid less attention to these OOP changes. However, while building products and open-source libraries using the iteration protocol, I came to see that modern programming languages—JavaScript, TypeScript, and beyond—use the Iterator pattern, an OOP design pattern, as a central backbone for functional programming features.

What’s more interesting is that this traditional, object-oriented design
pattern meets functional programming’s first-class functions in a way that
amplifies their strengths. In addition, the imperative style in generators also
fits right in, showing that these three paradigms—object-oriented,
functional, and imperative—can coexist and reinforce one another within a
single language, making it genuinely multiparadigm. Revisiting and
rewriting OOP code I used to love highlighted the benefits of object-
oriented programming, and opened my eyes to the future direction of
modern language development.

Looking back, I believe the meeting point—and gateway—where three
programming paradigms converge is the combination of the Iterator pattern
and first‑class functions. This book aims to present concrete ways to
leverage a multiparadigm language to its fullest, and the opening chapter
sets the stage by telling the story of the Iterator pattern and the allure of
multiparadigm design.

What makes this story even more compelling is that the Iterator
pattern, as presented in Design Patterns (the GoF book), is not a built‑in
feature tied to any single language. As long as a language offers first‑class
values and callable methods, the pattern can be implemented in virtually
any environment—truly a language‑agnostic methodology. The GoF
authors systematized a variety of traversal strategies through user‑defined
Iterator classes, encapsulating what to traverse, how to traverse, and who
drives the traversal as explicit objects (values). Separating the iteration
contract from its clients and layering additional behavior on top resonates
with the philosophy of LISP, where code is treated as data and developers
are encouraged to extend the language’s expressive power themselves.

Functional programming, object-oriented programming, and design
patterns each have a long theoretical history and a wealth of real-world
experience behind them. More than just coding methodologies, they
represent a collective body of knowledge that has tackled a wide range of
problems throughout the history of software. Understanding these practices
deeply can give you proven principles and abundant insights for solving
new challenges.

I hope this book helps you dive into the solid, long-evolving
techniques and rich stories of the software engineering world.

1.2 Generators: Building Iterators with Imperative Programming

Earlier, I mentioned that “the traditional, object-oriented Iterator
design pattern is enhancing its value by meeting functional programming’s
first-class functions” and “that generators, which are written in an
imperative style, also mesh with this combination, allowing object-oriented,
functional, and imperative paradigms to cooperate within a single language
—advancing all three paradigms together and driving languages to become
truly multiparadigm.”

The reason for this description is that generators let you implement and
produce an iterator—based on the Iterator pattern—using imperative code.
Some problems are more efficiently and intuitively solved using an
imperative style. Generators provide a crucial foundation for enabling
object-oriented and functional paradigms to cooperate with imperative
approaches.

Basic Generator Syntax

A generator is syntax that allows you to write iterators in an imperative
style. Defined with the function* keyword, a generator function does not
execute immediately when called; instead, it returns an iterator object.
Through this object, you can externally control the function’s execution
flow.

yield and next()

When you call the next() method on the iterator returned by a
generator function, the generator runs until it encounters a yield keyword.
That yield keyword sends a value back to the caller and then pauses the
function. Calling next() again resumes the function from the point where it
left off, continuing until it reaches the next yield or finishes. By combining
yield and next(), you can maintain the internal state of the function and
return values sequentially.

[Code 1-10] demonstrates a simple generator that yields the numbers 1
through 3 in order.

[Code 1-10] A Simple Generator

1 function* generator() {
2 yield 1;
3 yield 2;
4 yield 3;
5 }
6
7 const iter = generator();
8
9 console.log(iter.next()); // { value: 1, done: false }
10 console.log(iter.next()); // { value: 2, done: false }
11 console.log(iter.next()); // { value: 3, done: false }
12 console.log(iter.next()); // { value: undefined, done: true }

Here, the generator function immediately returns an iterator object
upon being called. Each time you call iter.next(), the generator runs until
it hits a yield statement, returns the associated value, and suspends
execution. It continues this process until the done property becomes true.

If there’s a console.log('hi'); between yield 1; and yield 2;, it
behaves as follows:

[Code 1-10a] Pausing Execution in a Generator

1 function* generator() {
2 yield 1;
3 console.log('hi');
4 yield 2;
5 yield 3;
6 }
7
8 const iter = generator();
9
10 console.log(iter.next());
11 // { value: 1, done: false }
12 console.log(iter.next()); // hi <-- console.log('hi') runs at this point
13 // { value: 2, done: false }
14 console.log(iter.next());
15 // { value: 3, done: false }
16 console.log(iter.next());
17 // { value: undefined, done: true }

In this case, the first iter.next() call yields 1 and pauses execution
just before console.log('hi');. The second iter.next() call triggers the
console.log('hi'); and then yields 2. The generator continues this
pattern until done becomes true.

Generators and Control Statements

Because generators use an imperative approach, you can use constructs
like conditionals within them:
[Code 1-11] Using Conditionals in a Generator

1 function* generator(condition: boolean) {
2 yield 1;
3 if (condition) {
4 yield 2;
5 }
6 yield 3;
7 }
8
9 const iter1 = generator(false);
10
11 console.log(iter1.next()); // { value: 1, done: false }
12 console.log(iter1.next()); // { value: 3, done: false }
13 console.log(iter1.next()); // { value: undefined, done: true }

In [Code 1-11], the first call to iter1.next() returns 1 and pauses the
function. On the second call, because condition is false, yield 2; is
skipped, and 3 is returned immediately.

If you pass true, the second yield is executed as well:

[Code 1-11a] Using Conditionals in a Generator

1 const iter2 = generator(true);
2
3 console.log(iter2.next()); // { value: 1, done: false }
4 console.log(iter2.next()); // { value: 2, done: false }
5 console.log(iter2.next()); // { value: 3, done: false }
6 console.log(iter2.next()); // { value: undefined, done: true }

In this way, you can use if statements within a generator to control the
logic that produces elements for the iterator.

The yield* Keyword

The yield* keyword lets a generator function iterate through another
iterable object, sequentially yielding the elements it provides. In TypeScript,
an iterable is “an object that supports iteration,” such as an array [2, 3].
This means you can traverse it not only with for...of loops and the spread
operator (...) but also with yield*. We’ll delve deeper into iterables in
Section 1.3.
Here’s a simple example of using yield* to iterate through an array of
two elements:

[Code 1-12] A Generator Using yield*

1 function* generator() {
2 yield 1;
3 yield* [2, 3];
4 yield 4;
5 }
6
7 const iter = generator();
8
9 console.log(iter.next()); // { value: 1, done: false }
10 console.log(iter.next()); // { value: 2, done: false }
11 console.log(iter.next()); // { value: 3, done: false }
12 console.log(iter.next()); // { value: 4, done: false }
13 console.log(iter.next()); // { value: undefined, done: true }

Here’s how the generator function works:

1. The first call to iter.next() runs yield 1, returning 1 and pausing.
2. The second call triggers yield* [2, 3], which sequentially yields
each element from [2, 3]—like doing yield 2; and then pausing,
followed by yield 3;.
3. The third call resumes from where it left off, returning 3 (equivalent to
yield 3;).
4. The fourth call yields 4.
5. Subsequent calls return done: true.

naturals

The following generator creates an infinite sequence of natural
numbers, starting at 1. It uses a while(true) loop with yield to produce
numbers one by one.

[Code 1-13] naturals

1 function* naturals() {
2 let n = 1;
3 while (true) {
4 yield n++;
5 }
6 }
7
8 const iter = naturals();
9
10 console.log(iter.next()); // { value: 1, done: false }
11 console.log(iter.next()); // { value: 2, done: false }
12 console.log(iter.next()); // { value: 3, done: false }
13 // You can keep calling iter.next() indefinitely.

Although the naturals generator function uses an infinite loop, it only
produces a new number when iter.next() is called, so your process or
browser won’t freeze. As mentioned before, iterators are lazy. Generators
also return an iterator and only run as far as next() calls take them. Thus,
generators, too, support lazy evaluation—in other words, they defer code
execution until needed.

Rewriting reverse with Generators

In the following example, we’ll recreate the reverse function from
[Code 1-5]—which returned an Iterator—using a generator:

[Code 1-14] reverse with a Generator

1 function* reverse<T>(arrayLike: ArrayLike<T>): IterableIterator<T> {
2 let idx = arrayLike.length;
3 while (idx) {
4 yield arrayLike[--idx];
5 }
6 }
7
8 const array = ['A', 'B', 'C', 'D', 'E', 'F'];
9 const reversed = reverse(array);
10
11 console.log(reversed.next().value); // F
12 console.log(reversed.next().value); // E
13 console.log(reversed.next().value); // D

This reverse function starts from the end of arrayLike and moves
backwards to 0, yielding each element in turn. You can traverse the
elements by calling next() on the returned generator, just like before.
While the implementation differs from [Code 1-5], the behavior is the same.
In [Code 1-5], we built an object containing a next() method that uses the
idx state; in [Code 1-14], we wrote imperative code with a loop. Both solve
the same problem.

This example underscores a broader point about modern programming
languages: you can often solve a given problem in multiple ways—
object-oriented, imperative, or functional. Throughout this chapter and beyond,
we’ll continue to implement iterators for iterable objects using OOP,
imperative, and functional styles, examining how these three approaches
remain compatible. We’ll also dig deeper into how they work in concert to
push multiparadigm languages forward.

1.3 The Iterator Pattern in TypeScript: The Iteration Protocol

Although the examples here focus on the iterator pattern in TypeScript,
the iteration protocol itself is actually a JavaScript convention. Introduced
in ES6 (ECMAScript 2015), the iteration protocol is a language-wide
standard that specifies how to determine whether an object is iterable, as
well as the syntax and features that interact with such iterable objects.

Iterators and Iterables

An object is considered iterable if it has a method of the form
[Symbol.iterator]() { return { next() { ... } }; } that returns an
iterator. Iterable objects can be used with various language features such as
for...of loops, the spread operator, and destructuring.

Iterable objects can be traversed through the iterator they provide,
incorporating all the characteristics of the iterator pattern described earlier.
Array, Map, and Set are prime examples. Many Web API collections are
also implemented as iterables, adhering to the iteration protocol.

Iterators

Let’s create a function (not using a generator) that returns an iterator to
produce natural numbers. We’ll add an end parameter to specify when to
stop; if no end value is given, it defaults to Infinity. The next() method
returns a natural number as long as the current value is less than or equal to
end, otherwise it terminates iteration.

[Code 1-15] naturals(): Iterator<number>

1 function naturals(end = Infinity): Iterator<number> {
2 let n = 1;
3 return {
4 next(): IteratorResult<number> {
5 return n <= end
6 ? { value: n++, done: false }
7 : { value: undefined, done: true };
8 }
9 };
10 }
11
12 const iterator = naturals(3);
13
14 console.log(iterator.next().value); // 1
15 console.log(iterator.next().value); // 2
16 console.log(iterator.next().value); // 3
17 console.log(iterator.next().done); // true

This function’s specification and pattern match those of the naturals
function implemented with a generator in [Code 1-13]. Both work and
produce the same result.

Using a for...of Loop

What happens if we try the following code ([Code 1-16])? In an IDE,
for (const num of iterator2) will likely be underlined in red, with an
error similar to the one in the comment.

[Code 1-16] Type Error

1 const iterator2 = naturals(3);
2
3 // TS2488: Type 'Iterator<number, any, undefined>'
4 // must have a [Symbol.iterator]() method that returns an iterator.
5 for (const num of iterator2) {
6 console.log(num);
7 }

If we modify the naturals function as shown below, it will work fine:

[Code 1-17] Adding a Symbol.iterator Method

1 function naturals(end = Infinity): IterableIterator<number> {
2 let n = 1;
3 return {
4 next(): IteratorResult<number> {
5 return n <= end
6 ? { value: n++, done: false }
7 : { value: undefined, done: true };
8 },
9 [Symbol.iterator]() {
10 return this;
11 }
12 };
13 }
14
15 const iterator = naturals(3);
16
17 for (const num of iterator) {
18 console.log(num);
19 }
20 // 1
21 // 2
22 // 3

Now you can use a for...of loop to traverse the natural numbers
generated by naturals(). When the [Symbol.iterator] method of the
iterable object is called, it returns the iterator, and for...of automatically
calls next() to traverse the numbers.

The returned value of naturals() now has the method
[Symbol.iterator]() { return this; }, and its return type is
IterableIterator<number>. An IterableIterator<number> is both an
iterator and an iterable.

Below are the TypeScript definitions for Iterator<T>, Iterable<T>,
and IterableIterator<T>, including only the most relevant parts of
TypeScript’s Iterator interface:
[Code 1-18] Iterator<T>, Iterable<T>, IterableIterator<T>

1 interface IteratorYieldResult<T> {
2 done?: false;
3 value: T;
4 }
5
6 interface IteratorReturnResult {
7 done: true;
8 value: undefined;
9 }
10
11 interface Iterator<T> {
12 next(): IteratorYieldResult<T> | IteratorReturnResult;
13 }
14
15 interface Iterable<T> {
16 [Symbol.iterator](): Iterator<T>;
17 }
18
19 interface IterableIterator<T> extends Iterator<T> {
20 [Symbol.iterator](): IterableIterator<T>;
21 }

To summarize:

Iterator: An object with a next() method that returns { value, done }.
Iterable: An object with a [Symbol.iterator]() method that returns
an iterator.
IterableIterator: A value that is both an iterator and iterable.
Iteration Protocol: A convention that makes iterables work seamlessly
with for...of, the spread operator, and more.
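As a practical aside (my own sketch, not code from the book), this convention also gives us a uniform runtime check for iterability: a value is iterable exactly when it exposes a Symbol.iterator method. A hypothetical helper might look like:

```typescript
// Hypothetical helper: checks the iteration protocol's contract directly.
function isIterable(value: unknown): value is Iterable<unknown> {
  return value != null && typeof (value as any)[Symbol.iterator] === 'function';
}

console.log(isIterable([1, 2, 3])); // true
console.log(isIterable(new Set())); // true
console.log(isIterable('abc'));     // true (strings are iterable too)
console.log(isIterable({ a: 1 }));  // false (plain objects are not)
```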

Built-In Iterables

We’ve seen the requirements for an object to be iterable. Let’s now
look at some built-in iterables in JavaScript to understand the iteration
protocol in more detail.

[Code 1-19] Array as an Iterable

1 const array = [1, 2, 3];
2 const arrayIterator = array[Symbol.iterator]();
3
4 console.log(arrayIterator.next()); // { value: 1, done: false }
5 console.log(arrayIterator.next()); // { value: 2, done: false }
6 console.log(arrayIterator.next()); // { value: 3, done: false }
7 console.log(arrayIterator.next()); // { value: undefined, done: true }
8
9 // Use a for...of loop to create a new iterator and traverse again.
10 for (const value of array) {
11 console.log(value);
12 }
13 // 1
14 // 2
15 // 3

An array is inherently iterable. You can get its iterator via
Symbol.iterator and call next() to retrieve elements. With for...of, you
can create a new iterator and traverse all elements again.

[Code 1-20] Set as an Iterable

1 const set = new Set([1, 2, 3]);
2 const setIterator = set[Symbol.iterator]();
3
4 console.log(setIterator.next()); // { value: 1, done: false }
5 console.log(setIterator.next()); // { value: 2, done: false }
6 console.log(setIterator.next()); // { value: 3, done: false }
7 console.log(setIterator.next()); // { value: undefined, done: true }
8
9 // Use a for...of loop to create a new iterator and traverse again.
10 for (const value of set) {
11 console.log(value);
12 }
13 // 1
14 // 2
15 // 3

A Set object is also iterable. You can obtain its iterator with
Symbol.iterator and call next() to go through the elements. As before, a
for...of loop creates a new iterator for a fresh traversal.

[Code 1-21] Map as an Iterable

1 const map = new Map([['a', 1], ['b', 2], ['c', 3]]);
2 const mapIterator = map[Symbol.iterator]();
3
4 console.log(mapIterator.next()); // { value: ['a', 1], done: false }
5 console.log(mapIterator.next()); // { value: ['b', 2], done: false }
6 console.log(mapIterator.next()); // { value: ['c', 3], done: false }
7 console.log(mapIterator.next()); // { value: undefined, done: true }
8
9 // Use a for...of loop to create a new iterator and traverse again.
10 for (const [key, value] of map) {
11 console.log(`${key}: ${value}`);
12 }
13 // a: 1
14 // b: 2
15 // c: 3

Map is likewise iterable, allowing iteration via Symbol.iterator and
next(). You can traverse its entries again with for...of.

[Code 1-22] map.entries()

1 const mapEntries = map.entries();
2
3 console.log(mapEntries.next()); // { value: ['a', 1], done: false }
4 console.log(mapEntries.next()); // { value: ['b', 2], done: false }
5 console.log(mapEntries.next()); // { value: ['c', 3], done: false }
6 console.log(mapEntries.next()); // { value: undefined, done: true }
7
8 // Use a for...of loop to create a new iterator and traverse again.
9 for (const entry of map.entries()) {
10 console.log(entry);
11 }
12 // ['a', 1]
13 // ['b', 2]
14 // ['c', 3]

The map.entries() method returns the entries of a Map object as an
IterableIterator. After fully iterating with next(), you can call
map.entries() again to get a fresh iterator for use with for...of.

[Code 1-23] map.values()

1 const mapValues = map.values();
2
3 console.log(mapValues.next()); // { value: 1, done: false }
4
5 // Use a for...of loop to traverse the remaining values.
6 for (const value of mapValues) {
7 console.log(value);
8 }
9 // 2
10 // 3

Here, map.values() returns an IterableIterator of the Map’s values.
We retrieve the first value via next(), and then use a for...of loop to
traverse the remaining values. Because the iterator remembers its current
position, once the first value has been consumed, the loop begins from the
second value.

[Code 1-24] map.keys()

1 const mapKeys = map.keys();
2
3 console.log(mapKeys.next()); // { value: 'a', done: false }
4
5 // Use a for...of loop to traverse the remaining keys.
6 for (const key of mapKeys) {
7 console.log(key);
8 }
9 // b
10 // c

map.keys() returns an IterableIterator of the map’s keys. As with
map.values(), once you retrieve the first key with next(), the for...of
loop continues from the next key onward.

Interaction Between the Language and Iterables

JavaScript and TypeScript integrate iterables into many language
features. Below are examples of how iterables work with the spread
operator, destructuring, and more.

The Spread Operator and Iterables

The spread operator (...) expands an iterable object’s elements into
individual elements. This allows you to easily copy or merge arrays and
objects.
[Code 1-25] Merging Arrays

1 const array = [1, 2, 3];
2 const array2 = [...array, 4, 5, 6];
3
4 console.log(array2); // [1, 2, 3, 4, 5, 6]

With this concise syntax, every element of array is expanded at the
front of array2.

Any iterable object can be turned into an array using the spread
operator. For example, you can convert a Set into an array:

[Code 1-26] Convert Set to Array

1 const set = new Set([1, 2, 3]);
2 const array = [...set];
3
4 console.log(array); // [1, 2, 3]

The elements of a Set object can be converted into an array using the
spread operator. This is equivalent to Array.from(set).

The spread operator is also useful for passing elements of an iterable
as individual arguments to a function call:

[Code 1-27] Passing Arguments with the Spread Operator

1 const numbers = [1, 2, 3];
2
3 // Convert the parameter to a rest parameter
4 function sum(...nums: number[]): number {
5 return nums.reduce((a, b) => a + b, 0);
6 }
7
8 console.log(sum(...numbers)); // 6

This makes it especially convenient to define and use variadic
functions.
Destructuring and Iterables

Destructuring lets you assign elements from an iterable object to
individual variables, allowing you to extract specific values easily.

[Code 1-28] Destructuring

1 const array = [1, 2, 3];
2 const [first, second] = array;
3
4 console.log(first); // 1
5 console.log(second); // 2

Here, each element of the array is assigned to the respective variables
first and second.

You can also succinctly extract the first element (head) and the
remaining elements (tail) of an iterable:

[Code 1-29] head and tail

1 const array = [1, 2, 3, 4];
2 const [head, ...tail] = array;
3
4 console.log(head); // 1
5 console.log(tail); // [2, 3, 4]

head receives the first element, and tail is an array of the rest.

Below is an example of destructuring key-value pairs by using the
entries method of a Map:

[Code 1-30] Map, for...of, and Destructuring

1 const map = new Map();
2 map.set('a', 1);
3 map.set('b', 2);
4 map.set('c', 3);
5
6 for (const [key, value] of map.entries()) {
7 console.log(`${key}: ${value}`);
8 }
9 // a: 1
10 // b: 2
11 // c: 3

User-Defined Iterables and the Spread Operator

Finally, let’s try using the spread operator on a custom iterable
returned by naturals():

[Code 1-31] Spreading naturals

1 const array = [0, ...naturals(3)];
2 console.log(array); // [0, 1, 2, 3]

By leveraging the iteration protocol, the spread operator and
destructuring assignment provide a more effective way to manage data and
code in JavaScript and TypeScript. You can even make your own objects
iterable, as in [Code 1-31], enabling them to interoperate with many
language features. This significantly expands what developers can do.
Such flexibility is grounded in these core aspects of the iterator pattern,
introduced in Section 1.1:

The iterator pattern is designed to allow access to internal elements via
public methods like next() without exposing the underlying structure
of the collection. This makes it possible to traverse data in various
collection styles in a consistent manner, regardless of the collection’s
actual structure.

An iterable is any value whose Symbol.iterator method returns an
iterator. This method lets us check whether a value is iterable and convert or
iterate through it in a uniform way, no matter whether the underlying
structure is an Array, Set, Map, or something else.
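As an illustration of that idea (a sketch of my own, not code from the book), even a plain user-defined object participates in all of these language features once it provides a Symbol.iterator method:

```typescript
// A hypothetical user-defined iterable that counts down from `start` to 1.
const countdown = {
  start: 3,
  [Symbol.iterator]() {
    let n = this.start;
    return {
      next: () =>
        n > 0
          ? { value: n--, done: false }
          : { value: undefined, done: true }
    };
  }
};

console.log([...countdown]); // [3, 2, 1]
for (const n of countdown) {
  console.log(n); // 3, then 2, then 1 (a fresh iterator for each traversal)
}
```

Because [Symbol.iterator] creates a new iterator on every call, the object can be spread and traversed with for...of any number of times, just like the built-in collections above.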

Next, let’s look at the connection between generators and iterables—
specifically, how iterators produced by generators can themselves be
iterable.
Generator-Created Iterators Are Also Iterables

Below, we’ll recreate the map function from [Code 1-8]—this time
using a generator.

A map Function Implemented with Generators

Here, map is implemented as a generator function, which always
returns an IterableIterator.

[Code 1-32] map

1 function* map<A, B>(
2 f: (value: A) => B,
3 iterable: Iterable<A>
4 ): IterableIterator<B> {
5 for (const value of iterable) {
6 yield f(value);
7 }
8 }

This generator-based map takes a function f and an iterable object, then
yields the result of applying f to each element in the iterable.

Below is an example of using the new map function:

[Code 1-33] Using the Generator-Based map Function

1 const array = [1, 2, 3, 4];
2 const mapped: IterableIterator<number> = map(x => x * 2, array);
3 const iterator = mapped[Symbol.iterator]();
4
5 console.log(mapped.next().value); // 2
6 console.log(iterator.next().value); // 4
7 console.log([...iterator]); // [6, 8]

Because map is implemented as a generator, its return value
automatically satisfies the iteration protocol. Therefore, you can use
next(), [Symbol.iterator](), the spread operator, and so on:
1. map(x => x * 2, array) returns an IterableIterator<number>
where each element in array is multiplied by 2.
2. mapped.next() and iterator.next() refer to the same iterator, so
once an element is consumed, it doesn’t reappear.
3. Using the spread operator (...iterator) collects the remaining
elements into an array—[6, 8].

Using a Generator-Based Iterator with for...of

Since a generator-based iterator is also iterable, you can traverse it
with for...of. Here, we’ll combine it with the naturals() function, which
itself returns an iterator that is also iterable, making it compatible with map.

[Code 1-34] map((x) => x * 2, naturals(4))

1 let acc = 0;
2 for (const num of map((x) => x * 2, naturals(4))) {
3 acc += num;
4 }
5 console.log(acc); // 20

In [Code 1-34], we pass a lazily evaluated iterator (instead of a regular
array) to the map function. We never need to create an intermediate array;
we just accumulate all the values in acc.

Through these various examples, we’ve seen how generators, iterators,
iterables, the spread operator, destructuring, and for...of interact in
layered and nested ways. You can build iterators using regular functions or
generator functions, pass a regular function’s iterator to a generator, or pass
a generator’s iterator to a regular function. Along the way, you can use
for...of or next() to iterate, and yield to produce elements. TypeScript’s
iteration protocol offers great flexibility for mixing and matching different
programming paradigms.

We’ll now move on to explore functional programming with iterators,
further expanding our understanding of the iteration protocol.
1.4 Functional Programming with Iterables
In this section, we’ll implement the three functions—forEach, map,
and filter—in a variety of ways. By doing so, you’ll become familiar with
how to implement functions that utilize and adhere to the Iterator pattern
and iteration protocol.

Here, we’ll use plain JavaScript code without TypeScript type
definitions, so we can focus on manipulating iterables and iterators directly
and applying a given function to each element. In Chapter 2 (“Functional
Programming, Type Systems, and Lisp”), we’ll revisit these functions, add
types to them, and evolve them further.

forEach Function

The forEach function takes a function and an iterable, iterates through
the iterable, and applies the function to each element. It’s a higher-order
function.

[Code 1-35] function forEach(f, iterable) { for...of }

1 function forEach(f, iterable) {
2 for (const value of iterable) {
3 f(value);
4 }
5 }
6
7 const array = [1, 2, 3];
8 forEach(console.log, array);
9 // 1
10 // 2
11 // 3

This version of forEach uses a for...of loop to traverse each element
in the iterable, invoking the passed-in function f and passing the current
value as an argument. In the example above, we pass console.log,
applying the effect of printing each element of the array.
[Code 1-36] function forEach(f, iterable) { while }

1 function forEach(f, iterable) {
2 const iterator = iterable[Symbol.iterator]();
3 let result = iterator.next();
4 while (!result.done) {
5 f(result.value);
6 result = iterator.next();
7 }
8 }
9
10 const set = new Set([4, 5, 6]);
11 forEach(console.log, set);
12 // 4
13 // 5
14 // 6

This version of forEach manually handles the iterator, using a while
loop and direct calls to its next() method. Just as before, f is applied
to each value until done is true. Because Set is also iterable, we can pass it
to the same forEach function.

Both functions behave identically in these examples, and they closely
mirror how the language internally handles iteration. One small
difference is that if a for...of loop terminates early (or if an
exception occurs) and if the underlying iterator has a return()
method, for...of automatically calls return(). While not crucial to
this chapter’s discussion, it’s useful to know for a complete
understanding of the language.
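As a small aside, here is a sketch of that behavior (my own example, not from the book): breaking out of for...of calls the generator’s return(), which in turn runs any pending finally block:

```typescript
let cleanedUp = false;

function* numbers(): IterableIterator<number> {
  try {
    yield 1;
    yield 2;
    yield 3;
  } finally {
    cleanedUp = true; // runs when for...of exits early and calls return()
  }
}

const seen: number[] = [];
for (const n of numbers()) {
  if (n === 2) break; // for...of invokes the iterator's return() here
  seen.push(n);
}

console.log(seen);      // [1]
console.log(cleanedUp); // true
```

This cleanup hook is what makes it safe for lazy iterators to hold resources: an early exit from the consuming loop still gives the producer a chance to release them.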

map Function

In [Code 1-37], map is implemented with a generator. It uses a
for...of loop to walk through the iterable’s values, applying the passed-in
function f to each value and yielding the transformed result.

[Code 1-37] function* map(f, iterable) { for...of }

1 function* map(f, iterable) {
2 for (const value of iterable) {
3 yield f(value);
4 }
5 }
6
7 const array = [1, 2, 3];
8 const mapped = map(x => x * 2, array);
9 console.log([...mapped]); // [2, 4, 6]
10
11 const mapped2 = map(x => x * 3, naturals(3));
12 forEach(console.log, mapped2);
13 // 3
14 // 6
15 // 9

This map function accepts an iterable as input and returns an iterator
that is also iterable, so you can use the spread operator or for...of to
iterate over it. Consequently, it can be combined with naturals() (which
returns an IterableIterator) and forEach (which takes an iterable as an
argument).

[Code 1-38] function* map(f, iterable) { while }

1 function* map(f, iterable) {
2 const iterator = iterable[Symbol.iterator]();
3 while (true) { // (1)
4 const { value, done } = iterator.next(); // (2)
5 if (done) break; // (3)
6 yield f(value); // (4)
7 }
8 }
9
10 const mapped = map(([k, v]) => `${k}: ${v}`, new Map([['a', 1], ['b', 2]]));
11 forEach(console.log, mapped);
12 // a: 1
13 // b: 2

This version also uses a generator but handles iteration with a while
(true) loop and manual calls to next(). Although it looks slightly different
internally, it ultimately performs the same functionality.

1. Create an infinite loop.
2. Destructure the next() result.
3. If done is true, break.
4. Apply f to value and yield the result.
Here, we pass a Map as the iterable. Since each Map entry is itself
iterable, we can destructure [k, v], and then use forEach to process the
generated iterator.

[Code 1-39] function map(f, iterable) { return { next, ... } }

1 function map(f, iterable) {
2 const iterator = iterable[Symbol.iterator]();
3 return { // (1)
4 next() {
5 const { done, value } = iterator.next();
6 return done
7 ? { done, value }
8 : { done, value: f(value) }; // (2)
9 },
10 [Symbol.iterator]() { // (3)
11 return this;
12 }
13 };
14 }
15
16 const iterator = (function* () { // (4)
17 yield 1;
18 yield 2;
19 yield 3;
20 }());
21
22 const mapped = map(x => x * 10, iterator); // (5)
23
24 console.log([...mapped]); // [10, 20, 30]

1. This map function directly returns an IterableIterator object.
2. It defines a next method to apply f(value) for each value, returning it
if iteration continues.
3. It also provides a [Symbol.iterator] method so it adheres to the
iterable protocol.
4. By using an anonymous generator, an iterator is created sequentially
yielding 1, 2, and 3, and then passed into map.
5. map(x => x * 10, iterator) creates an iterator prepared to multiply
each element by 10.

In the end, mapped is an iterator that produces [10, 20, 30] once fully
consumed by the spread operator. The call to console.log([...mapped])
exhausts the iterator and logs the resulting array.
filter Function

The filter function creates a new iterator containing only those
elements from the given iterable that satisfy a given condition.

[Code 1-40] function* filter(f, iterable) { for...of }

1 function* filter(f, iterable) {
2 for (const value of iterable) {
3 if (f(value)) {
4 yield value;
5 }
6 }
7 }
8
9 const array = [1, 2, 3, 4, 5];
10 const filtered = filter(x => x % 2 === 0, array);
11 console.log([...filtered]); // [2, 4]

Here in [Code 1-40], filter is implemented with a generator and uses
a for...of loop to yield only those elements for which f(value) is truthy.

[Code 1-41] function* filter(f, iterable) { while }

1 function* filter(f, iterable) {
2 const iterator = iterable[Symbol.iterator](); // from here
3 while (true) { // ~
4 const { value, done } = iterator.next(); // ~
5 if (done) break; // to here is the same
6 if (f(value)) {
7 yield value;
8 }
9 }
10 }
11
12 const array = [1, 2, 3, 4, 5];
13 const filtered = filter(x => x % 2 === 0, array);
14 console.log([...filtered]); // [2, 4]

Comparing the while loop version with the for...of version of both
map and filter shows that the external loop skeleton, including the
termination check on done, stays the same; only what is done with each
value—applying f or testing it—changes. Practicing this pattern makes it
easier to implement higher-order functions in functional programming.
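To practice, the same skeleton extends naturally to other higher-order functions. Here is a hypothetical take (not one of the functions defined so far in the text) built with the identical while (true)/next() loop; only the stop condition is new:

```javascript
function* take(limit, iterable) {
  const iterator = iterable[Symbol.iterator]();
  while (true) {
    const { value, done } = iterator.next();
    if (done) break;            // same skeleton as map and filter
    yield value;
    if (--limit === 0) break;   // only this extra stop condition is new
  }
}

// A hypothetical infinite sequence (distinct from the finite naturals(n)
// used elsewhere in the chapter) - safe to consume thanks to take.
function* naturals() {
  let n = 1;
  while (true) yield n++;
}

console.log([...take(3, naturals())]); // [1, 2, 3]
```

Because take stops pulling from the iterator after `limit` elements, it can consume even an infinite source lazily.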
[Code 1-42] function filter(f, iterable) { return { next, ... } }

1 function filter(f, iterable) {
2 const iterator = iterable[Symbol.iterator]();
3 return {
4 next() {
5 const { done, value } = iterator.next();
6 if (done) return { done, value }; // (3)
7 if (f(value)) return { done, value }; // (1)
8 return this.next(); // (2) recursive call
9 },
10 [Symbol.iterator]() {
11 return this;
12 }
13 };
14 }
15
16 console.log(...filter(x => x % 2 === 1, [1, 2, 3, 4, 5])); // 1 3 5

From [Code 1-42], the filter function returns an iterator object. Its
next method implements the following logic:

1. If the predicate f(value) is satisfied for the current element, it
returns { done, value }. Here, done is false and value is the current
item.
2. If not, it calls this.next() recursively, continuing iteration until
finding a matching element or reaching the end.
3. When the underlying iterator created by the iterable is finished (done:
true), it simply returns { done, value } to signal the end. In this case,
done is true and value is undefined.

Because this version ([Code 1-42]) of filter uses recursive calls
instead of a loop, it looks concise and object-oriented. Also, this code
supports Tail Call Optimization(TCO). For TCO to be applied, the last
function call must be a recursive call that is directly returned. In this code,
the structure allows TCO because the this.next() call in (2) is the last
operation, meaning its result is directly returned. Although ES6 specifies
TCO, it’s unfortunately not implemented in the V8 engine, so you still risk
a stack overflow with very large collections.

Below are two variations that replace recursion with a do...while or
while loop instead of TCO, preserving the structure and clarity while
avoiding the risk of stack overflow.

[Code 1-42a] Converting to do...while or while

1 function filter(f, iterable) {
2 const iterator = iterable[Symbol.iterator]();
3 return {
4 next() {
5 do {
6 const { done, value } = iterator.next();
7 if (done) return { done, value };
8 if (f(value)) return { done, value };
9 } while (true); // functionally similar to the recursive call
10 },
11 [Symbol.iterator]() {
12 return this;
13 }
14 };
15 }
16
17 // A slightly more concise while version:
18 function filter(f, iterable) {
19 const iterator = iterable[Symbol.iterator]();
20 return {
21 next() {
22 while (true) {
23 const { done, value } = iterator.next();
24 if (done) return { done, value };
25 if (f(value)) return { value };
26 }
27 },
28 [Symbol.iterator]() {
29 return this;
30 }
31 };
32 }

Since both variations enclose the logic in an infinite loop instead of
recursion, they maintain a similar structure while optimizing for safety and
efficiency in handling large collections. Among modern languages, both
Scala and Kotlin support tail recursion optimization internally, which means
such code could be expressed recursively without risking stack overflow. In
Scala, you can even annotate functions with @tailrec to ensure full TCO in
many scenarios, while Kotlin supports tailrec for certain patterns. These
approaches resemble the method used in [Code 1-42a], which achieves the
same effect by replacing recursion with a loop.
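Besides rewriting the recursion as a loop, another well-known workaround in engines without TCO is a trampoline: the function returns a thunk (a zero-argument function) instead of calling itself, and a small driver loop bounces on the thunks until a plain value comes back. This is a general technique, not code from the book; a minimal sketch with a hypothetical countTo:

```javascript
// A tail-recursive count written trampoline-style: instead of making the
// recursive call directly, the function defers it inside a thunk.
function countTo(n, acc = 0) {
  return acc >= n
    ? acc                          // base case: a plain value
    : () => countTo(n, acc + 1);   // the "tail call", deferred as a thunk
}

// The driver keeps the stack flat by calling thunks in a loop.
function trampoline(result) {
  while (typeof result === 'function') result = result();
  return result;
}

console.log(trampoline(countTo(100000))); // 100000 - no stack overflow
```

Each bounce returns to the trampoline before the next call begins, so the call stack never grows no matter how many iterations run.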
Composing Higher-Order Functions ((()))

Let’s combine the functions we’ve written to tackle a more complex
problem:

[Code 1-43] Composing Higher-Order Functions

1 forEach(console.log,
2 map(x => x * 10,
3 filter(x => x % 2 === 1,
4 naturals(5))));
5 // 10
6 // 30
7 // 50

The code looks somewhat nested, but this is a common style in Lisp-like
languages, sometimes considered elegant. One way to read it easily is
from the bottom-right to the top-left:

“Generate naturals(5), filter so that x % 2 === 1, then map each
result to x * 10, and finally pass all values to console.log.”

In this code, naturals(5) creates an iterator that sequentially
generates natural numbers from 1 to 5. filter takes this iterator and creates a
new one that filters only odd numbers. Remember, this is a lazy iterator.

This lazy iterator is, in turn, the argument of map. map applies x * 10
to each element: the map generator yields f(value) for each value, so map
ultimately returns an iterator that sequentially produces [10, 30, 50].

This is passed into forEach, which runs the given function on each
element of the iterator, printing each value by calling console.log.
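The whole pipeline can be run as one self-contained sketch; naturals here is written as a simple generator standing in for the version defined earlier in the book:

```javascript
function* naturals(end = Infinity) { // 1, 2, 3, ... up to end
  let n = 1;
  while (n <= end) yield n++;
}

function* map(f, iterable) {
  for (const value of iterable) yield f(value);
}

function* filter(f, iterable) {
  for (const value of iterable) if (f(value)) yield value;
}

function forEach(f, iterable) {
  for (const value of iterable) f(value);
}

forEach(console.log,
  map(x => x * 10,
    filter(x => x % 2 === 1,
      naturals(5))));
// 10
// 30
// 50
```

Until forEach starts consuming, nothing runs: each value flows one at a time through filter and map only when it is requested.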

A Fun Twist on filter

[Code 1-44] function* filter(f, iterable) { [].filter() }

1 function* filter(f, iterable) {
2 for (const value of iterable) {
3 yield* [value].filter(f);
4 }
5 }
6
7 const array = [1, 2, 3, 4, 5];
8 const filtered = filter(x => x % 2 === 0, array);
9 console.log([...filtered]); // [2, 4]

This slightly whimsical version wraps each element in a single-element
array, then uses Array.prototype.filter instead of an if
statement. With yield*, if the array ends up empty, nothing is yielded; if it
has one element, that element is yielded.

Even though this might look unusual, it still supports lazy evaluation
and has O(n) complexity because each element is visited once. There’s a
minor overhead of creating single-element arrays and calling
Array.prototype.filter, but it’s negligible in most practical scenarios.

You don’t need to take this trick too seriously, but it shows how you
can creatively combine the iteration protocol. It may spark new ideas for
working with iterables. Hopefully, it also provides a bit of coding fun.
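The same trick works for map: wrap each element in a single-element array and delegate to Array.prototype.map. This is just a playful variation in the same spirit, not the map used elsewhere in the book:

```javascript
function* map(f, iterable) {
  for (const value of iterable) {
    yield* [value].map(f); // delegate the transformation to Array.prototype.map
  }
}

console.log([...map(x => x * 10, [1, 2, 3])]); // [10, 20, 30]
```

Since [value].map(f) always yields exactly one element, this map behaves identically to the for...of version, including lazy evaluation.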

Up to now, we’ve seen various ways to implement forEach, map, and
filter, as well as how to use them in conjunction with iterators and higher-
order functions. In the next chapter, we’ll explore why the iterator design
was so important from an object-oriented perspective, delving deeper into
its role and benefits.

1.5 Why the Iteration Protocol Is Designed as an Interface Rather Than Inheritance
One of the most familiar concepts in object-oriented programming (OOP) is
inheritance. Inheritance is an essential concept for abstracting code and a
useful mechanism for sharing functionality. It’s also frequently used in real-
world development.

However, the iterator pattern, along with helper functions that support
iterators, is based on interface (or protocol/trait), rather than using
inheritance. Why is this approach preferred? In this section, we’ll discuss
why modern languages tend to discourage inheritance in language design
principles and instead favor interfaces (or protocols, traits, etc.).

Note: What “inheritance” and “interface” refer to in this chapter

Inheritance: In this chapter, “inheritance” refers to class-level
inheritance in TypeScript—e.g., class A extends B {}—where
a subclass inherits both its properties and methods along with the
implementation details of a parent class.
Interface: Here, “interface” refers to defining only the signatures
(methods/properties and their types), which a class can then
implement using class X implements Y {}. In TypeScript,
interfaces do not contain implementations—only type signatures.
Inheritance vs. Interface: When we compare inheritance and
interface here, we mean that “inheritance” is about sharing and
extending implementation details between classes, whereas
“interface” is about specifying signatures that implementing
classes or literal objects are responsible for implementing.
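TypeScript checks implements at compile time, but the same idea can be expressed in plain JavaScript purely by shape: any object that provides the right method satisfies the iteration protocol, with no inheritance involved. A sketch with a hypothetical Range class:

```javascript
// Range satisfies the iterable "interface" purely by implementing
// [Symbol.iterator] - it does not extend Array or anything else.
class Range {
  constructor(start, end) {
    this.start = start;
    this.end = end;
  }
  *[Symbol.iterator]() {
    for (let n = this.start; n <= this.end; n++) yield n;
  }
}

const range = new Range(1, 4);
console.log([...range]);         // [1, 2, 3, 4]
console.log(Math.max(...range)); // 4 - works anywhere an iterable does
```

Because Range conforms to the protocol, every iteration-based language feature and every helper we wrote in this chapter accepts it immediately.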

Even NodeList in the Web API Is Iterable

If you work with the DOM in JavaScript, you frequently encounter
NodeList. A NodeList represents a collection of nodes in the document.
This means you can seamlessly use NodeList with iteration-based features
like the for...of loop and the spread operator.

[Code 1-45] for...of

1 <ul>
2 <li>1</li>
3 <li>2</li>
4 <li>3</li>
5 <li>4</li>
6 <li>5</li>
7 </ul>
8 <script>
9 // Select all li elements
10 const nodeList = document.querySelectorAll('li');
11
12 // Use a for...of loop to iterate over the NodeList
13 for (const node of nodeList) {
14 console.log(node.textContent);
15 // 1
16 // 2
17 // 3
18 // 4
19 // 5
20 }
21 </script>

In the example above, document.querySelectorAll('li') returns all
<li> elements as a NodeList. Since NodeList is iterable, we can iterate
over each node using for...of.

Naturally, we can also use the iterable functions we created earlier:

[Code 1-46] Using Our Custom Iterable Functions

1 forEach(console.log,
2 filter(x => x % 2 === 1,
3 map(node => parseInt(node.textContent),
4 document.querySelectorAll('li'))));
5 // 1
6 // 3
7 // 5
8
9 forEach(element => element.remove(),
10 filter(node => parseInt(node.textContent) % 2 === 0,
11 document.querySelectorAll('li')));
12 // removed: <li>2</li>
13 // removed: <li>4</li>

In the first code snippet, we log the odd numbers (1, 3, 5), and in the
second, we remove <li>2</li> and <li>4</li> from the page by filtering
for even text content.

Why Do We Need to Solve This Using an Interface Instead of Inheritance?

Let’s explore why we need to solve this using an interface (the
iteration protocol) instead of inheritance.
Why Use Iterables at All? Can’t We Just Use .map and .filter
Instead?

In JavaScript, arrays already support higher-order functions like .map,
.filter, and .forEach.

[Code 1-47] Trying Array.prototype.map on a NodeList

1 const nodes: NodeList = document.querySelectorAll('li');
2
3 console.log(nodes[0], nodes[1], nodes.length);
4 // <li>1</li> <li>3</li> 3
5 // nodes looks like an Array.
6
7 nodes.map(node => node.textContent);
8 // Uncaught TypeError: nodes.map is not a function

The code above ([Code 1-47]) will throw an error because NodeList is
not an Array. Not only is NodeList not an Array, but as shown in
[Code 1-48], it is defined differently and cannot directly use Array methods.

[Code 1-48] NodeList Interface

1 interface NodeList {
2 readonly length: number;
3 item(index: number): Node | null;
4 forEach(
5 callbackfn: (value: Node, key: number, parent: NodeList) => void,
6 thisArg?: any
7 ): void;
8 [index: number]: Node;
9 }

By contrast, in the example from [Code 1-46], the iteration protocol
allows filter, map, and forEach to be used directly on a NodeList.

Why weren’t these iterable data structures simply made to inherit from Array?

In OOP, inheritance is indeed a useful tool for abstracting code and
sharing functionality. Yet in JavaScript and TypeScript’s standard libraries,
you won’t find built-in classes inheriting from Array. Data structures like
Map, Set, and NodeList may need some of the same core features as an
Array, but they don’t inherit from it. Why?

The short answer is that they represent distinct data structures, each
designed with its own unique characteristics and behaviors. They are not
subsets of Array—neither externally nor internally. Tying them to Array
through inheritance would create unnecessary complexity, hamper each
structure’s ability to be optimized, and force them to evolve in lockstep. It
also violates the overall design principles of JavaScript, and most other
languages follow a similar approach. For instance, in Java, List, Set, and
Map are separate interfaces for different structures, rather than classes all
inheriting from a single parent.

Let’s take a closer look and expand our understanding of object-oriented paradigms.

Array follows the typical characteristics and behavior of arrays and is
optimized for contiguous storage of elements accessed by numeric
index.
Map stores key-value pairs, where each key is unique. It has no inherent
order, but it provides quick key-based lookups.
Set holds unique values, also with no inherent ordering, enabling fast
membership checks.

Forcing these fundamentally different data structures to conform to
Array’s characteristics and behavior through inheritance would be unnatural
and inefficient.

But Isn’t NodeList Essentially ArrayLike with index and length?

NodeList is a specialized data structure maintaining the order of
elements in the DOM tree, primarily designed for DOM manipulation.
NodeList comes in two modes: “live” and “static”. A live NodeList
updates automatically when the DOM changes.
Array is static once created and must always be manually updated to
add or remove elements.
JavaScript engines manage arrays in a specific, optimized way for
memory usage and performance.

In other words, forcing structurally and functionally distinct objects to
share dependencies would introduce unwanted complexity and prevent
optimization.

The Unintended Consequences One Decision Could Cause

Imagine there is someone responsible for defining language and
browser standards. One day, while adding a new feature, they decide to
make NodeList inherit from Array—and they ship that change. A month
later:

They realize they need to modify Array, only to find that
maintaining it has become significantly more complex due to its
inherited dependencies, including NodeList.
Or they discover a bug in NodeList and want to revert it to a non-
inheriting structure—but in the meantime, tons of developers have
already deployed code relying on it.
Even if they try to remove the inheritance relationship, they must
retain all Array functionalities in NodeList to ensure backward
compatibility.

This example shows why language-level designs avoid using
inheritance too liberally.

How Do We Share Common Logic instead?

We’ve already seen that by establishing an interface—the iteration
protocol—and by building higher-order functions around it, we can
handle diverse data structures consistently, without inheritance. Each
data structure retains its unique characteristics while adhering to the
same fundamental interface, allowing seamless interoperability.
Concretely, whether we’re dealing with an Array, NodeList, Map, a
generator, or a custom user-defined iterable, the iteration protocol:

1. Solves external-structure polymorphism via the “iterable interface.”
2. Deals with internal data polymorphism by passing a function to your
higher-order function (e.g., map, filter), which determines how each
element is processed.
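Both points can be seen in one sketch: a single generator-based map (as defined earlier in the chapter) consumes an Array, a Set, a Map, and a generator through the same interface, while the passed-in function decides what happens to each element:

```javascript
function* map(f, iterable) {
  for (const value of iterable) yield f(value);
}

// One function, four different data structures - no shared parent class.
console.log([...map(x => x * 2, [1, 2, 3])]);                        // [2, 4, 6]
console.log([...map(x => x * 2, new Set([1, 2, 3]))]);               // [2, 4, 6]
console.log([...map(([k, v]) => `${k}=${v}`, new Map([['a', 1]]))]); // ['a=1']
console.log([...map(x => x * 2, (function* () { yield 10; })())]);   // [20]
```

The outer structure is handled by the iterable interface; the inner, per-element behavior is handled by the function argument.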

A notable distinction is that Array, Map, and Set are part of
JavaScript’s standard library, while NodeList is provided by browser
implementations.

Finally, note that the iteration protocol is grounded in the iterator
pattern, which is a classic OOP design pattern. By adopting the iterator
pattern, we can separate common logic from specific data structures and
provide a well-designed, maintainable approach—one that modern
programming heavily relies on.

Interfaces vs. Class Inheritance

Interfaces define a contract or specification for classes or objects to
implement, enabling different classes to interact in a consistent way. By
enforcing a shared behavior, interfaces foster polymorphism and code
flexibility. When multiple classes implement the same interface, they all
have the same method signatures, which can be invoked uniformly.

Inheritance, on the other hand, is the process of extending an existing
class to reuse its attributes and methods. It boosts code reusability and
extension possibilities. However, overusing inheritance incorrectly can
tightly couple classes, complicating maintenance.

We’ve focused on the benefits of interfaces in this chapter, which
might give the impression that interfaces are inherently superior to
inheritance. However, they serve different purposes:

Interfaces specify a contract that different classes fulfill to behave
consistently.
Inheritance shares an actual implementation that can be extended or
modified.
Interfaces often appear at the language or library design level.
Inheritance is common at the SDK or application level—for example,
extending a base window class in a GUI framework to create a
customized window.

In later sections of this book, we’ll explore scenarios where inheritance
is more appropriate. Ultimately, knowing when to choose an interface and
when to use inheritance is crucial for writing good code.

1.6 Summary
The Age of Multiparadigm

Today, most programming languages have evolved into multiparadigm
languages. In the past, one would choose a functional language for
functional programming and an object-oriented language for object-oriented
programming. Now, a single language often supports imperative, object-
oriented, and functional paradigms. This shift began around 2010, and by
2020 nearly all major languages had adopted these features.

The Cooperation of Multiparadigm

In a recent article, Mike Loukides stated that “we need to learn how to
use multiparadigm languages that provide functional, object-oriented, and
imperative paradigms more efficiently, and it’s time to blend paradigms that
were once distinct—and even in conflict—for a long time.” Each paradigm
has its own merits, but the true strength of a multiparadigm language lies in
being able to choose or combine paradigms best suited to a given problem.
For instance, object-oriented programming excels at encapsulating state and
behavior, whereas functional programming emphasizes pure functions and
immutability to improve predictability.
The Iterator Pattern and First-Class Functions

Among object-oriented design patterns, the Iterator pattern provides a
protocol for sequentially accessing the elements of a collection in a
consistent way. Meanwhile, first-class functions—a core concept in
functional programming—allow functions to be treated as values that can
be passed as arguments or returned by other functions. Combining these
two concepts enables key functional programming principles in a
multiparadigm language, such as lazy evaluation and list processing.

Lazy Evaluation and Higher-Order Functions

Lazy evaluation defers computation until it’s actually needed,
reducing memory usage and operational costs while improving
performance. Higher-order functions are functions that either accept other
functions as arguments or return them. Common list-processing functions
like map, filter, and reduce leverage both lazy evaluation and higher-
order function techniques to deliver better performance and readability.

Generator Functions

Generator functions offer a syntactic mechanism for creating iterators
in an imperative style. By using function* and the yield keyword, you can
neatly implement logic that returns values one by one, making it
straightforward to express code that supports lazy evaluation.

The TypeScript Iteration Protocol

Iterable objects have a structure that allows them to be traversed via an
iterator. In TypeScript (and JavaScript), you can combine iterable objects
with language features like the for...of loop, the spread operator, and
destructuring to write more flexible and concise code. TypeScript provides
a refined type system for the iteration protocol, enabling you to clearly
define each element’s type and further enhance code safety.
Custom Iterables

Developers can implement the iteration protocol for their own objects,
creating user-defined iterables. This allows tight integration with the
language’s built-in features and opens up a variety of problem-solving
approaches or even language-level extensions.

Interfaces Instead of Inheritance

While inheritance remains vital in object-oriented programming, using
interfaces and the iteration protocol can often be more efficient for handling
diverse data structures in a consistent way. For instance, NodeList, Map, and
Set all have different internal behaviors. Instead of relying on inheritance,
they share common logic through interfaces and adhere to the iterator
pattern-based iteration protocol, allowing for more flexible manipulation of
various data structures.

The Crossroads of Multiparadigm: The Iterator Pattern and First-Class Functions

The iterator pattern and first-class functions are crucial foundations—
and key crossroads—for implementing functional programming in
multiparadigm languages. They enable advanced list processing, improve
performance and readability, and let you take full advantage of the powerful
features in a multiparadigm language. In this book, we will delve deeper
into these concepts, exploring how to combine various paradigms in order
to write elegant and efficient code.

2. Functional Programming, Type Systems, and Lisp
Programming paradigms and languages are constantly evolving.
Today, we live in an era of multiparadigm languages, where object-oriented,
functional, and imperative paradigms coexist. Among these, the functional
paradigm helps write predictable and readable code through concepts like
immutability, pure functions, higher-order functions, and lazy evaluation.
These functional techniques are being adopted by more and more
languages, including JavaScript and TypeScript.

In this chapter, we will delve into the core principles of functional
programming and explore how TypeScript’s type system supports them.
We’ll learn how to leverage type inference, higher-order functions and
function types, generics, and other type features to write safer, more
maintainable code. We’ll also look at how to combine object-oriented
classes and functional functions effectively.

Additionally, we’ll examine some core ideas from Lisp, often
considered the root of functional programming. Lisp’s unique approach of
treating code itself as list-structured data enables flexible and powerful
metaprogramming. Many of Lisp’s ideas have profoundly influenced
modern multiparadigm languages, and JavaScript and TypeScript likewise
incorporate key concepts that can be traced back to Lisp.

By the end of this chapter, you’ll understand how functional
programming interacts with a type system, as well as the charm and elegant
philosophy behind Lisp. Building on this foundation, we’ll explore how to
write safer, more efficient, and more readable code.

2.1 Type Inference, Function Types, and Generics
TypeScript adds a powerful type system to JavaScript, improving code
stability and readability. With TypeScript’s type inference, developers can
write safe code even without explicit type declarations. Higher-order
functions and generics allow for the implementation of complex functional
programming patterns. Additionally, combining object-oriented classes with
functional functions lets you write more flexible and powerful code.
Type Inference

Type inference in TypeScript is a feature where the compiler
automatically infers the types of variables, functions, expressions, and so
on, even if they are not explicitly declared. This makes it possible to
maintain concise code while still ensuring type safety.

Basic Concept of Type Inference

In the following code, a does not have an explicitly declared type, but
TypeScript infers a to be of type number from the value 10. Consequently, if
you try to assign a value of a different type to a later, a type error occurs.

[Code 2-1] Basic Example

1 let a = 10;

In a situation like [Code 2-1], there is no need to declare the type
explicitly. IDEs also provide guidance on how TypeScript infers types,
making both coding and maintenance more convenient.

Type Inference for Variables and Constants

When you declare a variable in TypeScript, the compiler infers its type
from the value you initialize it with.

[Code 2-2] Variable Type Inference

1 let message = "Hello, TypeScript!";

In [Code 2-2], the type of message is inferred to be string without an
explicit type declaration.

Below is an example explaining how TypeScript infers the types of
constants. In the following code, the comments show the type hints an
IDE would display when hovering over each identifier.
[Code 2-3] Constant Type Inference

1 const selected = true;
2 // [const selected: true]
3
4 let checked = true;
5 // [let checked: boolean]

In [Code 2-3], because selected is declared with const and cannot be
reassigned, its value cannot change, so its type is inferred to be true. On
the other hand, because checked is declared with let and can be reassigned,
its type is inferred to be boolean.

Function Return Type Inference

TypeScript can automatically infer the return type of a function.

[Code 2-4] Return Type Inference

1 function add(a: number, b: number) {
2 return a + b;
3 }

In [Code 2-4], the add function takes parameters of type number and
returns a number. Even though we have not explicitly specified a return
type, TypeScript infers from a and b that the return type is number.

In the same code as shown in [Code 2-5], the return type is also
inferred to be string if you change the types of a and b to string.

[Code 2-5] Return Type Inference 2

1 function add(a: string, b: string) {
2 return a + b;
3 }

[Code 2-6] Return Type Inference 3

1 function add(a: string, b: string) {
2 return parseInt(a) + parseInt(b);
3 }

TypeScript recognizes that parseInt(a) and parseInt(b) return
values of type number, so it infers each result to be number and concludes
that the return type of add is number.

Hence, you can explicitly add a return type, which aligns with
TypeScript’s type inference as shown in [Code 2-7].

[Code 2-7] Return Type Inference 4

1 function add(a: string, b: string): number {
2 return parseInt(a) + parseInt(b);
3 }

Type Inference in Object Properties

TypeScript can infer the types of properties in object literals.

[Code 2-8] Inferring Types of Object Properties

1 let user = {
2 name: "Marty",
3 age: 30
4 };

In [Code 2-8], the name property of the user object is inferred as
string, and the age property is inferred as number.

Type Inference for Function Parameters

TypeScript can infer the types of function parameters as well.

[Code 2-9] Function Parameter Types

1 let strs = ['a', 'b', 'c'];
2 strs.forEach(str => console.log(str.toUpperCase())); // [str: string]

In [Code 2-9], TypeScript infers that strs is a string[].
Consequently, the forEach method can infer that the type of str in the
arrow function is string, based on the element type of the strs array. This
allows IntelliSense to suggest toUpperCase() and compile successfully.

I particularly like this feature of TypeScript’s type inference. Because a
higher-order function infers the parameter type of the passed-in function,
we can keep our arrow functions concise. It maintains type safety while
preserving the brevity of arrow functions.

In this chapter, we will learn how type inference works with
user-defined higher-order functions in a similar way.

Type Inference with Generics

Generics enable a single function in TypeScript to handle multiple
types, thus creating functions with a high degree of polymorphism. The
identity function, which returns the same type as the input, is a classic
example of a generic function. It uses the generic type parameter T. This
function receives an argument arg of type T and returns a value of the same
type T.

To clarify, TypeScript determines the actual type of T from the
argument arg and applies that same type as the return type while evaluating
the expression that calls identity.

[Code 2-10] identity

1 function identity<T>(arg: T): T {
2 return arg;
3 }
4
5 const a = identity("hi"); // [const a: "hi"]
6 // Passing "hi" to identity determines T as "hi", making a's type "hi".
7
8 const b = identity(1); // [const b: 1]
9 // Passing 1 to identity determines T as 1, making b's type 1.
10
11 const c = identity<string>("a"); // [const c: string]
12 // Explicitly specify the generic type parameter T as string,
13 // and pass "a" as the argument.
14 // This sets c's type to string.
15
16 const d = identity<number>(1); // [const d: number]
17 // Explicitly specify the generic type parameter T as number,
18 // and pass 1 as the argument.
19 // This sets d's type to number.
20
21 class User {}
22 const e = identity(new User()); // [const e: User]
23 // Passing an instance of User to identity determines T as User,
24 // making e's type User.
25
26 const f = identity((n: number) => n % 2 === 1); // [const f: (n: number) => boolean]
27 // Passing the function (n: number) => n % 2 === 1 to identity.
28 // TypeScript infers the function's type as
29 // (n: number) => boolean and sets f's type accordingly.

This example shows how the identity function uses generics to
accept various types. The actual type of T is determined by the argument
arg, and the return type is the same T.

Type inference in TypeScript is crucial for enhancing both code
readability and safety. It lets developers maintain high productivity despite
having a robust type system in place.

Function Types and Generics

TypeScript supports functional programming by providing features
such as higher-order functions, function types, and generics. First, explicitly
defining a function type makes it clear what input and output types the
function handles. Next, generics let you create more general-purpose
functions that can support a broad range of types. Especially in higher-order
functions, TypeScript infers the types of function arguments and can
flexibly infer types in conjunction with other parameters.

Multiple Ways to Define Function Types

TypeScript offers various ways to define function types, helping you
clarify a function’s signature and ensure code stability and readability.

The most straightforward approach is to specify the types of both
parameters and return values in the function itself. This makes the
function’s signature explicit, enhances readability, and guarantees type
safety.

[Code 2-11] Function

1 function add(a: number, b: number): number {
2 return a + b;
3 }
4
5 const result: number = add(2, 3); // 5

In [Code 2-11], the add function takes two number parameters and
returns a number. By specifying the function’s signature, TypeScript can
prevent type errors at call sites.

TypeScript supports function overloading, allowing multiple signatures
under the same function name. This increases flexibility and enables the
function to handle various input types.

[Code 2-12] Function Overload

1 function double(a: number): number;
2 function double(a: string): string;
3 function double(a: number | string): number | string {
4 if (typeof a === 'number') {
5 return a * 2;
6 } else {
7 return a + a;
8 }
9 }
10
11 const num: number = double(10); // 20
12 const str: string = double('Hi'); // 'HiHi'

In this example, the double function is overloaded to handle both
number and string. The typeof operator serves as a runtime type guard,
enabling different logic depending on whether a is a number or a string.
In TypeScript, this approach is referred to as type narrowing by a type
guard.

At compile time, TypeScript also correctly infers the type within the if
block. If a passes the check typeof a === 'number', it is inferred as
number. Otherwise, it is inferred as string.

Thanks to this inference, TypeScript provides type safety even before running the code.

[Code 2-13] Arrow Function

Arrow functions offer concise syntax and are also useful when
defining function types. You can specify the parameter and return types to
ensure type safety when using arrow functions.

1 const multiply = (a: number, b: number): number => a * b;
2
3 const num: number = multiply(4, 5); // 20

In this way, explicitly specifying both parameter and return types helps ensure type safety. However, since TypeScript has powerful type inference, specifying only the parameter types is often sufficient.

[Code 2-14] Arrow Function (Type Inference)

1 const multiply = (a: number, b: number) => a * b;
2
3 const num: number = multiply(4, 5); // 20

In [Code 2-14], we only declare the parameter types, and TypeScript infers the return type as number. This demonstrates how you can keep your code concise while preserving type safety.

[Code 2-15] Function Type Alias

1 type Add = (a: number, b: number) => number;
2
3 const add: Add = (a, b) => a + b;

In this example, a function type alias named Add is defined to represent a function of the form (a: number, b: number) => number. Then, you can declare a function variable add with the type Add, ensuring that it is implemented accordingly. Defining a function type as an alias allows for convenient reuse wherever the same function signature is needed.
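Type aliases also compose with generics. As a brief sketch (the BinaryOp name is illustrative, not from this chapter), a single generic alias can describe a whole family of same-shaped functions:

```typescript
// BinaryOp<T> is a hypothetical generic alias: any function
// that combines two values of type T into another T.
type BinaryOp<T> = (a: T, b: T) => T;

const addNums: BinaryOp<number> = (a, b) => a + b;
const concat: BinaryOp<string> = (a, b) => a + b;

console.log(addNums(2, 3)); // 5
console.log(concat('Hi', '!')); // Hi!
```

Anywhere the same signature recurs, such an alias keeps declarations short and consistent.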

Having covered various methods for defining function types, let’s move on to how higher-order functions and generics can be used to implement more complex and flexible functional programming patterns.

The constant Function and Generics

The constant function returns a function that always yields the value
passed to it. Essentially, this function captures a specific value and returns it
every time the returned function is called. By implementing it as a generic,
you can handle different types of values seamlessly.

[Code 2-16] The constant Function and Types

1 function constant<T>(a: T): () => T {
2   return () => a;
3 }
4
5 const getFive = constant(5);
6 const ten: number = getFive() + getFive();
7 console.log(ten); // 10
8
9 const getHi = constant("Hi");
10 const hi2: string = getHi() + getHi();
11 console.log(hi2); // HiHi

In this example, calling constant(5) creates a function getFive, which is then called twice, and the results are added together and stored in ten. Likewise, calling constant("Hi") creates a function getHi, which is called twice, and the results are concatenated and stored in hi2.

By using the constant function with generics, it can handle any type
of value. Thanks to TypeScript’s type inference, it works correctly without
needing explicit type declarations. Here is a detailed explanation:

1. By writing <T> next to the constant function, we declare that we will use a generic type T in this function.
2. We define the type of the parameter a as T.
3. The constant function accepts a value of type T and returns a function
that returns a value of type T.
4. We specify the return type of the returned function as () => T,
indicating that it takes no parameters and returns T.
5. Because constant is called with the argument 5, T is inferred as
number, making the return type of getFive also number.
6. Because constant is called with the argument “Hi”, T is inferred as
string, making the return type of getHi also string.
7. By leveraging generics, the constant function can handle any type of
value, and type inference ensures the return type of the resulting
function is accurate.

Note: Unlike the identity function, where passing 5 or "Hi" would immediately infer T as number or string, TypeScript tends to infer broader types when dealing with the arguments or return values of higher-order functions that manipulate first-class functions. While this detail is not critically important, it is good to keep in mind.
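For reference, the identity function mentioned in the note can be sketched as follows; because the value sits directly in the argument position, T is inferred immediately from the call site:

```typescript
// identity returns its argument unchanged; T is inferred per call.
function identity<T>(a: T): T {
  return a;
}

const five = identity(5);   // T inferred as number
const hi = identity("Hi");  // T inferred as string
console.log(five, hi); // 5 Hi
```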

We have now covered the core concepts of type inference, function types, and generics. Practicing these features will help you write safer code in a multi-paradigm language like TypeScript.

2.2 Functional Type Systems in a Multi-Paradigm Language
Up to now, we’ve covered fundamental concepts and syntax for type
inference, function types, and generics—all prerequisites for applying a
type system to functional higher-order functions. In this section, we’ll
examine how to apply a type system to iterable helper functions, shedding
light on how functional type systems work in a multi-paradigm language.

Revisiting the Iteration Protocol and Types

We will soon create higher-order functions that combine first-class functions with Iterator, Iterable, and IterableIterator. To do this, you should be well versed in the code and concepts introduced in Chapter 1. The following section reaffirms the key ideas we’ll need as we move forward.

[Code 2-17] Revisiting the Iterator, Iterable, and IterableIterator Types

1 interface IteratorYieldResult<T> {
2   done?: false;
3   value: T;
4 }
5
6 interface IteratorReturnResult {
7   done: true;
8   value: undefined;
9 }
10
11 interface Iterator<T> { // A partial version of TS’s built-in `Iterator` interface
12   next(): IteratorYieldResult<T> | IteratorReturnResult;
13 }
14
15 interface Iterable<T> {
16   [Symbol.iterator](): Iterator<T>;
17 }
18
19 interface IterableIterator<T> extends Iterator<T> {
20   [Symbol.iterator](): IterableIterator<T>;
21 }

Below is a checklist for review. We will learn how to apply the type
system based on these concepts:

1. You’re aware of the three main iteration-protocol items: Iterator, Iterable, and IterableIterator.
2. Only iterables can be looped over with a for...of.
3. Only iterables can be expanded via the spread operator into an array.
4. To write a function that returns an IterableIterator, you can implement a return value of the form { next() {...}, [Symbol.iterator]() {...} }, making it an iterator as well as an iterable.
5. Generators can produce iterators, and the result of a generator is an IterableIterator.
6. You know the relationship between yield in a generator and next() in an iterator.
7. By combining higher-order functions with iterators and leveraging the iteration protocol, you can build forEach, map, filter, etc. that interact seamlessly with language features.

[Code 2-17] uses a simplified form to convey crucial ideas for this
chapter. If you paste this directly into an editor, you may get duplicate
type or inference errors. The official iterator interface in TypeScript
can be found by referring to the lib.es2015.iterable.d.ts file.
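To jog your memory on item 4 of the checklist, here is a minimal hand-written IterableIterator (the range name is just an example), built from the { next() {...}, [Symbol.iterator]() {...} } form:

```typescript
// range produces numbers from 0 up to (but not including) end,
// as an object that is both an iterator and an iterable.
function range(end: number): IterableIterator<number> {
  let i = 0;
  return {
    next() {
      return i < end
        ? { done: false, value: i++ }
        : { done: true, value: undefined };
    },
    [Symbol.iterator]() {
      return this; // iterable: returns itself as the iterator
    },
  };
}

console.log([...range(3)]); // [0, 1, 2]  (spread works because it is iterable)
```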

Functional Higher-Order Functions with a Type System

The higher-order functional helpers utilizing the iterator pattern are built around the iterable data structure, so we can call them iterable helper functions. This section explores how to apply a type system to these iterable helper functions.

forEach and Its Type

The forEach function processes each element of a given iterable with a specified function. Below is an implementation using generics:

[Code 2-18] A forEach Function and Its Type

1 function forEach<A>(f: (a: A) => void, iterable: Iterable<A>): void {
2   for (const a of iterable) {
3     f(a);
4   }
5 }
6
7 const array = [1, 2, 3];
8 forEach(a => console.log(a.toFixed(2)), array); // [a: number]
9 // 1.00
10 // 2.00
11 // 3.00

Let’s break down the type definition and runtime flow of [Code 2-18].
This approach underscores the interconnectedness of iterables, higher-order
functions, first-class functions, and generics:
1. We write <A> next to forEach to declare that the function will use
generic type A.
2. We use A to define the type of the function f as (a: A) => void.
3. The function parameter a in f is typed as A.
4. We define iterable as Iterable<A>.
5. Conceptually, we can say “We declared <A> and used A for both a: A
and Iterable<A>, ensuring that the element type of iterable is the
same as the function parameter type of f.”
6. Because iterable is Iterable<A>, a of for (const a of iterable)
is typed as A.
7. In the example, we passed array: number[], so Iterable<A>
becomes Iterable<number>—thus f’s parameter a is number.
8. Thanks to generics, the function parameter a is correctly inferred as
number, making toFixed(2) safe to call.

map and Its Type

The type of the map function is as follows.

[Code 2-19] A Generator-Based map Function and Its Type

1 function* map<A, B>(f: (a: A) => B, iterable: Iterable<A>): IterableIterator<B> {
2   for (const a of iterable) {
3     yield f(a);
4   }
5 }
6
7 const array = ['1', '2', '3'];
8 const mapped = map(a => parseInt(a), array); // [a: string]
9 // [const mapped: IterableIterator<number>]
10 const array2: number[] = [...mapped];
11 console.log(array2);
12 // [1, 2, 3]
13
14 const [head] = map(a => a.toUpperCase(), ['a', 'b', 'c']);
15 console.log(head); // [head: string]
16 // A

Here’s the breakdown:

1. We create generic types A and B by writing the map<A, B> function.
and produces an output of type B, along with an Iterable<A>,
returning an IterableIterator<B>.
3. In the first example, because the input array is Iterable<string>,
TypeScript infers A as string.
4. The arrow function a => parseInt(a) returns a number, so B becomes
number. Hence map(a => parseInt(a), array) is
IterableIterator<number>.
5. Consequently, mapped is inferred as IterableIterator<number>, and
array2, which is the result of spreading mapped into an array, is of type
number[].
6. In the second case with [head], the element type is inferred as string.

By properly utilizing TypeScript’s type system, you can implement higher-order functions with generics flexibly, ensuring type safety.

filter and Its Type

filter can be described as a function that takes an Iterable<A> and returns a filtered IterableIterator<A>.

[Code 2-20] A Generator-Based filter Function and Its Type

1 function* filter<A>(f: (a: A) => boolean, iterable: Iterable<A>): IterableIterator<A> {
2   for (const a of iterable) {
3     if (f(a)) {
4       yield a;
5     }
6   }
7 }
8
9 const array = [1, 2, 3, 4];
10 const filtered = filter(a => a % 2 === 0, array); // [a: number]
11
12 const array2: number[] = [...filtered]; // [const filtered: IterableIterator<number>]
13 console.log(array2);
14 // [2, 4]

We’ll skip an in-depth breakdown here, as it follows the same flow as the map explanation. I recommend writing the code yourself, paying attention to how the types are applied. As you practice alongside the examples, the explanations will gradually become more concise, focusing on each example’s key points to save the reader’s time while providing richer insights.

filter uses a generic type A. The type of the argument a in the function f and the type of the elements in the iterable are both set to A. The type of the elements in the returned IterableIterator is also set to A because the filter function does not modify the elements themselves, but only filters them.

reduce and Its Type

Finally, let’s write a concise reduce function with type declarations:

[Code 2-21] reduce Function and Its Type

1 function reduce<A, Acc>(
2   f: (acc: Acc, a: A) => Acc, acc: Acc, iterable: Iterable<A>
3 ): Acc {
4   for (const a of iterable) {
5     acc = f(acc, a);
6   }
7   return acc;
8 }
9
10 const array = [1, 2, 3];
11 const sum = reduce((acc, a) => acc + a, 0, array);
12 console.log(sum); // [const sum: number]
13 // 6
14
15 const strings = ['a', 'b', 'c'];
16 const abc = reduce((acc, a) => `${acc}${a}`, '', strings);
17 console.log(abc); // [const abc: string]
18 // abc

reduce takes a function (acc: Acc, a: A) => Acc as its parameter, along with an initial value acc: Acc and an Iterable<A>, returning a final Acc. Let’s break that down:

1. We declare generic types A and Acc, using reduce<A, Acc>.
2. We pass an initial value acc: Acc and an Iterable<A> whose elements are A.
3. We pass a function f(acc, a) => newAcc that takes an accumulated value of type Acc and the current value of type A, returning a new accumulated value of type Acc.
4. The function iterates over the elements, calling f(acc, a) to update acc.
5. Finally, reduce returns the accumulated acc of type Acc.

Defining the function from a type perspective enables clearer and more
accurate communication among developers.

reduce Overloads

In JavaScript, Array.prototype.reduce allows omitting the initial value. If there’s no initial value, the first element becomes the accumulator, but if the array is empty, it throws an error. Our iterable-based reduce aims for the same functionality:

If an initial value is provided, we have three parameters.
If the initial value is omitted, we only take f and iterable, with the first element used as the accumulator.
If the initial value is omitted and the iterable is empty, it can’t return a meaningful result, so it throws an error.

We handle these cases with function overloading. Function overloading (or method overloading) is where you define multiple signatures for the same function name, but provide only one implementation. That way, a single function name can accept different types of parameters.

[Code 2-22] reduce(f, iterable);

1 function baseReduce<A, Acc>(
2   f: (acc: Acc, a: A) => Acc, acc: Acc, iterator: Iterator<A>
3 ): Acc {
4   while (true) {
5     const { done, value } = iterator.next();
6     if (done) break;
7     acc = f(acc, value);
8   }
9   return acc;
10 }
11
12 // (1)
13 function reduce<A, Acc>(
14   f: (acc: Acc, a: A) => Acc, acc: Acc, iterable: Iterable<A>
15 ): Acc;
16 // (2)
17 function reduce<A, Acc>(
18   f: (a: A, b: A) => Acc, iterable: Iterable<A>
19 ): Acc;
20 function reduce<A, Acc>(
21   f: (a: Acc | A, b: A) => Acc,
22   accOrIterable: Acc | Iterable<A>,
23   iterable?: Iterable<A>
24 ): Acc {
25   if (iterable === undefined) {
26     const iterator = (accOrIterable as Iterable<A>)[Symbol.iterator]();
27     const { done, value: acc } = iterator.next();
28     if (done) throw new TypeError("'reduce' of empty iterable with no initial value");
29     return baseReduce(f, acc, iterator) as Acc;
30   } else {
31     return baseReduce(f, accOrIterable as Acc, iterable[Symbol.iterator]());
32   }
33 }

The explanation of the function signatures (1, 2) and the implementation (3, 4) is as follows.

1. reduce<A, Acc>(f: (acc: Acc, a: A) => Acc, acc: Acc, iterable: Iterable<A>): Acc

Declares generics A and Acc.
Takes an initial value acc: Acc and an Iterable<A>.
Takes (acc: Acc, a: A) => Acc as a parameter and calculates the accumulated value.
Finally returns Acc.

2. reduce<A, Acc>(f: (a: A, b: A) => Acc, iterable: Iterable<A>): Acc

Declares generics A and Acc.
Takes Iterable<A> and (a: A, b: A) => Acc as parameters, without an initial value.
Calculates the accumulated value using the first element of the iterable.
Finally returns Acc.
3. No iterable provided (iterable === undefined)

The second parameter, accOrIterable, is an iterable.
Calls [Symbol.iterator]() to get an iterator and uses iterator.next() to retrieve the first element.
If the iterable is empty, throws an error. Otherwise, executes the baseReduce function.

4. Taking three parameters (else)

The second argument accOrIterable is indeed the initial value, and iterable is an iterable.
Calls baseReduce to update the accumulated value by iterating over the elements of the iterator and applying f(acc, value).
Finally, returns the accumulated value.

Below is an example of this reduce usage:

[Code 2-22a] Using reduce

1 // First reduce case: with an initial value
2 const array = [1, 2, 3];
3 const sum = reduce((acc, a) => acc + a, 0, array);
4 console.log(sum); // [const sum: number]
5 // 6
6
7 const strings = ['a', 'b', 'c'];
8 const abc = reduce((acc, a) => `${acc}${a}`, '', strings);
9 console.log(abc); // [const abc: string]
10 // abc
11
12 // Second reduce case: no initial value
13 const array2 = [1, 2, 3];
14 const sum2 = reduce((a, b) => a + b, array2);
15 console.log(sum2); // [const sum2: number]
16 // 6
17
18 const words = ['hello', 'beautiful', 'world'];
19 const sentence = reduce((a, b) => `${a} ${b}`, words);
20 console.log(sentence); // [const sentence: string]
21 // hello beautiful world
22
23 const array3 = [3, 2, 1];
24 const str = reduce((a, b) => `${a}${b}`, array3);
25 console.log(str); // [const str: string]
26 // 321
In examples like hello beautiful world, omitting the initial value can make the code more concise and improve readability.

Error Handling in reduce

When JavaScript’s reduce is called without an initial value, it begins iteration using the array’s first element as the initial value. If the array is empty, there’s no element to serve as the initial value, so it throws a TypeError and stops execution. Whether it’s Array.prototype.reduce, an Iterable Helpers–based reduce, or the reduce we’ve implemented here, all are designed to propagate an error when encountering an empty array or empty iterable in a no-initial-value scenario.

So how should we view and manage such error handling?

1. Explicitly provide an initial value.

This is the simplest solution.
Even if the array is empty, reduce proceeds without an error, returning a user-defined initial value (for example, 0 or '').
This method makes it clear what to return when encountering an empty array.

2. However, an initial value alone does not solve every scenario.

For example, in something like the “hello beautiful world” case in [Code 2-22a], adding an initial value means your helper function (f) would need an if statement to handle a “blank” first iteration.
In that case, every iteration might pass through an additional conditional, complicating your code.
Ultimately, whether providing an initial value is appropriate depends on how the overall logic and your helper function operate.

3. Check for an empty array in advance and early-return a default value.

If empty arrays are expected and a default value makes sense, this method can be used.
For example, something like arr => arr.length === 0 ? '' : arr.reduce((a, b) => `${a} ${b}`).
This works with data structures such as Array or Set, where the length is known beforehand.

4. Use try/catch to handle the error.

If an empty array (or empty iterable) isn’t part of your normal operating range, and you have no reason to use an initial value, you can handle it with try/catch.
Or you can definitively decide “This is indeed an error,” in which case you let it throw, expecting the error to be caught somewhere if needed.
In short, the developer should know it’s “out of normal range” and must be prepared to detect the error at some final stage.

5. If the iterator is lazy:

Before passing a lazy iterator to reduce, you can either convert it to an array to check its length beforehand,
Or defer evaluation until reduce processes it, handling or throwing an error if the iterator is empty.
The first approach allows you to “preemptively check for emptiness and return a default value”.
The second scenario assumes that “if an empty iterator is passed to reduce and triggers an error, you either let it throw or handle it with try/catch.”
Ultimately, using reduce without an initial value on a lazy iterator requires deciding whether to handle errors explicitly or assume such a scenario won’t occur, and choosing the appropriate approach accordingly.
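To make options 1, 3, and 4 above concrete, here is a small sketch using the built-in Array.prototype.reduce, which behaves the same way when the initial value is missing (the join helper name is illustrative):

```typescript
const empty: number[] = [];

// Option 1: an explicit initial value makes empty input safe.
console.log(empty.reduce((acc, n) => acc + n, 0)); // 0

// Option 3: check for emptiness and early-return a default value.
const join = (arr: string[]): string =>
  arr.length === 0 ? '' : arr.reduce((a, b) => `${a} ${b}`);
console.log(join([])); // ''
console.log(join(['hello', 'world'])); // hello world

// Option 4: treat it as an error and catch it where appropriate.
try {
  empty.reduce((a, b) => a + b); // no initial value on an empty array
} catch (e) {
  console.log(e instanceof TypeError); // true
}
```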

In the chapters up to Chapter 4 of this book, we present various perspectives on error handling and on handling values in optional scenarios. Revisiting this discussion after reading those chapters will likely be even more beneficial.

Function Signatures and Nested Type Inference

So far, we’ve only run these typed higher-order functions individually. But well-typed functions maintain strong inference even when nested. Since each higher-order function can infer the callback parameter’s type, developers get full type safety with minimal manual annotations.

[Code 2-23] map + filter + forEach

1 function* naturals(end = Infinity): IterableIterator<number> {}
2 function forEach<A>(f: (a: A) => void, iterable: Iterable<A>): void {}
3 function* map<A, B>(f: (a: A) => B, iterable: Iterable<A>): IterableIterator<B> {}
4 function* filter<A>(f: (a: A) => boolean, iterable: Iterable<A>): IterableIterator<A> {}
5
6 function printNumber(n: number) {
7   return console.log(n);
8 }
9
10 forEach(printNumber,
11   map(n => n * 10, // [n: number]
12     filter(n => n % 2 === 1, // [n: number]
13       naturals(5))));
14
15 forEach(printNumber,
16   filter(n => n % 2 === 1, // [n: number]
17     map(text => parseInt(text), // [text: string]
18       map(el => el.textContent!, // [el: HTMLDivElement] [Node.textContent: string | null]
19         document.querySelectorAll('div')))));

At the top, we’ve kept function signatures minimal to enable type inference. In these examples, you can see how parameters are inferred (comments like [n: number]), ensuring that printNumber receives a number.

We’ve now studied in detail how to apply a type system to functional higher-order functions and iterable helpers. By applying these concepts, you can build even safer higher-order functions. A well-structured set of such functions leads to safer, more efficient, flexible, and productive functional programming.
2.3 Multiparadigm Languages and Metaprogramming – From LISP
We just examined how to assign types to higher-order functions
frequently used in functional programming, thereby implementing functions
that are both highly polymorphic and type-safe. In this chapter, we will
explore how to combine these higher-order functions with classes and apply
the iterable pattern to form a structure that allows for more readable
processing of data streams. This pattern is already widely used in many
modern languages’ standard libraries and greatly assists developers in
expressing data flows clearly and intuitively.

The examples in this chapter demonstrate how to combine various language features—generics, first-class functions, classes, and iterable protocols—to build flexible and highly extensible abstractions. By doing so, you can achieve the kind of code expressiveness associated with metaprogramming, implement runtime modifications of functionality, and gain an experience that feels as though you’ve extended the language itself.

Here, metaprogramming refers to a programming technique where programs treat themselves or other programs as data, enabling them to analyze, modify, generate, or execute code. This approach of dynamically manipulating code as data reaches its zenith in traditional LISP-family languages, where developers can directly redefine code structures and evaluation processes, while freely manipulating language syntax through macros.

While TypeScript and other modern multi-paradigm languages don’t offer the same direct metaprogramming capabilities as LISP-family languages, we can strategically combine the language features we’ve explored to achieve similar metaprogramming benefits in practice. In this chapter, we’ll examine how multi-paradigm languages can accomplish the rich abstraction and dynamic transformation that were once thought exclusive to LISP-family languages, along with the strategies that make this possible.
Pipe Operator

Before that, let’s take a moment to travel to the future—or perhaps the
past. As shown in the following example from [Code 2-23], code that needs
to be read from bottom-right to top-left can be unfamiliar and therefore less
readable. Since LISP excels in lazy evaluation and metaprogramming,
developers can create their own pipe function to address this issue.
Furthermore, some languages already provide built-in pipe operators to
effectively improve readability. The second example demonstrates the use
of a pipe operator.

[Code 2-24] Pipe Operator

1 // From [Code 2-23]...
2 forEach(printNumber,
3   map(n => n * 10,
4     filter(n => n % 2 === 1,
5       naturals(5))));
6 // 10
7 // 30
8 // 50
9
10 // Pipe Operator
11 naturals(5)
12 |> filter(n => n % 2 === 1, %)
13 |> map(n => n * 10, %)
14 |> forEach(printNumber, %)
15 // 10
16 // 30
17 // 50

We chose to design our higher-order functions with the parameter order map(f, iterable) rather than map(iterable, f), following the conventions of traditional functional programming languages. This order improves readability in three key cases: when nesting functions, using the pipe operator, and supporting currying. Had we adopted the reverse order map(iterable, f), the code would need to be written as follows:

[Code 2-25] What if the parameter order was reversed?

1 forEach(map(filter(naturals(5), n => n % 2 === 1), n => n * 10), printNumber);
2
3 naturals(5)
4 |> filter(%, n => n % 2 === 1)
5 |> map(%, n => n * 10)
6 |> forEach(%, printNumber)

Clearly, the code in [Code 2-24] is more readable. The pipeline code is
not bad, but having % in the first argument of those higher-order functions is
a bit visually distracting.
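Until a pipe operator lands in JavaScript, a userland pipe function can approximate it today. The sketch below is illustrative (the pipe name and its two overloads are assumptions; a real version would add more overloads for longer chains); it threads a value through unary functions from left to right:

```typescript
// pipe(a, f1, f2) === f2(f1(a))
function pipe<A, B>(a: A, f1: (a: A) => B): B;
function pipe<A, B, C>(a: A, f1: (a: A) => B, f2: (b: B) => C): C;
function pipe(a: any, ...fns: Array<(x: any) => any>): any {
  return fns.reduce((acc, f) => f(acc), a);
}

const result = pipe(
  [1, 2, 3, 4, 5],
  ns => ns.filter(n => n % 2 === 1), // [1, 3, 5]
  ns => ns.map(n => n * 10)          // [10, 30, 50]
);
console.log(result); // [10, 30, 50]
```

The reading order now matches the pipeline style of [Code 2-24]: the source comes first and the transformations follow.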

Combining Classes, Higher-Order Functions, Iterators, and the Type System

We briefly looked at the possibility of a Pipe Operator being introduced into JavaScript, but there’s no need to wait around for it. We can solve this readability issue right now by appropriately combining classes from the object-oriented paradigm, iterables, functional higher-order functions, and the type system.

Extending Iterable with a Generic Class

First, we created a generic class, as shown in [Code 2-26], to extend Iterable. We defined a generic class FxIterable<A>, which holds an iterable: Iterable<A> internally.

[Code 2-26] FxIterable<A>

1 class FxIterable<A> {
2   private iterable: Iterable<A>;
3
4   constructor(iterable: Iterable<A>) {
5     this.iterable = iterable;
6   }
7 }

In [Code 2-27], we specify an access modifier (private) directly in the constructor parameter, such as private iterable: Iterable<A>. By doing this, you can omit the code that explicitly defines the field and assigns it a value; the iterable field will be automatically created within the class. This approach helps make your class definitions more concise.
[Code 2-27] FxIterable<A>

1 class FxIterable<A> {
2   constructor(private iterable: Iterable<A>) {}
3 }

The type parameter A of FxIterable<A> is determined at the time you instantiate the class, based on the type of the iterable argument. This works similarly to how type parameters of a generic function are determined by the function arguments at call time.
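The same inference can be seen in miniature with a hypothetical Box<A> class (the name is illustrative): A is fixed by the constructor argument at instantiation, just as a generic function’s type parameter is fixed by its arguments:

```typescript
// A is inferred from the value passed to the constructor.
class Box<A> {
  constructor(private value: A) {}
  get(): A {
    return this.value;
  }
}

const numBox = new Box(5);    // [numBox: Box<number>]
const strBox = new Box('Hi'); // [strBox: Box<string>]
console.log(numBox.get() + 1);           // 6
console.log(strBox.get().toUpperCase()); // HI
```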

Now we can add various higher-order functions as methods to this generic class.

Adding a map Method to FxIterable<A>

Next, let’s add a map method to the FxIterable class, and implement
the method using the previously written map function.

[Code 2-28] Adding a map Method to FxIterable<A>

1 class FxIterable<A> {
2   constructor(private iterable: Iterable<A>) {}
3
4   map<B>(f: (a: A) => B): FxIterable<B> {
5     return new FxIterable(map(a => f(a), this.iterable));
6   }
7 }
8
9 const mapped = new FxIterable(['a', 'b'])
10   .map(a => a.toUpperCase())
11   .map(b => b + b);
12
13 // [const mapped: FxIterable<string>]
14 // [a: string]
15 // [b: string]

The map method creates a new iterable iterator by applying map(f) to this.iterable, then returns FxIterable<B>. An FxIterable instance can call map repeatedly in a chain-like style. We can now read the code from top to bottom. The implementation leverages generics effectively, resulting in proper type inference: mapped becomes an instance of FxIterable<string>, and the type system correctly identifies a as a string.

Expressing fx(iterable: Iterable<A>): FxIterable<A> More Concisely

We can make the part new FxIterable(['a', 'b']) a bit more concise. Let’s add a helper function fx that easily creates an FxIterable instance.

[Code 2-29] Adding a fx Helper Function

1 function fx<A>(iterable: Iterable<A>): FxIterable<A> {
2   return new FxIterable(iterable);
3 }
4
5 const mapped2 = fx(['a', 'b'])
6   .map(a => a.toUpperCase())
7   .map(b => b + b);
8
9 // [const mapped2: FxIterable<string>]

This allows for even more concise code with improved readability.

Creating filter and forEach Methods

Let’s now add filter and forEach methods.

[Code 2-30] Adding filter and forEach to FxIterable<A>

1 class FxIterable<A> {
2   constructor(private iterable: Iterable<A>) {}
3
4   map<B>(f: (a: A) => B): FxIterable<B> {
5     return new FxIterable(map(f, this.iterable));
6   }
7
8   filter(f: (a: A) => boolean): FxIterable<A> {
9     return new FxIterable(filter(f, this.iterable));
10   }
11
12   forEach(f: (a: A) => void): void {
13     return forEach(f, this.iterable);
14   }
15 }

You can use the fx function defined in [Code 2-29] to make the
internal code of FxIterable more concise.

[Code 2-30a] Replacing new FxIterable(…) with fx(…)

1 class FxIterable<A> {
2   constructor(private iterable: Iterable<A>) {}
3
4   map<B>(f: (a: A) => B): FxIterable<B> {
5     return fx(map(f, this.iterable));
6   }
7
8   filter(f: (a: A) => boolean): FxIterable<A> {
9     return fx(filter(f, this.iterable));
10   }
11
12   forEach(f: (a: A) => void): void {
13     return forEach(f, this.iterable);
14   }
15 }

Now we can iterate with forEach to produce output effects.

[Code 2-31] map, forEach

1 fx(['a', 'b'])
2   .map(a => a.toUpperCase())
3   .map(a => a + a)
4   .forEach(a => console.log(a));
5 // AA
6 // BB

Let’s rewrite [Code 2-24] using fx.

[Code 2-32] naturals, filter, map, forEach

1 // Nested functions
2 forEach(printNumber,
3   map(n => n * 10,
4     filter(n => n % 2 === 1,
5       naturals(5))));
6
7 // Pipe operator
8 naturals(5)
9 |> filter(n => n % 2 === 1, %)
10 |> map(n => n * 10, %)
11 |> forEach(printNumber, %)
12
13 // Chaining
14 fx(naturals(5))
15   .filter(n => n % 2 === 1)
16   .map(n => n * 10)
17   .forEach(printNumber);
18 // 10
19 // 30
20 // 50

We have now reached quite a complete form. From these examples, while the nested function and pipe operator approaches are sufficiently readable, the chaining approach is very similar to how modern languages operate and is especially familiar and readable. The chaining approach allows developers to intuitively express data transformations through sequential method calls, making each step clear and the code flow easier to follow. This style is especially reminiscent of JavaScript’s array method chaining or Java’s Stream API, which many developers already find familiar. As mentioned, this style is adopted by many multiparadigm languages.

Also, with chaining, the IDE can offer hints about which methods are
available next, making the development process more convenient.
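This chaining style is also easy to extend: any new method added to the class immediately becomes available at every step of a chain. As a sketch (toArray is a hypothetical addition, not a method defined so far; map is inlined with a generator so the example stands alone), a terminal method that materializes the result could look like this:

```typescript
class FxIterable<A> {
  constructor(private iterable: Iterable<A>) {}

  map<B>(f: (a: A) => B): FxIterable<B> {
    const iterable = this.iterable;
    return new FxIterable((function* () {
      for (const a of iterable) yield f(a); // lazy, same as the map helper
    })());
  }

  // Hypothetical terminal method: materialize the chain into an array.
  toArray(): A[] {
    return [...this.iterable];
  }
}

const arr = new FxIterable([1, 2, 3])
  .map(n => n * 2)
  .toArray(); // [arr: number[]]
console.log(arr); // [2, 4, 6]
```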

Creating a reduce Method

Now we’ll add a reduce method.

[Code 2-33] Adding a reduce Method to FxIterable<A>

1 class FxIterable<A> {
2   constructor(private iterable: Iterable<A>) {
3   }
4
5   // ... omitted methods ...
6
7   reduce<Acc>(f: (acc: Acc, a: A) => Acc, acc: Acc): Acc {
8     return reduce(f, acc, this.iterable);
9   }
10 }
It’s straightforward to add. However, as with the function we implemented previously, you need to support two different usage patterns by overloading the method. In TypeScript, overloading a method works just like function overloading: it’s a technique where multiple function or method signatures are defined, while providing only a single implementation.

[Code 2-34] Overloading the reduce Method

1 class FxIterable<A> {
2   constructor(private iterable: Iterable<A>) {}
3
4   // ... omitted methods ...
5
6   reduce<Acc>(f: (acc: Acc, a: A) => Acc, acc: Acc): Acc; // (1)
7   reduce<Acc>(f: (a: A, b: A) => Acc): Acc; // (2)
8   reduce<Acc>(f: (a: Acc | A, b: A) => Acc, acc?: Acc): Acc {
9     return acc === undefined
10       ? reduce(f, this.iterable) // (3)
11       : reduce(f, acc, this.iterable); // (4)
12   }
13 }

Here’s what’s going on in the signatures (1, 2) and the implementation (3, 4):

1. reduce<Acc>(f: (acc: Acc, a: A) => Acc, acc: Acc): Acc;

This signature is used when reduce is called with an initial value acc.
The generic type Acc is used to define the type of the accumulator.
The function f takes the accumulator acc and each element a of the iterable, returning a new accumulator.
A is the type of the elements in private Iterable<A>.

2. reduce<Acc>(f: (a: A, b: A) => Acc): Acc;

This signature is for when reduce is called without an initial value.
In that case, the first element of the iterable is used as the initial value.
A is the type of the elements in private Iterable<A>.
The generic type Acc defines the type of the accumulator.

3. When acc is undefined

reduce(f, this.iterable) is called.
In that scenario, the first element of the iterable is used as the initial accumulator value inside the reduce function.

4. When acc is not undefined

reduce(f, acc, this.iterable) is called.
The specified acc is used as the initial accumulator.
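For readers without the earlier section at hand, here is a standalone sketch of a reduce function with both conventions (my own reconstruction for illustration; the book’s actual reduce implementation may differ in details):

```typescript
// Standalone sketch of the two reduce conventions (my own reconstruction;
// the reduce function from the earlier section may differ in details).
function reduce<A, Acc>(f: (acc: Acc, a: A) => Acc, acc: Acc, iterable: Iterable<A>): Acc;
function reduce<A>(f: (a: A, b: A) => A, iterable: Iterable<A>): A;
function reduce(f: any, accOrIterable: any, maybeIterable?: any) {
  let acc: any;
  let iterable: Iterable<any>;
  if (maybeIterable === undefined) {
    // No initial value: take the first element as the seed.
    const iterator = accOrIterable[Symbol.iterator]();
    const head = iterator.next();
    if (head.done) throw new TypeError("reduce of empty iterable with no initial value");
    acc = head.value;
    iterable = { [Symbol.iterator]: () => iterator }; // the remaining elements
  } else {
    acc = accOrIterable;
    iterable = maybeIterable;
  }
  for (const a of iterable) acc = f(acc, a);
  return acc;
}

const sum1 = reduce((a: number, b: number) => a + b, [1, 2, 3]);     // no initial value
const sum2 = reduce((a: number, b: number) => a + b, 10, [1, 2, 3]); // with initial value
console.log(sum1, sum2); // 6 16
```

Note the edge case in the no-initial-value branch: with an empty iterable there is nothing to seed the accumulator, so the sketch throws, which mirrors how Array.prototype.reduce behaves.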

With the reduce method, you can now flexibly use both the form with
an initial value and the form without one.

[Code 2-35] Using the reduce Method

1 // When there is no initial value
2 const num = fx(naturals(5)) // FxIterable<number> (1, 2, 3, 4, 5)
3 .filter(n => n % 2 === 1) // FxIterable<number> (1, 3, 5)
4 .map(n => n * 10) // FxIterable<number> (10, 30, 50)
5 .reduce((a, b) => a + b); // [a: number] [b: number]
6
7 console.log(num); // [num: number]
8 // 90
9
10 // When there is an initial value
11 const num2 = fx(naturals(5)) // FxIterable<number> (1, 2, 3, 4, 5)
12 .filter(n => n % 2 === 1) // FxIterable<number> (1, 3, 5)
13 .map(n => n * 10) // FxIterable<number> (10, 30, 50)
14 .reduce((a, b) => a + b, 10); // [a: number] [b: number]
15
16 console.log(num2); // [num2: number]
17 // 100

With map, filter, reduce, and forEach methods implemented, we can now write code that is more readable, safer, and easier to maintain through chaining.

Learning from LISP (Clojure) – Code is Data, Data is Code

Let’s talk about LISP for a moment. LISP holds a very important place
in the history of programming languages, thanks to its unique syntax and
philosophy. This language has the characteristic that “code is data, and data
is code,” allowing the syntax of a programming language to be represented
and manipulated as a data structure. Consequently, programs can
dynamically generate new code and execute it, making advanced techniques
such as metaprogramming easy to implement. This greatly enhances the
flexibility and extensibility of code.

In this section, using the LISP-family language Clojure as an example, we will explain the basic concepts, macros, and metaprogramming of LISP,
and then apply these ideas to TypeScript to broaden our perspective on
multiparadigm languages.

Clojure

Clojure is a functional programming language in the LISP family, developed by Rich Hickey in 2007. It runs on the JVM, combining the
features of a modern LISP language with access to Java’s extensive library
ecosystem. Clojure emphasizes immutability and first-class functions, and
offers powerful features for concurrent programming. It also treats code and
data as identical, enabling metaprogramming and increasing code flexibility
and extensibility.

Getting Started with Clojure – S-expressions

In LISP, an S-expression (Symbolic Expression) refers to a list-based syntax representation. This approach enables handling both code and data
with the same (list-based) structure, which means the code itself can be
handled as data. For instance, (+ 1 2) in LISP is simultaneously code that
adds 1 and 2, and also a list-structured piece of data.
[Code 2-36] Code to Be Executed as a List

1 (+ 1 2)

[Code 2-36] is an expression that adds two numbers, but it can also be
interpreted as a list structure, as follows:

First element: the operator (function) +
Remaining elements: operands 1 and 2

In LISP-family languages, calling a function takes the form of a list, where the first element is the function, and the rest are the arguments to that
function.

To illustrate, let’s represent this concept in TypeScript in a simplified manner.

[Code 2-37] A List Is a Value

1 [add, 1, 2]

The array contains an add function for summing two numbers, along with the operand elements 1 and 2. This construct, [add, 1, 2], exists fundamentally as an array data structure. Given an evaluation function, this data structure can be transformed into executable code and evaluated to produce a computational result.

[Code 2-37a] Evaluating a List

1 type Evaluatable<A, B> = [(...args: A[]) => B, ...A[]];
2
3 function evaluation<A, B>(expr: Evaluatable<A, B>) {
4 const [fn, ...args] = expr;
5 return fn(...args);
6 }
7
8 const add = (a: number, b: number) => a + b;
9 const result: number = evaluation([add, 1, 2]);
10 console.log(result); // 3
[Code 2-37a] demonstrates part of LISP’s “code as data” concept using
TypeScript by representing function calls as list-structured data and then
evaluating them.

1. Evaluatable<A, B> Type Definition

This type defines a structure where the first element is a function type ((...args: A[]) => B), followed by values that serve as
the function’s arguments.
For example, [add, 1, 2] clearly expresses an array containing
the add function and its arguments 1 and 2 as a typed structure.

2. evaluation Function

The evaluation function takes a value of type Evaluatable<A, B>. Using destructuring assignment, it extracts the first element as
the function (fn) and the remaining elements as an array of
arguments (args).
It then calls fn(...args) and returns the result, effectively
“evaluating” the list-structured “code” represented as data.

3. Usage Example

The add function simply adds two numbers together.
The array [add, 1, 2] is a data structure that means “call the add function with arguments 1 and 2.”
When we evaluate this with evaluation([add, 1, 2]), it
internally executes add(1, 2) and returns the result 3.

This example allows us to execute code represented as data (in array form) through the evaluation function.

It’s worth noting that this TypeScript example only works with code as
data at runtime. In contrast, LISP can manipulate code as data during
compilation as well, giving it more powerful capabilities to transform the
code itself before it’s executed at runtime. We’ll explore this in more detail
later.
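To push the analogy one step further, the evaluation function can be made recursive so that nested list-shaped “expressions” are evaluated innermost-first, closer to how LISP evaluates nested S-expressions. This is my own extension of [Code 2-37a], with types loosened for brevity:

```typescript
// A nested "expression" is a list whose head is a function.
type Expr = [(...args: any[]) => any, ...any[]];

const isExpr = (x: unknown): x is Expr =>
  Array.isArray(x) && typeof x[0] === "function";

// Evaluate arguments first (recursively), then apply the head function.
function evaluate(expr: Expr): any {
  const [fn, ...args] = expr;
  return fn(...args.map(a => (isExpr(a) ? evaluate(a) : a)));
}

const add = (a: number, b: number) => a + b;
const mul = (a: number, b: number) => a * b;

// The equivalent of (add 1 (mul 2 3)) in LISP notation:
const result = evaluate([add, 1, [mul, 2, 3]]);
console.log(result); // 7
```

The inner list [mul, 2, 3] is evaluated to 6 before add is applied, just as an inner S-expression would be.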
When map Executes in Clojure

The next code snippet is an example that returns the result of adding
10 to each element of a list:

[Code 2-38] map

1 (map #(+ % 10) [1 2 3 4])

Here’s how this code works:

First element: the function map
Second element: an anonymous function #(+ % 10) (which adds 10 to
the current element)
Third element: the vector [1 2 3 4] (in Clojure, [] represents a
vector, while () represents a list)

The map function returns the result of applying the given function #(+
% 10) to each element of the vector. When evaluated, the result becomes a
lazy sequence in list form: (11 12 13 14). Since this sequence is lazy, the
actual evaluation only completes when the values are needed somewhere.

The expression #(+ % 10) is expanded by a reader macro into the anonymous function (fn [x] (+ x 10)). In Clojure, function definitions
are also expressed as lists, which means this function definition itself can be
treated as “both code and a data structure”. A reader macro is a feature in
languages like Clojure that substitutes certain symbols or patterns with
predefined forms of code during the source code reading (Reader) stage.

Pulling Out the First Two Values

The next example shows using let and destructuring to retrieve the
first two values from the result of map and print them.
[Code 2-39] let and Destructuring

1 (let [[first second] (map #(+ % 10) [1 2 3 4])]
2 (println first second))
3 ;; 11 12

The expression (map #(+ % 10) [1 2 3 4]) generates a lazy sequence in the form of (11 12 13 14).
Within the let binding, [first second] performs destructuring to extract only the first two elements (11 and 12), evaluating just what’s needed.
Since map is lazy by design, elements are computed only when they’re
actually required.
The println function outputs the values of first and second.
The result displayed is 11 12.
Note that ;; denotes comments in Clojure syntax.

In LISP-family languages, code is represented as lists, and those lists remain just data until evaluated. When evaluation begins, these lists turn
into actual function calls or logic. For instance, the anonymous function (fn
[x] (+ x 10)) generated by #(+ % 10) is both unevaluated “syntax (code)”
and a “value” structured as a list.

When this value combines with another list structure like (map f
list), Clojure delays evaluation until necessary. Only when evaluation
becomes required does Clojure complete and execute the nested
combinations of lists as actual logic. This ability to handle code and data in
identical forms, along with gradual evaluation when needed, represents one
of the fundamental strengths and distinguishing characteristics of LISP-
family languages.

Making User-Created Code and Classes into Lists in a Multiparadigm Language

I’d like to extend our FxIterable class to match the capabilities of [Code 2-39] in Clojure, achieving the same time complexity (with lazy
evaluation support) and expressive power. What changes would we need to
make to FxIterable to enable this? One key question is: How can we
implement destructuring similar to Clojure’s let with [first second]
pattern? Perhaps we should create a method that converts FxIterable to an
Array? Let’s explore this idea by writing some code:

[Code 2-40] toArray

1 class FxIterable<A> {
2 constructor(private iterable: Iterable<A>) {}
3
4 // ... omitted methods ...
5
6 toArray(): A[] {
7 return [...this.iterable];
8 }
9 }
10
11 const [first, second] = fx([1, 2, 3, 4]).map(a => a + 10).toArray();
12 console.log(first, second); // 11 12

Here’s how this code works:

The added toArray() method converts the internal iterable into an array. It uses the spread operator in the statement return
[...this.iterable]; to transform the iterable object into a proper
array.
The fx function creates an FxIterable instance. We call the map() method to add 10 to each element, and then obtain an array via toArray().
We use destructuring assignment to bind the first and second values
from this array to the variables first and second.
The result prints 11 and 12.

We get the desired result, but we notice some drawbacks compared to [Code 2-39]. Specifically, we needed to append .toArray() after map(...),
and by calling .toArray(), we evaluate all four elements to build a
complete array. How can we fix this? The good news is that we already
have the answer! We can transform FxIterable into something more like
the lists in LISP by making one fundamental change: turning FxIterable
itself into a value that follows the iteration protocol we’ve been working
with throughout our examples.
[Code 2-41] LISP-Like Lists as Iterables

1 class FxIterable<A> {
2 constructor(private iterable: Iterable<A>) {}
3
4 [Symbol.iterator]() {
5 return this.iterable[Symbol.iterator]();
6 }
7
8 // ... omitted methods ...
9 }
10
11 const [first, second] = fx([1, 2, 3, 4]).map(a => a + 10);
12 console.log(first, second); // 11 12

The solution is surprisingly simple: we just need to make FxIterable itself a proper iterable. By implementing [Symbol.iterator]() to return an
iterator from this.iterable, we no longer need toArray() to destructure
[first, second]. Only two elements need to be evaluated (i.e., the
operation of adding 10 occurs just twice).
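We can verify this laziness directly with a small self-contained sketch: a generator-based map (in the style of the one built earlier in the book) plus a counter on the mapping function. Destructuring pulls only two elements, so the function runs only twice:

```typescript
// Generator-based map: elements are transformed only when pulled.
function* map<A, B>(f: (a: A) => B, iterable: Iterable<A>): IterableIterator<B> {
  for (const a of iterable) yield f(a);
}

let calls = 0;
const [first, second] = map(a => (calls++, a + 10), [1, 2, 3, 4]);

console.log(first, second); // 11 12
console.log(calls); // 2, because only the two destructured elements were evaluated
```

Array destructuring pulls exactly as many elements as it binds, then closes the iterator, so the remaining elements are never computed.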

LISP’s Extensibility – Macros and Metaprogramming

Returning to LISP: Let’s consider a list that hasn’t been evaluated yet
and is intended to add 10 to its elements. With LISP, you can dynamically
add any number of new features to this list before evaluation, such as
filtering out odd numbers or excluding specific elements. For example, if a
developer creates a function that removes elements based on certain logic
and places it as the first element of the list, that list then behaves like code
composed of entirely new operators and functions. Through this process,
developers can extend the language’s capabilities themselves and naturally
integrate with existing language features like let.

Here is an example of defining a reject function, adding an operation to Clojure that wasn’t available in the original language:

[Code 2-42] Applying a reject Function

1 (defn reject [pred coll]
2 (filter (complement pred) coll))
3
4 (let [[first second] (reject odd? (map #(+ % 10) [1 2 3 4 5 6]))]
5 (println first second))
6 ;; 12 14

Here’s how [Code 2-42] works:

reject uses filter and complement to keep only elements that do not
satisfy pred.
(map #(+ % 10) [1 2 3 4 5 6]) adds 10 to each, creating (11 12
13 14 15 16).
reject odd? removes odd? elements from the map result, leaving (12
14 16).
In the let form, [first second] destructures the first two elements
(12 14) and binds them to first and second.
The printed result of first and second with println is 12 14.
The ;; is a comment syntax in Clojure.

This example demonstrates how developers can define their desired logic directly as functions, seamlessly integrate them with language features, and flexibly extend the language thanks to LISP’s characteristic of delaying code evaluation.
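Clojure’s complement, used in the reject definition above, has a direct TypeScript counterpart. This small sketch is my own and is separate from the FxIterable code:

```typescript
// complement: turn a predicate into its negation.
const complement = <Args extends unknown[]>(pred: (...args: Args) => boolean) =>
  (...args: Args): boolean => !pred(...args);

const isOdd = (n: number) => n % 2 === 1;
const isEven = complement(isOdd);

const evens = [1, 2, 3, 4].filter(isEven);
console.log(evens); // [2, 4]
```

With first-class functions, this kind of predicate combinator is a one-liner in TypeScript as well.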

Macros

In LISP-family languages, macros are not mere text replacements but actual functions that accept code (in list form) as input and return code
(in list form) as output. Macros operate at compile time, transforming
code while it’s still in an “unexecuted syntax” state, allowing you to
rearrange the final code that will eventually run. This is how developers can
flexibly alter or extend the language itself, even creating new syntax if
desired.

A famous example is the unless macro:

[Code 2-43] unless

1 (defmacro unless [test body]
2 `(if (not ~test) ~body nil))
Looking at the definition of unless above, test and body are “code form parameters” passed to the macro. Unlike functions, which receive evaluated arguments, macros receive the original code forms, that is, lists that have not yet been executed. This means the unless macro receives test and body much like function arguments, but treats them as code structures (lists) rather than evaluating them to values.

For example:

[Code 2-44] Using unless

1 (unless false
2 (println "This line executes because the condition is false."))

Here, false is passed as the test argument to the unless macro from
[Code 2-43], and (println "This line executes because the
condition is false.") is passed as the body argument. At this point,
these arguments are passed to the macro in their unevaluated form as code
fragments (lists). Then, the unless macro uses these code fragments to
generate new code at compile time, like this:

[Code 2-45] The Actual Code Produced by unless

1 (if (not false)
2 (println "This line executes because the condition is false.")
3 nil)

Ultimately, the unless macro is a code transformer that takes test and
body code as input and returns new code fragments that will eventually be
executed. Since the compiler uses this returned code as the actual
executable code, developers can freely create new syntax or features that
weren’t originally provided by the language.

In summary, test and body are “code fragments” passed to the macro, and the unless macro reconstructs these fragments to generate new
code at compile time. This gives developers an easy way to create their own
language extension tools, which is one of the powerful metaprogramming
capabilities of LISP-family languages.
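TypeScript has no compile-time macros, but the delayed evaluation that unless depends on can be approximated at runtime by passing the body as a thunk (a zero-argument function). The sketch below is my own; note that the caller must wrap the body manually, which is exactly the ceremony a macro removes:

```typescript
// unless as a plain function: the body must be wrapped in a thunk
// so it is not evaluated before the test is checked.
const unless = (test: boolean, body: () => void): void => {
  if (!test) body();
};

let ran = 0;
unless(false, () => { ran++; console.log("This line executes because the condition is false."); });
unless(true, () => { ran++; console.log("This line never executes."); });
console.log(ran); // 1
```

A macro rewrites the call site so no wrapping is needed; a function can only ask the caller to delay evaluation explicitly.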
->> Macro

Let’s represent the code using reject as a pipeline. In Clojure, you can use the ->> macro to make code look like a pipeline.

[Code 2-46] Pipeline Expression

1 (let [[first second] (->> [1 2 3 4 5 6]
2 (map #(+ % 10))
3 (reject odd?))]
4 (println first second))
5 ;; 12 14

Here’s how it works:

The ->> macro takes [1 2 3 4 5 6] as its first argument and threads it into the last argument position of each subsequent form.
map #(+ % 10) adds 10 to each element of [1 2 3 4 5 6], generating
(11 12 13 14 15 16).
reject odd? removes the odd numbers from (11 12 13 14 15 16),
leaving (12 14 16).
let is used to bind the first two values of the list to [first second].
println prints [first second] values.
The result is 12 14.

This example illustrates how a pipeline macro can enhance code readability.

In Clojure, developers can directly define macros like unless or ->>, and can also create expressions using special characters and symbols. This
allows for easily adding new syntax such as ->> to the language. When
combined with S-expressions that use only parentheses without commas,
this approach enables the creation of even more elegant and concise code.
This powerful extensibility and flexibility comes from a key characteristic
of LISP-family languages: the ability to represent program syntax as data
structures that can be treated like delayed values.
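Although TypeScript cannot add an ->> macro, the same top-to-bottom reading order can be approximated with a plain pipe helper. This sketch is my own, and unlike Clojure’s lazy sequences it is eager (it builds an array at each step); the FxIterable chaining in the next section recovers the laziness as well:

```typescript
// A minimal eager pipe: feeds a value through a list of functions.
const pipe = <T>(value: T, ...fns: Array<(x: any) => any>): any =>
  fns.reduce((acc, f) => f(acc), value as any);

// Curried helpers in the ->> style (data comes last).
const map = <A, B>(f: (a: A) => B) => (xs: Iterable<A>): B[] => [...xs].map(f);
const reject = <A>(pred: (a: A) => boolean) => (xs: Iterable<A>): A[] =>
  [...xs].filter(a => !pred(a));

const [first, second] = pipe(
  [1, 2, 3, 4, 5, 6],
  map((n: number) => n + 10),
  reject((n: number) => n % 2 === 1),
);

console.log(first, second); // 12 14
```

Because it is an ordinary function rather than a macro, pipe can only rearrange values at runtime, not rewrite syntax, but the reading order matches the Clojure version.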
Adding a reject Method to FxIterable

We will now add a reject method to the FxIterable class, producing the same behavior as the Clojure example that uses reject, and then express it in a chain.

[Code 2-47] Adding a reject Method to the FxIterable Class

1 class FxIterable<A> {
2 constructor(private iterable: Iterable<A>) {}
3
4 [Symbol.iterator]() {
5 return this.iterable[Symbol.iterator]();
6 }
7
8 // ... omitted methods ...
9
10 reject(f: (a: A) => boolean): FxIterable<A> {
11 return this.filter(a => !f(a));
12 }
13 }
14
15 const isOdd = (a: number) => a % 2 === 1;

Usage example:

[Code 2-48] FxIterable Chaining and Destructuring

1 const [first, second] = fx([1, 2, 3, 4, 5, 6])
2 .map(a => a + 10)
3 .reject(isOdd);
4
5 console.log(first, second);
6 // 12 14

[Code 2-46 Revisited] Clojure Pipeline with Destructuring

1 (let [[first second] (->> [1 2 3 4 5 6]
2 (map #(+ % 10))
3 (reject odd?))]
4 (println first second))
5 ;; 12 14
Both examples share the same programming paradigm and philosophy,
thus achieving essentially the same meaning and value.

Extending the Language Through Collaborative Code, Objects, and Functions

We’ve explored how the imperative syntax of Destructuring Assignment, the object-oriented Method Chaining Pattern, and functional
Higher-Order Functions cooperate closely through the iteration protocol,
achieving high-level abstractions and flexibility that give the impression of
extending the language itself.

[Code 2-48a] FxIterable Chaining and Destructuring

1 const [first, second] = fx([1, 2, 3, 4, 5, 6])
2 .map(a => a + 10)
3 .reject(isOdd);

You can classify each part of the above example code according to its role:

Imperative syntax of destructuring assignment: const [first, second] =
Object-oriented method chaining pattern: fx().map().reject()
Functional higher-order functions and LISP: map = (f: (a: A) => B, iterable: Iterable<A>) => Iterable<B>

In addition to what’s listed for [Code 2-48a], this code involves many other concepts and functionalities, such as generators from imperative code, object-oriented iterators, first-class functions, classes, generics, and type inference, all interacting to deliver a great deal of power and possibility.

Moreover, this code is not just a specialized implementation for a particular domain or problem; it shows a universal, language-like approach.
It doesn’t break from the existing language design or philosophy, so it
integrates well with compile-time type processing and runtime error
handling. It will also work seamlessly with new language features
introduced in future releases.
In conclusion, this code is implemented in a multiparadigm way and
simultaneously serves as a versatile code that can interact with all features
supported by multiparadigm languages.

Dynamically Extending Functionality at Runtime

Extending with to and Working in Harmony with OOP Objects

Earlier, we created a toArray() method for FxIterable. Then we made FxIterable itself iterable, which allows us to convert it to an Array
using the spread operator. So why keep toArray()? Because it allows you
to transform FxIterable into an array and continue chaining when it makes
sense to do so. There will be situations where toArray() is more suitable.

[Code 2-49] Chaining with toArray()

1 const sorted = fx([5, 2, 3, 1, 4, 5, 3])
2 .filter(n => n % 2 === 1)
3 .map(n => n * 10)
4 .toArray() // Convert to Array<number>
5 .sort((a, b) => a - b); // Sort in ascending order
6
7 console.log(sorted);
8 // [10, 30, 30, 50, 50]
9
10 const sorted2 = [...fx([5, 2, 3, 1, 4, 5, 3])
11 .filter(n => n % 2 === 1)
12 .map(n => n * 10)
13 ]
14 .sort((a, b) => a - b);
15
16 console.log(sorted2);
17 // [10, 30, 30, 50, 50]

sorted uses the toArray() method to convert the FxIterable object to an array, then uses the array’s sort method to order the elements. By
contrast, sorted2 uses the spread operator to compare readability. The
nested brackets and parentheses can reduce clarity because you have to
follow the code flow from map up to the beginning of the spread operator,
then back down to where the brackets close to find the sort method.
Method-chaining, on the other hand, reads and operates sequentially, which
improves readability.
Next, let’s provide a to method that allows developers to dynamically
extend FxIterable with methods like toArray, converting FxIterable to
other types as needed.

[Code 2-50] A to Method That Enables Dynamic Converters

1 class FxIterable<A> {
2 constructor(private iterable: Iterable<A>) {}
3
4 [Symbol.iterator](): Iterator<A> {
5 return this.iterable[Symbol.iterator]();
6 }
7
8 // ... omitted methods ...
9
10 to<R>(converter: (iterable: Iterable<A>) => R): R {
11 return converter(this.iterable);
12 }
13 }
14
15 const sorted = fx([5, 2, 3, 1, 4, 5, 3])
16 .filter(n => n % 2 === 1)
17 .map(n => n * 10)
18 .to(iterable => [...iterable]) // Convert iterable to spread operator
5 .sort((a, b) => a - b); // [Array<number>.sort(compareFn?: ...): number[]]
20
21 console.log(sorted); // const sorted: number[]
22 // [10, 30, 30, 50, 50]

Once we’ve converted it into an Array, type inference correctly infers Array<number>, allowing us to safely continue method chaining. The compareFn parameter in sort is also inferred to take two numbers.

Since FxIterable is itself an Iterable, we can simply implement it by passing this as shown below, and it will work exactly the same way:

[Code 2-51] Using this

1 class FxIterable<A> {
2 constructor(private iterable: Iterable<A>) {}
3
4 [Symbol.iterator](): Iterator<A> {
5 return this.iterable[Symbol.iterator]();
6 }
7
8 // ... omitted methods ...
9
10 filter(f: (a: A) => boolean) {
11 return fx(filter(f, this)); // <-- return fx(filter(f, this.iterable));
12 }
13
14 toArray() {
15 return [...this]; // <-- return [...this.iterable];
16 }
17
18 to<R>(converter: (iterable: this) => R): R {
19 return converter(this); // <-- return converter(this.iterable);
20 }
21 }
22
23 const sorted = fx([5, 2, 3, 1, 4, 5, 3])
24 .filter(n => n % 2 === 1)
25 .map(n => n * 10)
26 .to(iterable => [...iterable]) // Spread 'this' as it's an iterable
27 .sort((a, b) => a - b); // [a: number] [b: number]
28
29 console.log(sorted); // const sorted: number[]
30 // [10, 30, 30, 50, 50]

Here, the parameter to converter is typed as this, and the argument is simply this (our iterable). This makes the code concise, and type inference
works well, keeping your method chaining type-safe.

You can convert into other types, not just arrays, with the to method:

[Code 2-52] Converting to a Set

1 const set = fx([5, 2, 3, 1, 4, 5, 3])
2 .filter(n => n % 2 === 1)
3 .map(n => n * 10) // [50, 30, 10, 50, 30]
4 .to(iterable => new Set(iterable)); // Convert to a Set, removing duplicates
5
6 console.log(set);
7 // Set(3) {50, 30, 10}
8
9 const size = fx([5, 2, 3, 1, 4, 5, 3])
10 .filter(n => n % 2 === 1)
11 .map(n => n * 10)
12 .to(iterable => new Set(iterable))
13 .add(10) // [Set<number>.add(value: number): Set<number>]
14 .add(20)
15 .size; // set.size
16
17 console.log(size); // [size: number]
18 // 4
We’ve converted to a Set, then used the add method and size property
to output 4. Type inference works nicely throughout this process, providing
code hints and allowing us to safely continue chaining.

By using the to method, you can dynamically extend FxIterable at runtime to transform it into any type, all while maintaining excellent type inference.

Using Set’s Set Operations

JavaScript’s Set now supports set-related methods, so object-oriented objects and iteration can be used harmoniously in a multiparadigm style:

[Code 2-53] Set.prototype.difference

1 const set = fx([5, 2, 3, 1, 4, 5, 3])


2 .filter(n => n % 2 === 1)
3 .map(n => n * 10)
4 .to(iterable => new Set(iterable)) // Convert to a Set: Set {50, 30, 10}
5 .difference(new Set([10, 20])); // Difference with [10, 20]: Set {50, 30}
6
7 console.log([...set]);
8 // [50, 30]

This code filters out even numbers from the array, multiplies the
remaining elements by 10, removes duplicates via a Set, then finds the
difference with another Set, and finally converts the result back to an array.
Note that at the time of writing, difference() method was only available
in the latest browsers, so you might need to test this code in JavaScript to
ensure compatibility.
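Where Set.prototype.difference isn’t available, an equivalent helper can be written in a couple of lines. This sketch is my own and could be applied through the to method from earlier:

```typescript
// Difference of two sets: elements of a that are not in b.
const difference = <A>(a: Set<A>, b: Set<A>): Set<A> =>
  new Set([...a].filter(x => !b.has(x)));

const result = difference(new Set([50, 30, 10]), new Set([10, 20]));
console.log([...result]); // [50, 30]
```

Because Set preserves insertion order during iteration, the output matches what the built-in difference() produces in the example above.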

Extending with chain

To further broaden the versatility of FxIterable, let’s add another method similar to to. This time, we’ll introduce a chain method that
accepts a function which takes an iterable and returns another iterable,
allowing the result to continue in the FxIterable chain. This enables you to
dynamically incorporate newly generated iterables directly into your chain,
flexibly applying various transformations.

[Code 2-54] Adding a chain Method to the FxIterable Class

1 class FxIterable<A> {
2 constructor(private iterable: Iterable<A>) {}
3
4 [Symbol.iterator](): Iterator<A> {
5 return this.iterable[Symbol.iterator]();
6 }
7
8 // ... omitted methods ...
9
10 chain<B>(f: (iterable: this) => Iterable<B>): FxIterable<B> {
11 return fx(f(this)); // new FxIterable(f(this));
12 }
13 }

With chain, you can dynamically create any function that takes an
iterable and returns a new iterable, extending FxIterable at runtime.

[Code 2-55] chain + Set

1 const result = fx([5, 2, 3, 1, 4, 5, 3])
2 .filter(n => n % 2 === 1)
3 .map(n => n * 10) // [50, 30, 10, 50, 30]
4 .chain(iterable => new Set(iterable)) // Convert to Set, remove duplicates
5 .reduce((a, b) => a + b); // [FxIterable<number>.reduce<number>(f: ...): number]
6
7 console.log(result); // [result: number]
8 // 90
9
10 const result2 = fx([5, 2, 3, 1, 4, 5, 3])
11 .filter(n => n % 2 === 1)
12 .map(n => n * 10) // [50, 30, 10, 50, 30]
13 .chain(iterable => new Set(iterable)) // Convert to Set, remove duplicates
14 .map(n => n - 10) // [FxIterable<number>.map<number>(f: ...): FxIterable<number>]
15 .reduce((a, b) => `${a}, ${b}`); // [FxIterable<number>.reduce<string>(f: ...): string]
16
17 console.log(result2); // [result2: string]
18 // 40, 20, 0
We’ve seen how to combine classes, higher-order functions, iterators,
and the type system, integrating tightly with the language to further extend
the flexibility of FxIterable. By adding the chain method, we can
dynamically apply any function returning an iterable or convert a collection
to another data structure (like a Set) and continue chaining. All of this
merges naturally with the language, providing type-safe method chaining
that integrates well with features like destructuring assignment. Thanks to
TypeScript’s type system, type inference works smoothly, letting us perform
transformations and continue chaining without needing explicit type
declarations.
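As one more illustration of what chain enables, here is a stripped-down, self-contained FxIterable (only the members needed for this example, so it is not the book’s full class) with an inline generator joining the chain:

```typescript
function* mapGen<A, B>(f: (a: A) => B, iterable: Iterable<A>): IterableIterator<B> {
  for (const a of iterable) yield f(a);
}

// Lazily yields at most n elements from the source.
function* take<A>(n: number, iterable: Iterable<A>): IterableIterator<A> {
  for (const a of iterable) {
    if (n-- <= 0) return;
    yield a;
  }
}

class FxIterable<A> {
  constructor(private iterable: Iterable<A>) {}
  [Symbol.iterator]() { return this.iterable[Symbol.iterator](); }
  map<B>(f: (a: A) => B): FxIterable<B> { return new FxIterable(mapGen(f, this)); }
  chain<B>(f: (iterable: this) => Iterable<B>): FxIterable<B> { return new FxIterable(f(this)); }
  toArray(): A[] { return [...this]; }
}

const result = new FxIterable([1, 2, 3, 4, 5])
  .map(n => n * 2)              // 2, 4, 6, 8, 10 (lazy)
  .chain(iter => take(3, iter)) // keep only the first three
  .toArray();

console.log(result); // [2, 4, 6]
```

Any function that accepts an iterable and returns one, whether a generator like take or a constructor like Set, can slot into the chain this way.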

The Joy of Extending a Language

The ability for developers to instantly extend a language is perhaps the most appealing feature of metaprogramming. I believe that a crucial turning point, which allowed object-oriented languages to approach the level of metaprogramming in LISP-family languages, was the adoption of first-class functions. Although we could have used interfaces and the iterator pattern in the past, without first-class functions (which allow external functions to be used directly within an iterator), implementing a wide range of functional programming features would have been impossible.

In modern programming languages, first-class functions have only recently been introduced. For example, around 2013, the primary languages for mobile app development, Objective-C (iOS) and Java (Android), did not support first-class functions. Then in March 2014, Java 8 introduced first-class functions, but Objective-C ultimately never supported them. Objective-C did introduce “blocks” around 2010, which were a similar feature but quite different from lambda expressions; they had complex variable capture syntax and required manual memory management. Furthermore, blocks were never completely integrated with the standard library or the language’s type system, so their use was largely limited to sharing context in asynchronous programming.

Java introduced first-class functions and the Stream API in Java 8 (2014).
Swift supported Sequence, Iterator protocols, and first-class functions
from its initial release in 2014.
JavaScript/TypeScript included iterators and generators in ES6 (2015).
Kotlin has supported first-class functions and iterable-based iteration
since its first releases in 2016.
C# has provided IEnumerable and IEnumerator interfaces from early
versions, and introduced many helper functions with LINQ in 2007.

In short, the addition of first-class functions to class-based iterator patterns has led numerous languages to evolve into multiparadigm
patterns has led numerous languages to evolve into multiparadigm
languages. Moreover, with the introduction of iteration protocols, you can
now extend language functionality in a consistent and standardized way.
Hence, without modifying language specifications or compilers, developers
can organically combine classes, functional higher-order functions, object-
oriented patterns, generics, currying, and iterable protocols to achieve
sophisticated abstractions and language extension effects.

Of course, TypeScript, Swift, Kotlin, C#, and Java do not provide the
same scope or intensity of metaprogramming as LISP-family languages. Yet
they still allow for sufficiently rich and high-level abstraction. These
modern languages also offer powerful type systems and various object-
oriented programming support features, enabling more patterned designs
and implementations, and they have established themselves as mainstream
languages widely utilized across diverse platforms.

Ultimately, deeply understanding and strategically utilizing the various features in modern multiparadigm languages becomes a powerful weapon
for developers. With a solid foundation, developers can approach problems
creatively, resulting in effective and extensible solutions.
2.4 Mid-Chapter Summary
Type Inference and Function Types in TypeScript

TypeScript adds a type system to JavaScript, enhancing code reliability and readability. Its type inference feature allows you to write safe code
without explicit type declarations, automatically inferring types for
variables and function return values. For example, in let a = 10;, a is
inferred to be number, and in a function like (a: number, b: number) =>
a + b;, the return type is automatically determined to be number.

Type Inference for Variables and Constants

When you initialize variables and constants, TypeScript infers their types from the assigned values. Declaring const selected = true; infers
that selected has the literal type true, whereas declaring let checked =
true; infers that checked is of type boolean.
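A quick sketch of the distinction, with the inferred (hover) types shown as comments; the literal-vs-widened inference for const and let is standard TypeScript behavior:

```typescript
const selected = true; // inferred as the literal type: true
let checked = true;    // inferred as: boolean

checked = false;       // fine: boolean accepts false
// selected = false;   // error: cannot assign to a const, and its type is the literal true

console.log(selected, checked); // true false
```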

Type Inference via Generics

Generic functions let you implement polymorphic functions that can handle diverse types. For example, a generic function like function
identity<T>(arg: T): T; flexibly determines its return type depending on
the type of its argument.

Function Types and Generics

To support functional programming, TypeScript provides features such as higher-order functions, function types, and generics. These enable you to
clearly define both the input and output types of functions and to implement
versatile functions that accommodate various types with ease.
Iterable Helper Functions

Iterable helper functions follow a pattern where higher-order functions revolve around iterables. In TypeScript, you can supply precise type information for these functions, further improving code safety and readability. By leveraging generics (e.g., function* map<A, B>(f: (a: A) => B, iterable: Iterable<A>): IterableIterator<B>), you can create iteration functions with clearly defined input and output types.

LISP and Metaprogramming

LISP-family languages treat code and data identically, making metaprogramming easy to implement. LISP’s S-expressions have a list-based syntax that can treat code itself as data, offering powerful capabilities to dynamically transform and generate code.

Class + Higher-Order Functions + Iterable

As seen in the FxIterable class example, combining classes, higher-order functions, and iterables can produce highly expressive abstractions. For instance, const [first, second] = fx([1, 2, 3, 4]).map(a => a * 10); shows how a user-defined object can be flexibly integrated with language features (in this case, destructuring) and how you can use a chaining pattern to write highly readable code.

High-Level Abstractions Achievable in Modern Programming Languages

The introduction of first-class functions and the spread of iteration protocols have gone beyond the traditional class-based iterator pattern, achieving a higher level of abstraction that naturally combines various paradigms in modern programming languages. Major languages—such as Java 8 (2014), Swift (2014), JavaScript/TypeScript (ES6, 2015), Kotlin (2016), and C# (LINQ, 2007)—provide a variety of features, including first-class functions, iterables, generics, higher-order functions, currying, and rich type systems. This makes it possible to organically combine object-oriented components (including classes), core concepts of functional programming, and standardized iteration methods without modifying language specs or compilers.

As a result, developers are no longer restricted to a single paradigm. They can implement whatever level of abstraction they need to handle diverse requirements, all built upon a solid foundation that can be widely applied across various platforms and domains. This robust foundation enables practical language extensions and patterns that can be adapted to address a wide range of problem domains.

3. Code:Object:Function =
Generator:Iterator:LISP = IP:OOP:FP
Generators written in imperative code, Iterators that implement the
object-oriented Iterator pattern, and list processing (the core of LISP)
share deep interconnections. They act as both creators and consumers of
each other, enabling highly abstract programming while offering a prime
example of how IP (Imperative Programming), OOP (Object-Oriented
Programming), and FP (Functional Programming) can converge.

Below is a summary of the concepts covered in Chapters 1 and 2, viewed through the lens of Generator:Iterator:LISP = IP:OOP:FP.

1. Iterator is the concrete implementation of the Iterator pattern.

An iterator is an object that traverses collection-like data using a generalized pattern.
Owing to its laziness, which evaluates values only when needed, it can handle not just finite collections but also infinite sequences.

2. Generator creates iterators using imperative code.

A generator function is a mechanism for building an iterator in an imperative code style.
Because execution can pause and resume at yield, the generator’s code is effectively evaluated lazily in list-sized chunks. This aligns with the “code as list, list as code” perspective of LISP.

3. Iterable Iterators can be handled imperatively, object-orientedly, or functionally.

Imperatively, you can call next() directly inside a while loop or use
for...of and spread (...) to iterate.
Object-oriented approaches can encapsulate iterable iterators in a
class or chain multiple iterators internally.
Functionally, you can use higher-order functions (HOFs) to handle
iterators by passing functions that process each element, implementing
iteration logic through function composition while maximizing lazy
evaluation and list processing.

4. Multiple Ways to Create Iterators

You can build an iterator yourself by directly implementing the Iterator interface, or produce one imperatively via a generator function.
You can also generate one in a functional style by combining list-
processing based functions (to be examined in Chapter 3).
Ultimately, iterators can be created in these three ways, each
substitutable on a 1:1:1 basis:
1. Imperative (IP) – Generating iterators with a generator
2. Object-Oriented (OOP) – Directly implementing an iterator
object
3. Functional (FP) – Combining list-processing functions to
produce an iterator
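To make this 1:1:1 substitutability concrete, here is a hedged TypeScript sketch that builds the same iterator of squares in each of the three styles (the names squaresGen, squaresObj, and squaresFn are illustrative, not from the book):

```typescript
// The same iterator of squares built three ways (illustrative sketch).

// 1. Imperative (IP) – via a generator
function* squaresGen(nums: number[]): IterableIterator<number> {
  for (const n of nums) yield n * n;
}

// 2. Object-Oriented (OOP) – implementing the iterator object directly
function squaresObj(nums: number[]): IterableIterator<number> {
  let i = 0;
  return {
    next(): IteratorResult<number> {
      if (i < nums.length) {
        const n = nums[i++];
        return { value: n * n, done: false };
      }
      return { value: undefined, done: true };
    },
    [Symbol.iterator]() {
      return this; // an iterable iterator returns itself
    },
  };
}

// 3. Functional (FP) – composing a list-processing function
function* map<A, B>(f: (a: A) => B, iterable: Iterable<A>): IterableIterator<B> {
  for (const a of iterable) yield f(a);
}
const squaresFn = (nums: number[]) => map((n: number) => n * n, nums);

console.log([...squaresGen([1, 2, 3])]); // [ 1, 4, 9 ]
console.log([...squaresObj([1, 2, 3])]); // [ 1, 4, 9 ]
console.log([...squaresFn([1, 2, 3])]);  // [ 1, 4, 9 ]
```

Because all three return values satisfy the same iteration protocol, consumers such as for...of, spread, and destructuring cannot tell them apart; the construction style is purely an implementation detail.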

All this demonstrates how the iteration protocol serves as a linking bridge across various paradigms. In Chapter 3, we’ll focus specifically on handling iterators from a functional perspective and on lazy evaluation—expanding the Lisp-inspired idea of “code as lists” and examining how these multi-pronged approaches form the robust foundation of modern programming.
3.1 Code Is Data – A List Containing Logic
Chapters 1 and 2 covered the process of applying functional
programming, list processing, metaprogramming, and a functional type
system in multiparadigm languages. In this chapter, we’ll bring all that
discussion into real-world scenarios by writing functional code to solve
problems you might encounter in daily life. The first step is to replace
imperative code with list-processing functions.

[for, i++, if, break] – Thinking of Code as a List

Viewing code as a list is a powerful tool for expanding your programming paradigms. In functional programming, the principle that code is data and data is code enables developers to produce code that is more readable and maintainable. In this section, we will explore how to process code as a list by transforming imperative constructs such as for, i++, if, and break into functional list-processing functions.

An Imperative Function That Squares n Odd Numbers and Sums Them All

[Code 3-1] shows a function that squares n odd numbers and sums
them all. When this code is executed, it selects the first three odd numbers
(1, 3, 5) from the list array, calculates their squares (1^2, 3^2, 5^2), and
adds them up. The final sum, 1 + 9 + 25 = 35, is printed.

[Code 3-1] Summing the Squares of n Odd Numbers

function sumOfSquaresOfOddNumbers(limit: number, list: number[]): number {
  let acc = 0;
  for (const a of list) {
    if (a % 2 === 1) {
      const b = a * a;
      acc += b;
      if (--limit === 0) break;
    }
  }
  return acc;
}

console.log(
  sumOfSquaresOfOddNumbers(3, [1, 2, 3, 4, 5, 6, 7, 8, 9])
);
// 35

We will now transform this code into a functional style while exploring how to handle code as a list.

Let’s recap what sumOfSquaresOfOddNumbers is doing, in list form:

1. Iteration: for (const a of list) iterates through each element of list. The variable a represents the current element of the array.
2. Odd Check: Using if (a % 2 === 1), the code checks whether a is
odd. If it is odd, it proceeds to the next step.
3. Square Calculation: const b = a * a; calculates the square of the
odd number a and stores it in b.
4. Accumulated Sum Update: acc += b; adds b to the accumulated
sum.
5. Length Check & Exit: if (--limit === 0) break; decreases
limit, and if limit reaches 0, it breaks out of the loop.
6. Return Result: return acc; returns the final accumulated sum.

Replacing if with filter

We can replace the if statement with a filter function.

[Code 3-2] Replacing if with filter

function sumOfSquaresOfOddNumbers(limit: number, list: number[]): number {
  let acc = 0;
  for (const a of filter(a => a % 2 === 1, list)) {
    const b = a * a;
    acc += b;
    if (--limit === 0) break;
  }
  return acc;
}

console.log(
  sumOfSquaresOfOddNumbers(3, [1, 2, 3, 4, 5, 6, 7, 8, 9])
);
// 35
Here, we replaced the list part of [Code 3-1] with filter(...) and
moved the condition if (a % 2 === 1) {} into the callback function for
filter, removing the original conditional. Now that filter(a => a % 2
=== 1, list) leaves only odd numbers, the inner code of the for loop,
which calculates squares and sums them, becomes one step simpler.

In this example, we took the code that extracted a from list and then controlled execution with if (a % 2 === 1) {}, and transformed it into a filtered list produced by the filter function. The code inside the for...of loop doesn’t need to know whether a is filtered or not; it simply squares each element and adds it to the sum.

To summarize what we’ve covered so far:

1. Scattered code statements have been replaced by list-processing function executions.
2. filter(a => a % 2 === 1, list) is both code that performs filter
logic and a list.
3. The conditional has been removed from the internal logic, making the
code clearer and simpler.
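The filter used above behaves like a lazy list. A minimal generator-based sketch of such a filter (an assumed implementation, consistent with the iterable helpers of Chapter 2, not necessarily the book’s exact code) looks like this:

```typescript
// A hedged sketch of a lazy filter over any iterable.
function* filter<A>(
  f: (a: A) => boolean,
  iterable: Iterable<A>
): IterableIterator<A> {
  for (const a of iterable) {
    if (f(a)) yield a; // yield only the elements that pass the predicate
  }
}

console.log([...filter(a => a % 2 === 1, [1, 2, 3, 4, 5])]); // [ 1, 3, 5 ]
```

Because it is a generator, nothing is filtered until a consumer pulls values, which is what lets the for...of loop drive the work.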

Replacing the Variable Assignment after Value Transformation with map

Through this example, we are learning together how to gradually refactor code into a functional style without touching its overall structure. Now, let’s use map to eliminate variable assignments after value transformations.

[Code 3-3] Code Replaced by map

function sumOfSquaresOfOddNumbers(limit: number, list: number[]): number {
  let acc = 0;
  for (const a of map(a => a * a, filter(a => a % 2 === 1, list))) {
    acc += a;
    if (--limit === 0) break;
  }
  return acc;
}

console.log(
  sumOfSquaresOfOddNumbers(3, [1, 2, 3, 4, 5, 6, 7, 8, 9])
);
// 35

In this example, map has replaced the code const b = a * a;, which
declares a variable and assigns it the squared value. Consequently, map(a
=> a * a, filter(a => a % 2 === 1, list)) has become a lazy iterator
that creates a list of squared odd numbers, and we’re extracting these
squared values of a by iterating through it with the for...of loop.
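Similarly, the map used here can be sketched as a lazy generator matching the generic signature shown in the Chapter 2 summary (again an assumed implementation, not necessarily the book’s exact code):

```typescript
// A hedged sketch of a lazy map, matching the signature
// function* map<A, B>(f: (a: A) => B, iterable: Iterable<A>): IterableIterator<B>
function* map<A, B>(
  f: (a: A) => B,
  iterable: Iterable<A>
): IterableIterator<B> {
  for (const a of iterable) {
    yield f(a); // transform each element lazily, one at a time
  }
}

console.log([...map(a => a * a, [1, 3, 5])]); // [ 1, 9, 25 ]
```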

Once more, code statements are being replaced with a combination of lists and functions. To convey this concept more effectively, it helps to describe this code in LISP syntax, because LISP syntax more directly represents the idea of treating code as a list.

[Code 3-4] is code written in Scheme, a LISP-family language.

[Code 3-4] Scheme Code

; Scheme
(define list '(1 2 3 4 5))

(define (square x) (* x x))

(map square (filter odd? list))
; (1 9 25)

; JavaScript
; map(square, filter(isOdd, list))

As demonstrated in [Code 3-4], the syntax for creating a list, such as (1 2 3 4 5), and the syntax for calculating the square of x using the multiplication operator * in (* x x) are identical, with the exception of the single quote (’). In LISP, both the syntax for defining data, such as numeric arrays, and expressions for calculations, such as squaring, are represented as lists. Moreover, function calls are also notated as lists; the list (* x x) contains one * and two x, and when this list-as-code is evaluated, it computes the square of x.
Examining (filter odd? list) from the same perspective, it is a list
containing filter, odd?, and list. Since list is (1 2 3 4 5), (filter
odd? list) effectively becomes a nested list: (filter odd? '(1 2 3 4
5)). Similarly, (map square (filter odd? list)) is a multi-level nested
list, as shown in [Code 3-4a].

[Code 3-4a] Scheme Code

(map (lambda (x) (* x x)) (filter odd? '(1 2 3 4 5)))

In LISP, programs are executed by evaluating expressions that serve simultaneously as lists, code, and data. In this language, lists are code, code is lists, and nested lists embody both algorithms and logic. LISP’s syntax beautifully reflects this philosophy and expresses it with remarkable elegance.

TypeScript, the primary language used in this book, differs from LISP only slightly in syntax. Thanks to its iteration protocol based on iterables and iterators, you can directly apply the same paradigm, as demonstrated by the examples provided so far.

Replacing break with take

Now we’ll replace if (--limit === 0) break; with take.

[Code 3-5] Code Replaced by take

function* take<A>(limit: number, iterable: Iterable<A>): IterableIterator<A> {
  const iterator = iterable[Symbol.iterator]();
  while (true) {
    const { value, done } = iterator.next();
    if (done) break;
    yield value;
    if (--limit === 0) break;
  }
}

function sumOfSquaresOfOddNumbers(limit: number, list: number[]): number {
  let acc = 0;
  for (const a of take(limit, map(a => a * a, filter(a => a % 2 === 1, list)))) {
    acc += a;
  }
  return acc;
}

console.log(
  sumOfSquaresOfOddNumbers(3, [1, 2, 3, 4, 5, 6, 7, 8, 9])
);
// 35

The take function returns a lazy iterable iterator that produces a lazy
list, yielding elements from the given iterable up to the specified limit. In
this case, it returns only the first three elements.

In [Code 3-5], take(limit, map(a => a * a, filter(a => a % 2 === 1, list))) is lazily evaluated, meaning no computations occur until explicitly requested. When the for...of loop retrieves values, it returns 1 on the first iteration, 9 on the second, and finally 25, after which the loop concludes.

We should note that even though we’ve removed the break statement
that would normally exit the loop, the time complexity remains the same.
The break keyword enhances efficiency by ensuring that the loop executes
only as many iterations as necessary. Through take, we’ve demonstrated
that even control statements like break can be conceptualized as lists. Lazy
evaluation is the key enabling factor for this approach, which we’ll explore
more thoroughly in section ‘3.3 Looking Deeper into Lazy Evaluation’.
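This claim can be checked by counting how often the squaring function actually runs. The sketch below uses simplified local versions of filter, map, and take (assumptions, not the book’s exact implementations) and shows that with a limit of 3 the square is computed only three times, even though five odd numbers exist:

```typescript
// Demonstrating that the lazy pipeline does no extra work.
function* filter<A>(f: (a: A) => boolean, it: Iterable<A>): IterableIterator<A> {
  for (const a of it) if (f(a)) yield a;
}
function* map<A, B>(f: (a: A) => B, it: Iterable<A>): IterableIterator<B> {
  for (const a of it) yield f(a);
}
function* take<A>(limit: number, it: Iterable<A>): IterableIterator<A> {
  for (const a of it) {
    yield a;
    if (--limit === 0) break; // stop pulling from upstream once full
  }
}

let squares = 0;
const square = (a: number) => { squares++; return a * a; };

const result = [
  ...take(3, map(square, filter(a => a % 2 === 1, [1, 2, 3, 4, 5, 6, 7, 8, 9])))
];
console.log(result);  // [ 1, 9, 25 ]
console.log(squares); // 3 (square was never called for 7 or 9)
```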

Looking at the remaining imperative code, we can see it’s essentially identical to an implementation of reduce.

Summation with reduce

Finally, let’s replace the imperative summation code with one that uses
the reduce function.

[Code 3-6] Replacing with reduce

const sumOfSquaresOfOddNumbers = (limit: number, list: number[]): number =>
  reduce((a, b) => a + b, 0,                 // add(add(1, 9), 25)
    take(limit,                              // [(1), (9), (25)]
      map(a => a * a,                        // [(1), (9), (25), (49), (81)]
        filter(a => a % 2 === 1, list))));   // [(1), (3), (5), (7), (9)]

console.log(
  sumOfSquaresOfOddNumbers(3, [1, 2, 3, 4, 5, 6, 7, 8, 9])
);
// 35

As it evaluates this lazy list, reduce extracts elements. It first accumulates by adding the first element (1) and the second element (9), and then adds the third element (25) to yield a final result of 35.
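The reduce used here consumes an iterable with an explicit initial value. A minimal sketch consistent with that usage (an assumed implementation, not necessarily the book’s exact code):

```typescript
// A hedged sketch of reduce over iterables, assuming an explicit initial value
// (the form used in [Code 3-6]).
function reduce<A, Acc>(
  f: (acc: Acc, a: A) => Acc,
  acc: Acc,
  iterable: Iterable<A>
): Acc {
  for (const a of iterable) {
    acc = f(acc, a); // fold each element into the accumulator
  }
  return acc;
}

console.log(reduce((a, b) => a + b, 0, [1, 9, 25])); // 35
```

Because reduce iterates eagerly, it is what finally drives the lazy filter/map/take chain to produce values.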

Switching to a Chaining Style

Because take is used frequently, let’s add a take method to FxIterable so that we can write the code in a more readable, sequential manner.

[Code 3-7] Chaining

class FxIterable<A> {
  constructor(private iterable: Iterable<A>) {}

  // ... omitted methods ...

  take(limit: number): FxIterable<A> {
    return fx(take(limit, this)); // new FxIterable(take(limit, this));
  }
}

// Functional code:
const sumOfSquaresOfOddNumbers = (limit: number, list: number[]): number =>
  fx(list)                       // [1, 2, 3, 4, 5, 6, 7, 8, 9]
    .filter(a => a % 2 === 1)    // [(1), (3), (5), (7), (9)]
    .map(a => a * a)             // [(1), (9), (25), (49), (81)]
    .take(limit)                 // [(1), (9), (25)]
    .reduce((a, b) => a + b, 0); // add(add(1, 9), 25)

console.log(
  sumOfSquaresOfOddNumbers(3, [1, 2, 3, 4, 5, 6, 7, 8, 9])
);
// 35

// Imperative code:
// function sumOfSquaresOfOddNumbers(limit: number, list: number[]): number {
//   let acc = 0;
//   for (const a of list) {
//     if (a % 2 === 1) {
//       const b = a * a;
//       acc += b;
//       if (--limit === 0) break;
//     }
//   }
//   return acc;
// }

We have now transformed the imperative function into a functional programming style. The code has become more declarative, significantly enhancing readability. By using descriptive function names like filter, map, take, and reduce, the purpose of each code segment is clear, and the sequence of operations can be easily followed from top to bottom.

All parts of the code have been replaced with list-processing functions,
essentially producing nested lists.

What sumOfSquaresOfOddNumbers Does, in List Form

Let’s revisit what the imperative implementation of sumOfSquaresOfOddNumbers accomplished:

1. Iteration: for (const a of list) iterates through each element of list. a is the current element of the array.
2. Odd Check: The condition if (a % 2 === 1) checks whether a is odd, proceeding only if it is.
3. Square Calculation: The statement const b = a * a; computes the square of a and stores it in b.
4. Accumulated Sum Update: The statement acc += b; adds b to the accumulator acc.
5. Length Check & Exit: The condition if (--limit === 0) break; decreases limit and exits the loop when it reaches 0.
6. Return Result: Finally, return acc; returns the accumulated sum.

This sequence is replaced by the following list-represented functional code:

1. Iteration: fx(list) creates a lazy list to iterate over.
2. Odd Check: filter(a => a % 2 === 1) yields a lazy list containing only odd numbers.
3. Square Calculation: map(a => a * a) produces a lazy list where each element is the square of the filtered numbers.
4. Length Check & Exit: take(limit) produces a lazy list limited to limit elements.
5. Accumulated Sum Update: reduce((a, b) => a + b, 0) sums all the elements.
6. Return Result: the arrow function => returns the value of evaluating these nested lists as the accumulated sum.

List processing, as demonstrated, effectively transforms lines of imperative code into lists. By treating code as values (lists) and functions as first-class citizens, complex problems are solved by composing small, modular blocks of code. This is the essence of functional programming and list processing, enabling the decomposition of intricate logic into manageable, self-contained list elements. Consequently, code implemented with list processing is more readable, easier to maintain, and each component’s role is distinctly defined.

List Processing in Modern Languages – Clojure, Kotlin, Swift, Scala, C#, Java

Modern programming languages support functional programming paradigms that align with the list-processing philosophy we have discussed. Below, we implement the complete functional code from [Code 3-7] in various languages to verify this approach.

Implementing sumOfSquaresOfOddNumbers in Other Languages

[Code 3-8] Clojure

(defn square [x]
  (* x x))

(defn sumOfSquaresOfOddNumbers [limit list]
  (->> list
       (filter odd?)
       (map square)
       (take limit)
       (reduce +)))

(println (sumOfSquaresOfOddNumbers 3 [1 2 3 4 5 6 7 8 9]))
; 35

[Code 3-8] is written in Clojure, a language focused on the functional programming paradigm. While not primarily the multiparadigm language this book focuses on, Clojure is a modern LISP-family language with strong interoperability with Java and widespread usage in various communities, providing significant inspiration.

->> is a macro that allows pipeline-like expressions. It takes code like (filter odd?), (map square), etc., which expect sequences to arrive later, applies them consecutively to list, then feeds each result into subsequent functions. And in true LISP spirit, you can pass the + symbol as the reducer function to reduce.

[Code 3-9] Kotlin

fun sumOfSquaresOfOddNumbers(limit: Int, list: List<Int>): Int {
    return list.asSequence()
        .filter { it % 2 == 1 }
        .map { it * it }
        .take(limit)
        .fold(0) { a, b -> a + b }
}

fun main() {
    val result = sumOfSquaresOfOddNumbers(3, listOf(1, 2, 3, 4, 5, 6, 7, 8, 9))
    println(result) // 35
}

Kotlin supports iteration via the Iterable interface, using asSequence() to enable lazy operations. The standard library provides a high level of functional programming support, including higher-order functions like filter, map, take, reduce, and fold. It also offers concise, unique lambda syntax using the keyword it to refer to the current item. Kotlin’s robust type system, combined with its classes, makes it a multiparadigm language that also delivers readable, succinct functional code.
[Code 3-10] Swift

func sumOfSquaresOfOddNumbers(limit: Int, list: [Int]) -> Int {
    return list.lazy
        .filter { $0 % 2 == 1 }
        .map { $0 * $0 }
        .prefix(limit)  // same as take
        .reduce(0, +)   // Swift’s reduce requires an initial value; it cannot be omitted
}

print(sumOfSquaresOfOddNumbers(limit: 3, list: [1, 2, 3, 4, 5, 6, 7, 8, 9]))
// 35

Swift uses the lazy keyword to enable lazy operations, combining it with the Sequence protocol for high-performance lazy evaluation. Swift’s standard library provides higher-order functions like filter, map, prefix, and reduce, while its strong type inference keeps the code concise. Moreover, Swift performs extensive optimizations at compile time, maintaining high performance even for code that uses lazy evaluation and higher-order functions. It’s also appealing that the reduce function can directly use the + operator as its accumulator function.

Swift strongly supports the functional paradigm but can also combine
it with imperative and object-oriented paradigms. Additionally, Apple’s
powerful SDK and platforms enable the development of elegant, high-
performing UI/UX apps for its devices and operating systems, making the
language all the more distinctive.

[Code 3-11] Scala

object Main extends App {
  def sumOfSquaresOfOddNumbers(limit: Int, list: List[Int]): Int = {
    list.to(LazyList)
      .filter(_ % 2 == 1)
      .map(a => a * a)
      .take(limit)
      .foldLeft(0)(_ + _)
  }

  println(sumOfSquaresOfOddNumbers(3, List(1, 2, 3, 4, 5, 6, 7, 8, 9)))
  // 35
}
Scala is also a multi-paradigm language that combines functional
programming and object-oriented programming, offering high-level
functional programming capabilities. By using LazyList, Scala supports
lazy evaluation, computing elements only when necessary to enhance
processing efficiency.

In Scala’s lambda expressions, you can use an underscore (_) to refer to the current element, or you can use explicit lambda expressions such as a
=> a * a. Additionally, Scala provides concise syntax like (_ + _). When
the compiler clearly knows how many arguments a function needs, it infers
that number and creates an anonymous function accordingly. For example,
based on the fact that the partially applied foldLeft(0) requires a function
with two parameters, the compiler interprets _ + _ as a function that takes
two arguments and sums them. It’s a remarkably concise style of
expression.

[Code 3-12] C#

using System;
using System.Collections.Generic;
using System.Linq;

public class LispTest
{
    public static void Main()
    {
        List<int> list = new List<int> { 1, 2, 3, 4, 5, 6, 7, 8, 9 };
        int result = SumOfSquaresOfOddNumbers(3, list);
        Console.WriteLine(result); // 35
    }

    static int SumOfSquaresOfOddNumbers(int limit, List<int> list)
    {
        return list.Where(a => a % 2 == 1)
                   .Select(a => a * a)
                   .Take(limit)
                   .Aggregate(0, (a, b) => a + b);
    }
}

C# supports advanced functional programming capabilities through its LINQ (Language Integrated Query) feature. Developers can write concise and highly readable code using higher-order functions such as Where, Select, Take, and Aggregate—which correspond directly to the functional programming concepts of filter, map, take, and reduce respectively.

[Code 3-13] C# SQL-Style Syntax

static int SumOfSquaresOfOddNumbers(int limit, List<int> list)
{
    var query = from num in list
                where num % 2 == 1
                select num * num;

    return query.Take(limit).Aggregate(0, (acc, a) => acc + a);
}

In particular, LINQ offers the unique advantage of supporting SQL-like syntax for certain operations. By utilizing SQL-like keywords such as from, where, and select, developers can craft SQL-resembling expressions without the need for additional brackets or symbols in their programming language, a particularly appealing feature. This design choice likely explains why the functions were named where and select instead of the functional programming standards filter and map. The LINQ query syntax is especially intuitive for developers with SQL experience, significantly enhancing the readability of data queries and transformations.

C# introduced LINQ in November 2007, demonstrating that it has been a pioneering multi-paradigm language that integrated functional programming concepts long before many other mainstream languages.

[Code 3-14] Java

import java.util.Arrays;
import java.util.List;

public class Main {
    public static void main(String[] args) {
        List<Integer> list = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9);
        int result = sumOfSquaresOfOddNumbers(3, list);
        System.out.println(result); // 35
    }

    public static int sumOfSquaresOfOddNumbers(int limit, List<Integer> list) {
        return list.stream()
                   .filter(a -> a % 2 == 1)
                   .map(a -> a * a)
                   .limit(limit) // same as take
                   .reduce(0, Integer::sum);
    }
}

Java supports functional programming through its Stream API. You can transform and process collections using stream methods like filter, map, limit, and reduce. Java’s lambda expressions are concise and expressive, and when combined with the Stream API, they make manipulating complex data straightforward. Stream.reduce in Java comes in two forms: one requiring an initial value, and another that takes no initial value and returns an Optional. When omitting the initial value, you receive an Optional, necessitating additional steps to unwrap the actual value.

It’s worth noting that each language’s reduce method handles the initial value slightly differently. These design choices aim to either prevent unexpected situations or help developers discover them—specifically when encountering empty collections (arrays, streams, or iterables) at runtime that could cause errors. The languages essentially suggest appropriate handling methods to developers. In TypeScript and JavaScript specifically, when no initial value is provided and the reduce method encounters an empty array or iterable, it throws an error.

Concepts and Paradigms Applicable Across Languages

The languages we’ve examined (JavaScript, TypeScript, Clojure, Kotlin, Swift, Scala, C#, and Java) each support lazy evaluation through their unique iteration protocols or similar mechanisms. By implementing functional programming principles at the language level, they enhance code readability and maintainability while maximizing data processing efficiency.

Modern languages are actively adopting and refining the functional paradigm, with some optimizing functional code during compilation. Even Java, traditionally an object-oriented language, has evolved into a multi-paradigm language featuring a rich set of functional higher-order functions.

JavaScript particularly enables high-level functional library implementation through various protocols like Iterable, Iterator, Generator, AsyncGenerator, and AsyncIterator. The ECMAScript Stage 3 “Iterator Helpers” specification aims for functionality nearly identical to what we’ve implemented in this book, and will eventually provide built-in helper functions supporting lazy evaluation at the language level.

Ultimately, multi-paradigm thinking and problem-solving abilities that combine object-oriented, imperative, and functional approaches transcend specific languages. Languages with robust type systems and type inference—such as TypeScript, Kotlin, Swift, Scala, C#, and Java—allow developers to leverage classes, interfaces, iteration protocols, and functional higher-order functions simultaneously. The concepts and principles covered in this book apply across these languages, contributing to safer and more efficient code implementation across diverse environments.

3.2 Learning from Haskell


Haskell is considered a purely functional programming language and
features elegant syntax designed to embody the functional paradigm. It
emphasizes pure functions and function composition, uses currying by
default, supports lazy evaluation, and handles side effects in specialized
ways. Moreover, it offers a wealth of distinctive features for functional
programming, including a powerful type system, type inference, algebraic
data types, and type classes that facilitate high polymorphism.

Chapter 3.4 incorporates Haskell examples and content to examine TypeScript’s find function and optional value handling from a broader perspective. To fully grasp these concepts, it is beneficial to first explore Haskell’s syntax and characteristics. Therefore, Chapter 3.2 has been dedicated to an introduction to Haskell. By briefly studying Haskell in this chapter, readers will gain deeper insights into various features relevant to modern programming languages, enabling richer applications.

Functions and Function Signatures in Haskell

In Haskell, function signatures provide crucial insights into the fundamental concepts of functional programming. They explicitly define the input and output types of a function, allowing for an intuitive understanding of its behavior.

[Code 3-15] Haskell’s square function

square :: Int -> Int
square x = x * x

Here, square :: Int -> Int means that the square function takes an
argument of type Int and returns a value of type Int. The :: symbol
denotes a type declaration. In the expression square x = x * x, square is
the function name, x is the parameter, and the = sign indicates that the
function is defined by returning the expression on its right-hand side.

Expressed in TypeScript, the code would be as follows:

[Code 3-16] TypeScript’s square function

function square(x: number): number {
  return x * x;
}

Alternatively, we can predefine the type, clearly specifying the function type:

[Code 3-17] Defining a function type in TypeScript

type Square = (x: number) => number;
const square: Square = x => x * x;
Both Haskell and TypeScript can clarify the function’s input and
output types by declaring function signatures. This is very helpful in
effectively using functional programming, and being familiar with function
signatures in various languages is valuable for understanding and applying
the functional paradigm.

Elegant Syntax with Language-Level Support for Currying

Because Haskell supports currying at the language level, it naturally handles functions that take multiple arguments in curried form. Currying is a technique for expressing a function that takes multiple arguments as a chain of functions each taking a single argument.

[Code 3-18] add function

add :: Int -> Int -> Int
add x y = x + y

The signature add :: Int -> Int -> Int indicates that add takes
two Ints and returns an Int. However, in Haskell, you can naturally use
this function in a curried form by default.

[Code 3-19] Partial application of add 5

addFive :: Int -> Int
addFive = add 5

addFive is the result of partially applying the add function to 5. Thus, addFive has the type Int -> Int and becomes a function that, when given a new integer argument, returns a result.

[Code 3-20] Finishing the execution of add

main :: IO ()
main = do
  print (addFive 10) -- Output: 15
  print (add 3 7)    -- Output: 10
  print (3 `add` 7)  -- Output: 10

In [Code 3-20], addFive 10 takes the function (add 5), created by partially applying add with 5, then gives it 10 for a result of 15. The result of (add 3 7), where both arguments are applied directly, prints 10. Here, (add 3) is the function you get from partially applying add to 3, and applying 7 to that function yields 10.

Also, Haskell allows function calls in infix notation, so (3 `add` 7) produces the same result as (add 3 7). (add 3 7) is the prefix form of the function call.

In Haskell, all function calls are essentially curried. For example, add
:: Int -> Int -> Int is actually the same as add :: Int -> (Int ->
Int). Therefore, add takes an Int and returns a new function of type (Int
-> Int). In many languages, you might need function overloading or
additional function type definitions to express this pattern, but Haskell
elegantly supports currying at the language level with very concise function
signatures.
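To make the mechanics concrete, here is a minimal TypeScript sketch of the same idea: currying written by hand, since TypeScript has no language-level support for it (the curried add below is illustrative, not code from this book's library):

```typescript
// A hand-curried add: each call consumes exactly one argument,
// mirroring Haskell's add :: Int -> (Int -> Int).
const add = (x: number) => (y: number): number => x + y;

// Partial application, analogous to Haskell's `addFive = add 5`.
const addFive = add(5);

console.log(addFive(10)); // 15
console.log(add(3)(7));   // 10
```

In Haskell this shape comes for free; in TypeScript you must opt into it by writing the function in curried form.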

The main Function and IO

All programs in Haskell start from the main function. The main
function must return a type of IO, which is a special type that can perform
input/output (IO) operations. Below is a step-by-step explanation of the
main function from [Code 3-20].

1. main :: IO () is the function’s type signature, meaning the main
function returns IO ().
2. IO is a type in Haskell that represents input/output operations.
3. () indicates that the main function does not return any particular value.
4. main = indicates the start of the function definition, with no
arguments. It’s somewhat similar to const main = () => in other
languages.
5. A do block can be used to sequentially execute multiple IO actions.
6. Each line in the do block is an IO action, executed in order from top to
bottom.

IO is a type in Haskell denoting input/output operations. Since Haskell
is a purely functional language, it uses IO to handle side effects (like
input/output) while retaining the benefits of functional purity.

IO and Side-Effect Management

Because Haskell is purely functional, every function is supposed to
return the same output for the same inputs. However, real-world
programs must perform side effects, such as user input, file IO, or network
communication.

Haskell solves this by isolating functions with side effects through the
IO type. In Haskell, if a function returns IO, it indicates at the type level that
the function may cause side effects like IO. This clearly distinguishes “pure
functions” (a -> b) from “IO functions” (a -> IO b), minimizing the
unpredictability of side effects.

Developers declare main :: IO () in Haskell to tell the language “the
program’s final result is IO.” As a result, inside main, they typically
combine many pure functions to form the program logic, but let the final
outcome be in an IO context for things like input and output. Through this
agreement—“side effects are allowed only within IO”—Haskell maintains
purity.

In summary, IO is Haskell’s way of declaring “this function may do
non-pure tasks like input/output or state changes.” This distinction strictly
separates pure functions from side-effecting functions, enhancing
predictability and safety.

Unit Type () and TypeScript’s void

In Haskell, () is a Unit type with exactly one possible value,
representing a “meaningless value,” often used to indicate that the function
returns no meaningful value. That is, () marks that the function doesn’t
return anything other than performing side effects (e.g., console output).

In TypeScript, one might use the return type void to express a similar
idea. A function that returns void returns no particular value, typically
indicating that it only has side effects.

Though Haskell’s () and TypeScript’s void differ in implementation
and static analysis detail, both serve a conceptually similar purpose: “a
function that does not return a meaningful result.” This allows developers to
see from the function’s return type whether it returns a pure calculation or
triggers side effects (like altering external state).
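As a small illustration (the function names here are hypothetical), the same distinction can be read directly from TypeScript signatures:

```typescript
// A pure calculation: the return type says it produces a value.
const square = (x: number): number => x * x;

// A side-effecting action: `void` signals "no meaningful return value",
// roughly as IO () marks an effectful computation in Haskell.
const printSquare = (x: number): void => {
  console.log(square(x));
};

printSquare(4); // prints 16
```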

head, map, filter, foldl Function Signatures

Let’s see how Haskell expresses the signatures of head, map, filter,
and foldl.

head Function Signature

[Code 3-20] head

1 head :: [a] -> a

head returns the first element of a list. Here, a is a generic type
variable that can represent any type. Haskell expresses this very succinctly,
as in [Code 3-20].

[Code 3-21] TypeScript head

1 type Head = <A>(arr: A[]) => A;
2 // or
3 type Head = <A>(iterable: Iterable<A>) => A;

In TypeScript, you could achieve the same functionality shown in
[Code 3-21] using a generic type parameter <A>. However, it is somewhat
more verbose than in Haskell, reflecting a difference in succinctness
compared to a functional language’s type notation.

map Function Signature

[Code 3-22] map

1 map :: (a -> b) -> [a] -> [b]

map applies a given function to each element of a list, returning a new
list. (a -> b) is a function type that converts an a-type value to a b-type
value.

filter Function Signature

[Code 3-23] filter

1 filter :: (a -> Bool) -> [a] -> [a]

filter takes a condition and returns a new list containing only
elements that satisfy that condition. (a -> Bool) is a function type that
takes an a-type value and returns a Bool.

foldl Function Signature

[Code 3-24] foldl

1 foldl :: (b -> a -> b) -> b -> [a] -> b

foldl is similar to the reduce function in other languages, iterating
through a list from left to right while accumulating a single value. Here, (b
-> a -> b) denotes the type signature of the accumulator function: it takes
the current accumulated value of type b and an element of type a, returning
a new accumulated value of type b. The second argument is the initial
accumulator, and the third is the list to be processed.

In Haskell, generic type variables are commonly denoted by single
letters like a and b. Thanks to this simple and consistent notation, higher-
order functions such as foldl and map are expressed elegantly and
concisely.
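For comparison, the foldl signature can be transcribed into TypeScript generics as follows (a sketch for illustration; the book's own reduce-style implementation may differ):

```typescript
// Haskell: foldl :: (b -> a -> b) -> b -> [a] -> b
type FoldL = <A, B>(f: (acc: B, a: A) => B, seed: B, iterable: Iterable<A>) => B;

const foldl: FoldL = (f, seed, iterable) => {
  let acc = seed;
  for (const a of iterable) {
    acc = f(acc, a); // accumulate from left to right
  }
  return acc;
};

console.log(foldl((acc, a) => acc + a, 0, [1, 2, 3, 4])); // 10
```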

Learning notation and conventions across different languages can
greatly broaden your perspective on programming languages. It makes it
easier to understand other languages or paradigms, and you’ll be more
flexible and rich in perspective when learning a new language or
discovering new patterns in existing languages.

Function Composition – The . Operator and the $ Operator

The . operator composes functions; for instance, f . g . h is
equivalent to (\x -> f (g (h x)))—meaning that h is applied first, then
its result is passed to g, and finally to f. In JavaScript, this can be written as
(x) => f(g(h(x))).

The $ operator is used for function application, controlling precedence
and immediately applying the function on its left to the argument on its right. f $ g $
h x is the same as f (g (h x)). In JavaScript, that’s f(g(h(x))). The $
operator reduces parentheses, so rather than writing multiple nested
parentheses like f (g (h x)), you can write f $ g $ h x with no
parentheses but the same meaning.

You can mix . and $ together, like f . g . h $ x, which in
JavaScript is f(g(h(x))).

[Code 3-25] . and $

1 f :: Int -> Int
2 f x = x + 1
3
4 g :: Int -> Int
5 g x = x * 2
6
7 h :: Int -> Int
8 h x = x - 3
9
10 main :: IO ()
11 main = do
12 let result = f . g . h $ 5
13 print result -- Output: 5

In JavaScript, this would translate to f(g(h(5))).
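The right-to-left behavior of . can be imitated in TypeScript with a small helper (compose is a hypothetical utility written only for this comparison):

```typescript
const f = (x: number) => x + 1;
const g = (x: number) => x * 2;
const h = (x: number) => x - 3;

// compose(f, g, h) builds x => f(g(h(x))): reduceRight applies
// the rightmost function first, just like Haskell's `.` operator.
const compose = <T>(...fns: Array<(x: T) => T>) => (x: T): T =>
  fns.reduceRight((acc, fn) => fn(acc), x);

console.log(compose(f, g, h)(5)); // 5, same as f(g(h(5)))
```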

sumOfSquaresOfOddNumbers

You can write the sumOfSquaresOfOddNumbers function in Haskell like so:

[Code 3-26] . and $

1 square :: Int -> Int
2 square x = x * x
3
4 sumOfSquaresOfOddNumbers :: Int -> [Int] -> Int
5 sumOfSquaresOfOddNumbers limit list =
6 foldl (+) 0 . take limit . map square . filter odd $ list
7
8 main :: IO ()
9 main = print (sumOfSquaresOfOddNumbers 3 [1, 2, 3, 4, 5, 6, 7, 8, 9])
10 -- Output: 35

In this code, the expression foldl (+) 0 . take limit . map
square . filter odd $ list is composed of function composition and
application. It can be read from right to left to understand the flow:

1. The sumOfSquaresOfOddNumbers function passes list to the functions
on its left using the $ operator.
2. filter odd filters the list, retaining only odd elements.
3. map square squares the remaining odd numbers.
4. take limit selects only the specified number of squared values.
5. foldl (+) 0 sums the selected values; foldl starts from the initial
value 0 and accumulates left-to-right.

Pipeline Style – The & Operator

Haskell also supports a pipeline style for function composition. Instead
of using the composition operator ., you can use the forward function
application operator &, which is imported from the Data.Function module.

[Code 3-27] &

1 import Data.Function ((&))
2
3 square :: Int -> Int
4 square x = x * x
5
6 sumOfSquaresOfOddNumbers :: Int -> [Int] -> Int
7 sumOfSquaresOfOddNumbers limit list =
8 list
9 & filter odd
10 & map square
11 & take limit
12 & foldl (+) 0
13
14 main :: IO ()
15 main = print (sumOfSquaresOfOddNumbers 3 [1, 2, 3, 4, 5, 6, 7, 8, 9])
16 -- Output: 35

1. import Data.Function ((&)) brings in the & operator.
2. In the sumOfSquaresOfOddNumbers function, list is the input list.
3. & filter odd filters the list, leaving only odd elements.
4. & map square squares the remaining odd numbers.
5. & take limit selects only the specified number of squared values.
6. & foldl (+) 0 calculates the sum of the selected values.
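The forward flow of & can likewise be sketched in TypeScript with a pipe helper that threads a value through functions left to right (pipe is a hypothetical utility, not code from this book's library):

```typescript
// pipe(x, f, g) evaluates g(f(x)): the value flows forward, like `&`.
const pipe = <T>(x: T, ...fns: Array<(x: T) => T>): T =>
  fns.reduce((acc, fn) => fn(acc), x);

const result = pipe(
  5,
  (x) => x - 3, // 2
  (x) => x * 2, // 4
  (x) => x + 1  // 5
);
console.log(result); // 5
```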

Error Handling via Either

As a purely functional language, Haskell prefers to explicitly represent
error conditions through its type system rather than handling exceptions in
the traditional way (e.g., try-catch). For this purpose, one of the most
commonly used types is Either. The Either type distinguishes between
success (Right) and failure (Left), clearly indicating at compile time that
error handling is necessary. This approach greatly reduces runtime
exceptions and improves both code stability and readability.

(div 10 0) – Exception Occurrence

Haskell’s standard library function div throws an exception if you try
dividing by zero. In the following example, div 10 0 triggers a runtime
error (“divide by zero”).

[Code 3-28] Dividing by zero example

1 main :: IO ()
2 main = do
3 print (div 10 2) -- Output: 5
4 print (div 10 0) -- Exception: divide by zero

Running this code causes a “divide by zero” exception at div 10 0.

Safe Division

We can handle zero-division more safely using the Either type.
Either returns a successful result in Right and an error condition in Left.

[Code 3-29] Pattern matching, Left, Right

1 safeDiv :: Int -> Int -> Either String Int
2 safeDiv _ 0 = Left "Cannot divide by zero."
3 safeDiv x y = Right (div x y)

In the above example, safeDiv returns Left "Cannot divide by
zero." when the second argument is 0. Thus, we represent errors explicitly
rather than throwing a runtime exception.

Pattern Matching

The safeDiv function in [Code 3-29] uses Haskell’s pattern matching
syntax to branch execution based on argument patterns.

1. safeDiv _ 0 = Left "Cannot divide by zero."

_ is a wildcard pattern, meaning it can match any value.
0 matches the case when the second argument is 0.
This pattern matching returns Left "Cannot divide by zero."
when the second argument is 0.

2. safeDiv x y = Right (div x y)

This clause is executed when the previous pattern does not match.
If the second argument is not 0, it returns the normal division
result wrapped in Right.

Haskell’s pattern matching enables writing concise and intuitive code.
In TypeScript terms, it’s as if you combined function overloading, if
statements, type guards, type narrowing, and parameter destructuring all in
one. Haskell implements its expression-based philosophy through pattern
matching elegantly.

[Code 3-30] Example usage of safeDiv

1 main :: IO ()
2 main = do
3 print (safeDiv 10 2) -- Output: Right 5
4 print (safeDiv 10 0) -- Output: Left "Cannot divide by zero."

In [Code 3-30], you see how safeDiv uses pattern matching on the
second argument to handle a zero divisor clearly. When the second
argument is 0, it returns Left "Cannot divide by zero."; otherwise, it
returns a normal division result in Right.

Merely seeing Right and Left might not immediately reveal why we
wrap them this way. The following example shows how to handle Either
values differently depending on whether they are Right or Left, providing
deeper insight into Haskell’s philosophy of explicit error handling rather
than runtime exceptions.

[Code 3-31] Handling Either values with pattern matching

1 processResult :: Either String Int -> String
2 processResult (Left errMsg) = "Error: " ++ errMsg
3 processResult (Right value) = "Result: " ++ show value
4
5 main :: IO ()
6 main = do
7 let result1 = safeDiv 10 2
8 let result2 = safeDiv 10 0
9 putStrLn (processResult result1) -- Output: Result: 5
10 putStrLn (processResult result2) -- Output: Error: Cannot divide by zero.

In [Code 3-31], the patterns (Left errMsg) and (Right value)
destructure the internal value of an Either. In TypeScript terms, this is
conceptually similar to destructuring an object.

1. The processResult function takes an Either String Int as its argument.
When given a Left value, it returns an error string; when given a
Right value, it returns a normal result string.
2. The pattern (Left errMsg) indicates an error state, combining
"Error: " with errMsg.
3. The pattern (Right value) indicates a normal value, using the show
function to convert the integer to a string and combining it with
"Result: ".
4. In main, the results of safeDiv are assigned to result1 and result2,
then passed to processResult, and the resulting strings are printed
using putStrLn. result1 which is a successful Right 5, prints
“Result: 5”. result2 is Left "Cannot divide by zero.", printing
“Error: Cannot divide by zero.”

By using the Either type, Haskell can explicitly distinguish between
success and failure, handling errors safely through types rather than runtime
exceptions. Haskell also provides a type called Maybe for safely handling
situations where a value might not exist; this is further discussed in Chapter
3.4.
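For comparison, an Either-like shape can be modeled in TypeScript with a discriminated union (this encoding is an illustration, not a built-in type):

```typescript
// A minimal Either: Left carries an error, Right carries a result.
type Either<L, R> =
  | { tag: 'Left'; error: L }
  | { tag: 'Right'; value: R };

// Floor division mirrors Haskell's div for these inputs.
const safeDiv = (x: number, y: number): Either<string, number> =>
  y === 0
    ? { tag: 'Left', error: 'Cannot divide by zero.' }
    : { tag: 'Right', value: Math.floor(x / y) };

// Narrowing on `tag` plays the role of Haskell's pattern matching.
const processResult = (r: Either<string, number>): string =>
  r.tag === 'Left' ? `Error: ${r.error}` : `Result: ${r.value}`;

console.log(processResult(safeDiv(10, 2))); // Result: 5
console.log(processResult(safeDiv(10, 0))); // Error: Cannot divide by zero.
```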

We have now taken a brief look at Haskell. Learning and comparing
different languages’ solutions provides insights that help us use modern
languages more effectively, since modern languages often learn from and
adopt each other’s ideas.

3.3 Taking a Closer Look at Lazy Evaluation


In this chapter, we will closely examine the actual execution order of
iterators—data structures that support lazy evaluation. Then, keeping in
mind these execution orders and their efficiency, we will implement higher-
order functions such as find, every, and some to further deepen our
understanding of lazy evaluation.
The Execution Order of Nested Iterators — Verifying with Generators

In [Code 3-32], we added logging to the functions used in FxIterable.
By leaving logs inside the while loops of each function, we can
meticulously track the execution order when combining functions like take,
map, and filter to produce nested iterators. Let’s look at the code in detail
and try predicting the actual log output order.

[Code 3-32] The Execution Order of Lazy Evaluation

1 function* filter<A>(f: (a: A) => boolean, iterable: Iterable<A>): IterableIterator<A> {
2 const iterator = iterable[Symbol.iterator]();
3 while (true) {
4 console.log('filter');
5 const { value, done } = iterator.next();
6 if (done) break;
7 if (f(value)) yield value;
8 }
9 }
10
11 function* map<A, B>(f: (a: A) => B, iterable: Iterable<A>): IterableIterator<B> {
12 const iterator = iterable[Symbol.iterator]();
13 while (true) {
14 console.log('map');
15 const { value, done } = iterator.next();
16 if (done) break;
17 yield f(value);
18 }
19 }
20
21 function* take<A>(limit: number, iterable: Iterable<A>): IterableIterator<A> {
22 const iterator = iterable[Symbol.iterator]();
23 while (true) {
24 console.log('take limit:', limit);
25 const { value, done } = iterator.next();
26 if (done) break;
27 yield value;
28 if (--limit === 0) break;
29 }
30 }
31
32 const iterable = fx([1, 2, 3, 4, 5])
33 .filter(a => a % 2 === 1)
34 .map(a => a * a)
35 .take(2);
36
37 for (const a of iterable) {
38 console.log('result:', a);
39 }
40 // ?
41 // ?

Which order might the logs print in [Code 3-32]?

(1)

1 // filter
2 // filter
3 // filter
4 // map
5 // map
6 // take limit: 2
7 // take limit: 1
8 // result: 1
9 // result: 9

(2)

1 // filter
2 // map
3 // take limit: 2
4 // result: 1
5 // filter
6 // filter
7 // map
8 // take limit: 1
9 // result: 9

You might guess logs appear like (1) or (2). Let’s see if the real result
matches your guess.

[Code 3-33] The Answer Revealed

1 for (const a of iterable) {
2 console.log('result:', a);
3 }
4 // take limit: 2
5 // map
6 // filter
7 // result: 1
8 // take limit: 1
9 // map
10 // filter
11 // filter
12 // result: 9

The actual output is neither (1) nor (2). It’s not that all filter logs
appear first, then map, nor do we see filter -> map -> take in sequence
for each item.

Instead, we see take limit: 2 logs first, followed by map, then
filter. This happens because, once we combine the iterators up to take,
the first next() call from the for...of loop triggers the while loop in
take. Inside that loop, it logs "take limit: 2", then immediately calls the
next() method on the iterator it received (which is the map iterator). That,
in turn, calls the while loop in map, logging "map", which then calls the
next() on the filter iterator, logging "filter".

Looking in More Detail

Let’s modify [Code 3-33] as shown in [Code 3-34], and then run it
again. It’s also a good idea to set breakpoints at desired points in the code to
inspect the execution order.

[Code 3-34] Adding Logs

1 function* filter<A>(f: (a: A) => boolean, iterable: Iterable<A>): IterableIterator<A> {
2 const iterator = iterable[Symbol.iterator]();
3 while (true) {
4 console.log('filter'); // (5)
5 const { value, done } = iterator.next(); // (5)
6 if (done) break;
7 console.log('filter value f(value):', value, f(value)); // (6)
8 if (f(value)) yield value; // (9)
9 }
10 }
11
12 function* map<A, B>(f: (a: A) => B, iterable: Iterable<A>): IterableIterator<B> {
13 const iterator = iterable[Symbol.iterator]();
14 while (true) {
15 console.log('map'); // (4)
16 const { value, done } = iterator.next(); // (4)
17 if (done) break;
18 console.log('map value f(value):', value, f(value)); // (7)
19 yield f(value);
20 }
21 }
22
23 function* take<A>(limit: number, iterable: Iterable<A>): IterableIterator<A> {
24 const iterator = iterable[Symbol.iterator]();
25 while (true) {
26 console.log('take limit:', limit); // (2)
27 const { value, done } = iterator.next(); // (3)
28 if (done) break;
29 console.log('take value:', value);
30 yield value;
31 if (--limit === 0) break;
32 }
33 }
34
35 const iterable = fx([1, 2, 3, 4, 5])
36 .filter(a => a % 2 === 1)
37 .map(a => a * a)
38 .take(2);
39
40 for (const a of iterable) {
41 console.log('result:', a);
42 console.log('---');
43 }
44 // take limit: 2
45 // map
46 // filter
47 // filter value f(value): 1 true
48 // map value f(value): 1 1
49 // take value: 1
50 // result: 1
51 // ---
52 // take limit: 1
53 // map
54 // filter
55 // filter value f(value): 2 false
56 // filter
57 // filter value f(value): 3 true
58 // map value f(value): 3 9
59 // take value: 9
60 // result: 9
61 // ---

In [Code 3-34], the execution order is explained as follows:

1. When the for...of loop calls next() for the first time, the while loop
in the take function starts.
2. The take function logs "take limit: 2" along with the limit value.
3. The take function then calls its internal iterator.next(), which in
turn calls the next() method of the map function.
4. The map function logs "map", and calls its internal iterator.next(),
which then calls the next() method of the filter function.
5. The filter function logs "filter", then calls its internal
iterator.next().
6. The first value, 1, reaches the filter function and satisfies the
condition, so it logs filter value f(value): 1 true.
7. The map function logs "map value f(value): 1 1", then take logs
"take value: 1", and outputs result: 1.
8. In the second iteration, it logs "take limit: 1", followed by "map",
then "filter", and then logs "filter value f(value): 2 false"
because 2 is not odd, so the loop continues.
9. Next, it processes the value 3, logging "filter value f(value): 3
true", "map value f(value): 3 9", "take value: 9", and finally
outputs result: 9.

Examining It with Direct Iterators

To understand why this execution order occurs, we now examine the
return values of the map and take functions by implementing them directly
as object-oriented iterators. For clarity, all type annotations have been
removed, and the code is presented in pure JavaScript to facilitate
understanding of the actual runtime behavior.

[Code 3-35] map, filter, take

1 function map(f, iterable) {
2 const iterator = iterable[Symbol.iterator]();
3 return {
4 next() {
5 console.log('map'); // (3)
6 const { done, value } = iterator.next(); // (4)
7 console.log('map value f(value):', value, f(value)); // (5)
8 return done
9 ? { done, value }
10 : { done, value: f(value) }; // (6)
11 },
12 [Symbol.iterator]() {
13 return this;
14 }
15 };
16 }
17
18 function take(limit, iterable) {
19 const iterator = iterable[Symbol.iterator]();
20 return {
21 next() {
22 if (limit === 0) return { done: true }; // (3-1)
23 console.log('take limit:', limit); // (1)
24 const { done, value } = iterator.next(); // (2)
25 if (done) return { done, value };
26 limit--;
27 console.log('take value:', value); // (7)
28 return { done, value };
29 },
30 [Symbol.iterator]() {
31 return this;
32 }
33 };
34 }
35
36 const mapped = map(a => a * a, [10, 20, 30]);
37 const taked = take(2, mapped);
38
39 console.log(taked.next());
40 // take limit: 2
41 // map
42 // map value f(value): 10 100
43 // take value: 100
44 // { done: false, value: 100 }
45
46 console.log(taked.next());
47 // take limit: 1
48 // map
49 // map value f(value): 20 400
50 // take value: 400
51 // { done: false, value: 400 }
52
53 console.log(taked.next());
54 // { done: true }

When calling taked.next() the first time:

1. The next method in the iterator returned by take is called, printing
"take limit: 2".
2. Because its iterator is mapped, it calls mapped.next().
3. The map iterator’s next() logs "map".
4. It calls iterator.next(), retrieving the next array value (here 10).
5. We see "map value f(value): 10 100".
6. So mapped.next() returns { done: false, value: 100 }.
7. Then take value: 100 is printed.
8. Finally, taked.next() returns { done: false, value: 100 }.

Second call to taked.next():

1. The take iterator’s next() prints "take limit: 1".
2. It calls mapped.next().
3. The map iterator’s next() logs "map".
4. It calls iterator.next(), retrieving 20.
5. "map value f(value): 20 400" is printed.
6. So mapped.next() returns 400.
7. Then take value: 400 is printed.
8. Finally, taked.next() returns { done: false, value: 400 }.

Third call:

1. In the third call to taked.next(), since the limit is now 0, it
immediately returns { done: true }.
2. We observe that map was used only twice, and the function to square
values was only evaluated twice as well.

This code demonstrates how nested lazy iterators are evaluated.
Looking at the implementation of the iterator returned by the take function,
you can see that within the { next() { … } } block, it calls
iterator.next() again. It literally nests iterators, as in { next() {
return iterator.next(); } }.

Simplifying Our View

With this perspective, let’s represent the core part of [Code 3-34] as
code:

[Code 3-36] A Simplified Look at Nested Iterators

1 const filtered = {
2 next() {
3 return iterator.next();
4 }
5 }
6
7 const mapped = {
8 next() {
9 return filtered.next();
10 }
11 }
12
13 const taked = {
14 next() {
15 return mapped.next();
16 }
17 };
18
19 taked.next();

When taked.next() is invoked, it proceeds in the order take → map
→ filter to obtain a result, and then flows back in the order filter →
map → take. This is why the comments in [Code 3-34] appear in that
sequence.

[Code 3-37] From Code 3-34

1 // take limit: 2
2 // map
3 // filter
4 // filter value f(value): 1 true
5 // map value f(value): 1 1
6 // take value: 1
7 // result: 1
8 // ---
9 // take limit: 1
10 // map
11 // filter
12 // filter value f(value): 2 false
13 // filter
14 // filter value f(value): 3 true
15 // map value f(value): 3 9
16 // take value: 9
17 // result: 9
18 // ---

We’ve now seen how a nested, delayed iterator executes. Through this
process, we have gained a deeper understanding of the detailed workings of
lazy evaluation and iterators. Next, keeping these mechanics in mind, we’ll
further broaden our knowledge by implementing higher-order functions like
find, every, and some—functions that might need to stop iteration midway
—thus deepening our understanding of lazy evaluation and functional
higher-order functions.

3.4 Generator:Iterator:LISP – Lazy Evaluation and Safe Composition

In this chapter, we want to extend the viewpoint that
Generator:Iterator:LISP can fully replace each other by implementing
higher-order functions like find, every, and some solely through
combinations of list processing functions. We’ll also delve into how
function composition, safe value access, dealing with absent values, and
language-level proposals address these topics.

The find Function Signature

map and filter are examples of functions that produce a lazy iterator
(a transform stage) so list processing can continue. By contrast, find is a
“result-producing” function that evaluates a lazy iterator. It traverses an
iterable, checking each element with f, and returns the first element for
which f returns true. If no element satisfies the condition, it returns
undefined.

Below is a partial representation of that behavior in the find function’s
signature:

[Code 3-38] The TypeScript find Function Signature

1 type Find = <A>(f: (a: A) => boolean, iterable: Iterable<A>) => A | undefined;

This signature indicates that find takes (a: A) => boolean (a
function from A to boolean) and an Iterable<A>, returning either an A-type
value or undefined. As mentioned before, being familiar with reading such
function signatures will be very helpful in functional programming.

Now, to expand our perspective on “functions of type find,” let’s look
at Haskell’s find function signature:

[Code 3-39] Haskell’s find Function Signature

1 find :: (a -> Bool) -> [a] -> Maybe a


This signature means find takes (a -> Bool) and a list [a], returning
a Maybe a. (a -> Bool) is a function from a to Bool. Here, a is a generic
type, and the signature that expresses such generic types concisely is
elegant. Also, thanks to Haskell’s default currying, notation like function
overloading for different argument counts is unnecessary, making Haskell’s
signatures even more concise.

And the final return value, Maybe a, is a type that holds Just a
when a first element satisfying the condition exists, and Nothing when it
doesn’t. In Haskell, situations similar to A | undefined in TypeScript-style
languages are handled with Maybe values, which enables safe function
composition.

find in Haskell and Safe Composition


[Code 3-40] An Example of Using find in Haskell

1 import Data.Maybe (fromMaybe)
2 import Data.List (find)
3
4 main :: IO ()
5 main = do
6 let result = fromMaybe 0 (find even [1, 3, 5])
7 print result -- Output: 0

This code is a simple example of using find in Haskell:

1. main is Haskell’s program entry point. main has type IO (), which
allows I/O operations.
2. fromMaybe takes two arguments. The first argument passed is 0, and by
currying, it receives another argument. The value evaluated inside
parentheses is passed as the second argument.
3. find also evaluates by taking even (a function of type (a -> Bool)
that checks if a number is even) as the first argument, and [1, 3, 5]
as the second argument.
4. The result of find even [1, 3, 5] is of type Maybe, returning
Nothing because there are no even numbers in the list.
5. The fromMaybe function returns the default value 0 when it receives
Nothing.
6. The print function outputs the result value.

This example demonstrates how to use the find function and Maybe
type to find elements in a list that satisfy a condition and handle them
safely. The find function takes a function of type (a -> Bool) and a list of
type [a] to return a value of type Maybe a, and the fromMaybe function is
useful for handling Maybe values by providing a default value.

Programming languages offer different solutions for handling optional
situations like empty collection processing in reduce or potential absence
of values in find. For example, Haskell clearly expresses these situations
with declarative types and values, while TypeScript offers solutions through
operators like ?., !, and ??. By comparing these approaches across various
languages, we can understand the solutions proposed by our primary
language from a broader and deeper perspective.
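As a small TypeScript counterpart of fromMaybe 0 (find even [1, 3, 5]), the ?? operator can supply a default when a lookup yields undefined (using Array.prototype.find here purely for illustration):

```typescript
const isEven = (a: number) => a % 2 === 0;

// Array.prototype.find returns number | undefined,
// the analogue of Haskell's Maybe a.
const found = [1, 3, 5].find(isEven);

// `??` plays the role of fromMaybe: fall back to 0 when absent.
const result = found ?? 0;
console.log(result); // 0
```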

Rethinking Lazy Evaluation and List Processing via find

Returning to TypeScript, let’s build a find function that follows the
iterable protocol.

[Code 3-41] The Imperative find

1 function find<A>(f: (a: A) => boolean, iterable: Iterable<A>): A | undefined {
2 const iterator = iterable[Symbol.iterator]();
3 while (true) {
4 const { value, done } = iterator.next();
5 if (done) break;
6 if (f(value)) return value;
7 }
8 return undefined;
9 }
10
11 const result = find(a => a > 2, [1, 2, 3, 4]);
12 console.log(result);
13 // 3

The find function traverses a given iterable and returns the first
element that satisfies the predicate f. If no such element is found, it returns
undefined. In TypeScript, you can omit return undefined; because a
function that doesn’t explicitly return a value implicitly returns undefined.

1. Create an iterator object to prepare for iteration over the iterable.
2. It enters an infinite loop using while (true), calling the
iterator.next() method.
3. Destructure the result into { value, done }.
4. If done is true, the iteration ends.
5. If f(value) is true, return value.
6. If no matching element is found by the end of the loop, it returns
undefined.
7. result ends up being 3.

We now want to reimplement find in a functional rather than an
imperative style. Existing functions like map, filter, and take are either
written imperatively using generators, or follow an object-oriented
approach where one iterator invokes methods on another. If tail call
optimization were available, these functions could also be implemented in a
functional style. However, for map, filter, and take, the imperative
approach often leads to code that is easier to understand.

In contrast, functions like find, every, and some—which return final
results—can be written in a purely functional way by composing existing
higher-order functions such as map, filter, and reduce. This approach
improves both clarity and expressiveness.
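As a preview of that style (a sketch under the assumption of simple map and reduce helpers; the versions developed later in the book may differ and may short-circuit), every and some can be built purely by composition:

```typescript
function* map<A, B>(f: (a: A) => B, iterable: Iterable<A>): IterableIterator<B> {
  for (const a of iterable) yield f(a);
}

const reduce = <A, B>(f: (acc: B, a: A) => B, acc: B, iterable: Iterable<A>): B => {
  for (const a of iterable) acc = f(acc, a);
  return acc;
};

// every: map each element to a boolean, then fold the booleans with &&.
const every = <A>(f: (a: A) => boolean, iterable: Iterable<A>): boolean =>
  reduce((a, b) => a && b, true, map(f, iterable));

// some: the same composition with || and a false seed.
const some = <A>(f: (a: A) => boolean, iterable: Iterable<A>): boolean =>
  reduce((a, b) => a || b, false, map(f, iterable));

console.log(every((a: number) => a % 2 === 1, [1, 3, 5])); // true
console.log(some((a: number) => a % 2 === 0, [1, 3, 5]));  // false
```

Note that this naive composition always consumes the entire input; handling early termination is exactly the refinement discussed next.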

Let’s compare the imperative find shown in [Code 3-41] with the lazy
filter below to explore the differences between these styles.

[Code 3-42] Lazy filter

1 function* filter<A>(f: (a: A) => boolean, iterable: Iterable<A>): IterableIterator<A> {
2 const iterator = iterable[Symbol.iterator]();
3 while (true) {
4 const { value, done } = iterator.next();
5 if (done) break;
6 if (f(value)) yield value;
7 }
8 }

Comparing the imperative version of find with the filter shown
above, the only differences come down to * and yield.

1. filter is a generator (function*) that uses yield to return each value
for which f(value) is true, and continues iterating until the input is
fully consumed.
2. find is an ordinary function that uses return to return the first value for
which f(value) is true, immediately exiting both the loop and the
function.

Unlike lazy filters, Array.prototype.filter does not support lazy
evaluation—it traverses the entire array and returns a new array containing
all elements that evaluate to true. In contrast, a lazy version of filter
produces an iterator that is only consumed as much as the consumer
demands. If we call next() only once on such a lazy filter, its behavior
and efficiency are effectively the same as find, because a single yield in
that case behaves just like a return.

Now, here are three implementations of find written in a functional
style:

[Code 3-43] A First Functional Implementation of find

1 function find<A>(f: (a: A) => boolean, iterable: Iterable<A>): A | undefined {
2 return filter(f, iterable).next().value;
3 // Alternatively:
4 // const [head] = filter(f, iterable);
5 // return head;
6 }
7
8 // [const result: number | undefined]
9 const result = find(a => a > 2, [1, 2, 3, 4]);
10 console.log(result);
11 // 3
12
13 const isOdd = (a: number) => a % 2 === 1;
14
15 const result2 = find(isOdd, [2, 4, 6]); // [const result2: number |
undefined]
16 console.log(result2);
17 // undefined

In this first functional implementation, we use filter to prepare a lazy sequence of elements that satisfy the condition. Then, by calling next() only once on the returned iterator, we immediately retrieve the first matching element—at which point both the loop inside filter and the function itself terminate.

Alternatively, as shown in the commented example, we can use destructuring—[head] =—to extract the first element. Since the filter function supports lazy evaluation, this destructuring triggers only a single call to next() on the underlying iterator. As a result, it behaves just as efficiently as the imperative version of find shown in [Code 3-41].

[Code 3-44] A Second Functional Implementation of find

const head = <A>(
  iterable: Iterable<A>
): A | undefined => iterable[Symbol.iterator]().next().value;
// Or: const head = <A>([a]: Iterable<A>): A | undefined => a;

const find = <A>(
  f: (a: A) => boolean,
  iterable: Iterable<A>
): A | undefined => head(filter(f, iterable));

// [const result: number | undefined]
const result = find(a => a > 2, [1, 2, 3, 4]);
console.log(result);
// 3

const isOdd = (a: number) => a % 2 === 1;

const result2 = find(isOdd, [2, 4, 6]); // [const result2: number | undefined]
console.log(result2);
// undefined

In this second approach, we define a helper function called head, which returns the first element of a given iterable. The find function uses filter to lazily filter the input, then delegates to head to retrieve the first matching element. This method is similar to the first one, but improves modularity by separating concerns—head encapsulates the logic for extracting the first item. By defining head as a standalone function, we make the code more reusable and clarify the distinct roles of each function.
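Because head is written against the Iterable protocol rather than arrays, it works on any iterable source—even an infinite one, since it only ever pulls a single value. A quick sketch (the extra sources here are our own illustrations):

```typescript
const head = <A>(iterable: Iterable<A>): A | undefined =>
  iterable[Symbol.iterator]().next().value;

console.log(head([10, 20, 30]));       // 10
console.log(head('abc'));              // a
console.log(head(new Set([7, 8, 9]))); // 7
console.log(head([]));                 // undefined

// Safe even on an infinite generator: only one value is ever produced.
function* naturals(): IterableIterator<number> {
  let n = 1;
  while (true) yield n++;
}
console.log(head(naturals())); // 1
```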

[Code 3-45] Revisiting FxIterable

class FxIterable<A> {
  constructor(private iterable: Iterable<A>) {}

  // ... omitted ...

  filter(f: (a: A) => boolean): FxIterable<A> {
    return fx(filter(f, this));
  }

  to<R>(converter: (iterable: Iterable<A>) => R): R {
    return converter(this.iterable);
  }
}

[Code 3-46] A Third Functional Implementation of find using FxIterable

const find = <A>(f: (a: A) => boolean, iterable: Iterable<A>): A | undefined =>
  fx(iterable)
    .filter(f)
    .to(head);

// [const result: number | undefined]
const result = find(a => a > 2, [1, 2, 3, 4]);
console.log(result);
// 3

const isOdd = (a: number) => a % 2 === 1;

const result2 = find(isOdd, [2, 4, 6]); // [const result2: number | undefined]
console.log(result2);
// undefined

This third approach uses the FxIterable class to implement find in a fluent, chainable style. We create an instance using the fx function, apply the filter method to prepare the sequence, and finally pass the head function to the to method to return the first matching element.

All three functional versions of find provide the same efficiency as the
imperative approach in [Code 3-41], but with more concise and readable
code. Compared to the imperative version, each of these functional
implementations gives us a strong sense of confidence—they are so
declarative and straightforward that we almost don’t need to test them to
trust they work correctly.

At the beginning of Chapter 3, we made the following observation:

Ultimately, we can create an iterator in three ways, each fully interchangeable 1:1:1:

1. Imperative (IP) - Generators producing an iterator
2. Object-oriented (OOP) - Implementing an iterator class manually
3. Functional (FP) - Composing list processing functions to build an iterator

And near the end of chapter 1.2:

We’ll try coding these iterators in an object-oriented style, imperative style, and functional style, exploring how they are 1:1:1 replacements.

We’ve demonstrated that higher-order functions like find can be built purely by combining list processing functions rather than imperative code, and that the functional approach is just as efficient. Indeed, using a functional approach to build find is highly convenient.

And code from each paradigm can fully replace each other or be
mixed. A multiparadigm language user can, even in a single function,
choose or combine the best-suited paradigm for the situation.

Safe Composition in TypeScript: ? ?? !

Returning to our earlier example of using the find function and a Maybe type in Haskell to safely handle exceptional cases, here’s how TypeScript addresses such scenarios:
[Code 3-47] What Is Safe Composition? ? ?? !

const desserts = [
  { name: 'Chocolate', price: 5000 },
  { name: 'Latte', price: 3500 },
  { name: 'Coffee', price: 3000 }
];

// (1) Using the optional chaining operator (?.) for safe access
// to the `name` property
const dessert = find(({ price }) => price < 2000, desserts);
console.log(dessert?.name ?? 'T^T');
// T^T

// (2) Using the non-null assertion operator (!) to inform the language that
// we always expect to find something
const dessert2 = find(({ price }) => price < Infinity, desserts)!;
console.log(dessert2.name);
// Chocolate

In situations like these, TypeScript can use the optional chaining operator ?. to enable safe composition, and conversely, use the non-null assertion operator ! to explicitly propagate an error if the value doesn’t exist.

When you use method (1), optional chaining, if the actual value is
missing, it will be treated as undefined without causing a runtime error. In
contrast, method (2), non-null assertion, outright assumes the value exists,
so if the value is in fact null or undefined, a runtime error may occur.
Does that mean method (2) should be avoided? Or is it simply a trick to
allow the code to compile?

Method (2) represents the developer telling the language, “I’ve designed this logic so that the value must exist.” In other words, “Here, null or undefined should never appear; if it does, it indicates our design is incorrect, and a runtime error should occur.” If such an error happens, the developer’s task isn’t to remove the !, but to investigate why the value couldn’t be found at runtime and fix the root cause (e.g. an issue with the API, a mistake in how data is stored in the database, or some scenario where the DOM element doesn’t exist).
TypeScript supports these operators alongside try...catch for safe
composition, error propagation, and precise error handling. Additionally,
the IDE can use the type system to suggest whether you should safely
access a value via optional chaining, or whether this situation truly calls for
a non-null assertion. By using these tools properly, you can detect potential
runtime errors at the code-writing stage and clearly indicate whether you
want safe composition or an explicit exception in the event of null or
undefined.

1. Accessing the name Property Safely with the Optional Chaining Operator (?.)

In this code, if the dessert object doesn’t exist (for example, if the
find function returns undefined), it’s accessed safely, and 'T^T' is printed
when dessert doesn’t exist. This is done by using the optional chaining
operator (?.) and the nullish coalescing operator (??) to safely retrieve
values and provide a default.

1. If dessert is undefined, dessert?.name returns undefined. Otherwise, it returns dessert.name.
2. Finally, if dessert?.name is undefined, it returns 'T^T'; otherwise, it returns the value of dessert?.name.

Code like dessert?.name acknowledges the possibility that dessert might not be found and indicates that returning 'T^T' is the intended behavior if it’s undefined.

2. Communicating Intent That We Will Always Find Something via the Non-Null Assertion Operator (!)

In the case of dessert2.name, if there’s any possibility that dessert2 is missing, it contradicts the developer’s assumption that no such scenario should arise in this program. As a result, the language is told that if this actually happens, the error must not be suppressed but rather be propagated. In other words, the absence of dessert2 is considered an error, and we detect it using the non-null assertion operator (!).
every

Let’s now implement the every function. The every function should
return true if the given function f returns true for all elements, or false
otherwise.

First, here is the function signature of every. The comment shows the
signature of Haskell’s all function, which has the same behavior:

[Code 3-48] every Function Signature

// all :: (a -> Bool) -> [a] -> Bool
function every<A>(f: (a: A) => boolean, iterable: Iterable<A>): boolean {}

This time, instead of using imperative code, we’ll tackle the problem
from a functional and list-processing perspective. In functional
programming, we often think in terms of transforming a list step by step to
derive a final result.

There are many possible methods, but here we’ll use the following
strategy:

Convert all elements of the list into boolean values, and then
Combine all those boolean values with the logical AND operator (&&)
to easily obtain the desired result.

[Code 3-49] Strategy for Implementing every

// 1. [1, 3, 5]
// 2. [isOdd(1), isOdd(3), isOdd(5)]
// 3. (true && true && true)

There are various ways to implement every, but the approach suggested here is appealing because it can be applied in virtually any language. This method does not depend on language-specific or data-structure-specific features or syntax. Instead, it simply uses the logical AND operator (&&) supported by most languages. This allows us to implement the every function in concise, easily understandable code without being tied to a particular language.

Now let’s convert the above plan into code. In [Code 3-50], you can
see that the plan from [Code 3-49] has indeed been directly transferred into
the implementation.

[Code 3-50] Implementing every

function every<A>(f: (a: A) => boolean, iterable: Iterable<A>): boolean {
  return fx(iterable)
    .map(f)
    .reduce((a, b) => a && b, true); // [a: boolean], [b: boolean]
}

console.log(every(isOdd, [1, 3, 5]));
// true
console.log(every(isOdd, [1, 2, 5]));
// false

The every function is created by chaining together map(f) and reduce((a, b) => a && b, true).

1. We directly pass the function f received by every to the first map.
2. We then provide (a && b) as the accumulator function to reduce, producing the same effect as (true && true && true).

Typically, when using reduce, we supply an accumulator function that adds (+), subtracts (-), or merges objects ({...a, ...b}) or arrays ([...a, ...b]), but in this scenario we are using the logical AND operator (&&) to accumulate the values. This is useful to check whether all elements satisfy a certain condition. In essence, reduce can be used to aggregate all elements with any operation, just like adding or subtracting.
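To illustrate this point, here is a minimal standalone reduce (assumed equivalent in behavior to the one used throughout the chapter) aggregating with several different operators, including && and ||:

```typescript
// A minimal reduce over any iterable, shown for illustration.
const reduce = <A, Acc>(
  f: (acc: Acc, a: A) => Acc,
  acc: Acc,
  iterable: Iterable<A>
): Acc => {
  for (const a of iterable) acc = f(acc, a);
  return acc;
};

console.log(reduce((a, b) => a + b, 0, [1, 2, 3]));                  // 6
console.log(reduce((a, b) => a && b, true, [true, true, true]));     // true
console.log(reduce((a, b) => a || b, false, [false, true, false]));  // true
console.log(reduce((a, b) => Math.max(a, b), -Infinity, [3, 1, 4])); // 4
```

Any binary operation that folds two values into one can serve as the accumulator; && and || are simply that idea applied to booleans.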

Moreover, you might wonder why we are not implementing it directly in one pass with something like (a && f(b)) in a single reduce call, and instead are splitting the logic into map and reduce. However, using f(b) in a single reduce and splitting it into map and reduce both have the same time complexity.

For example, fx(list).reduce((a, b) => a && f(b), true) evaluates f(b) on each element during a single pass, taking O(n) time. Meanwhile, fx(list).map(f).reduce((a, b) => a && b, true) might look like “map then reduce,” but because of lazy iterators, each element is mapped immediately before it’s consumed by reduce, resulting in just one pass of O(n) as well.
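We can observe this interleaving directly with a trace log. The sketch below uses a standalone lazy map (assumed equivalent to the chapter’s) and a plain loop standing in for reduce:

```typescript
// A standalone lazy map, equivalent in spirit to the chapter's version.
function* map<A, B>(f: (a: A) => B, iterable: Iterable<A>): IterableIterator<B> {
  for (const a of iterable) yield f(a);
}

const log: string[] = [];
const double = (n: number) => {
  log.push(`map ${n}`);
  return n * 2;
};

let acc = 0;
for (const b of map(double, [1, 2, 3])) {
  log.push(`reduce ${b}`);
  acc += b;
}

console.log(acc); // 12
console.log(log);
// ['map 1', 'reduce 2', 'map 2', 'reduce 4', 'map 3', 'reduce 6']
// — mapping and reducing interleave in a single O(n) pass.
```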

As another related case, if we were using a normal array rather than a lazy iterator—like array.map(f).reduce(...)—the array would first be fully mapped (producing a new array), and then reduced, effectively traversing the array twice. However, each pass is still O(n), so the overall complexity remains O(n). Even then, using a lazy iterator is more memory efficient since it does not require creating the intermediate array in memory.

In functional programming, composing functions in this manner increases simplicity and readability, and it can prove advantageous in asynchronous programming as well. Hence this approach is recommended.

some

We can implement the some function in a similar way. The some function should return true if the given function f returns true for at least one element, or false if the function returns false for all elements.

Here is the function signature of some. The comment shows the signature of Haskell’s any function, which has the same behavior:

[Code 3-51] some Function Signature

// any :: (a -> Bool) -> [a] -> Bool
function some<A>(f: (a: A) => boolean, iterable: Iterable<A>): boolean {}

We’ll use the same plan as with every. After converting all values to
boolean, we combine those boolean values with the OR operator (||) to
produce the desired outcome. Let’s express this in code.
[Code 3-52] Strategy for Implementing some

// 1. [2, 3, 4]
// 2. [isOdd(2), isOdd(3), isOdd(4)]
// 3. (false || true || false)

As mentioned earlier, there are many ways to implement some or every, and there could be more concise approaches as well. For example, you could check the truth value, find the index of a value that matches the condition and compare it with -1, or create an array with a single element and check if length is 0. However, these methods retain a somewhat imperative feel even when using higher-order functions. They also tend to depend on language syntax or standard libraries, and code like length === 0 is more imperative than declarative, specifically detailing ‘how’ it should operate rather than ‘what’ it does.

In contrast, methods like those in [Code 3-50] and [Code 3-53] are
directly expressed in the full context of the code as functions that “check if
all elements match this condition and then verify if they all satisfy (true &&
true && false)”. Code that expresses ‘what’ is being done rather than
‘how’ to do it is not only easier to read and better captures the meaning, but
is also easier to understand when revisited later.

[Code 3-53] Implementing some

function some<A>(f: (a: A) => boolean, iterable: Iterable<A>): boolean {
  return fx(iterable)
    .map(f)
    .reduce((a, b) => a || b, false); // [a: boolean], [b: boolean]
}

console.log(some(isOdd, [2, 5, 6]));
// true
console.log(some(isOdd, [2, 4, 6]));
// false

Here, we used (a || b) as the accumulator in reduce, creating the same effect as (false || true || false).
Inserting “Break” Logic with Lazy Evaluation

In truth, neither the some nor the every function requires traversing all
elements to produce a result. For some, if we encounter even one true, we
can return true and stop further traversal. Conversely, for every,
encountering just one false allows us to exit the loop immediately.

[Code 3-54] Increasing Efficiency in some

function some<A>(f: (a: A) => boolean, iterable: Iterable<A>): boolean {
  return fx(iterable)
    .map(f)
    .filter(a => a)
    .take(1)
    .reduce((a, b) => a || b, false); // [a: boolean], [b: boolean]
}

console.log(some(isOdd, [2, 5, 6]));
// true
console.log(some(isOdd, [2, 4, 6]));
// false

We’ve improved on the earlier some function by adding .filter(a => a).take(1):

1. This code now uses .filter(a => a).take(1) so that once it encounters a single true, it stops reading further elements and produces an iterator containing at most one element, which then goes to reduce.
2. If there isn’t a single true, the iterator passed to reduce is empty.
3. When there are no elements, reduce returns its initial value false; if there is a single element, we get false || true in one step, thus true.

This makes some as efficient as if we used an if () break; approach in a loop.
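We can verify the short-circuit by giving some a predicate that counts its own calls. The free functions below are stand-ins for the FxIterable methods (our own sketch, assumed equivalent in behavior):

```typescript
function* map<A, B>(f: (a: A) => B, iterable: Iterable<A>): IterableIterator<B> {
  for (const a of iterable) yield f(a);
}
function* filter<A>(f: (a: A) => boolean, iterable: Iterable<A>): IterableIterator<A> {
  for (const a of iterable) if (f(a)) yield a;
}
function* take<A>(limit: number, iterable: Iterable<A>): IterableIterator<A> {
  for (const a of iterable) {
    yield a;
    if (--limit === 0) break;
  }
}
function reduce<A, Acc>(f: (acc: Acc, a: A) => Acc, acc: Acc, iterable: Iterable<A>): Acc {
  for (const a of iterable) acc = f(acc, a);
  return acc;
}

let checked = 0;
const countingIsOdd = (n: number) => {
  checked++;
  return n % 2 === 1;
};

// The same pipeline as the improved some: map → filter(a => a) → take(1) → reduce.
const result = reduce(
  (a, b) => a || b,
  false,
  take(1, filter(a => a, map(countingIsOdd, [2, 5, 6, 8, 10])))
);

console.log(result);  // true
console.log(checked); // 2 — traversal stopped right after the first odd number
```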

every can be optimized in a similar way.


[Code 3-55] Increasing Efficiency in every

function every<A>(f: (a: A) => boolean, iterable: Iterable<A>): boolean {
  return fx(iterable)
    .map(f)
    .filter(a => !a)
    .take(1)
    .reduce((a, b) => a && b, true); // [a: boolean], [b: boolean]
}

console.log(every(isOdd, [1, 3, 5]));
// true
console.log(every(isOdd, [1, 2, 5]));
// false

We improved on the previous every by adding .filter(a => !a).take(1):

1. Now, .filter(a => !a).take(1) means if we see even one false, we stop reading further elements. Only an iterator with at most one element is passed to reduce.
2. If there’s no false, the iterator passed to reduce remains empty.
3. If there are no elements, reduce returns its initial value true; if one element is present, that element will be false, leading to true && false once, which returns false.

Thus, we’ve confirmed that functions like every and some can be
composed from list-processing functions in a functional style rather than
using imperative logic.

Abstracting Shared Logic in every and some Functionally

Because functional programming treats lists, code, and functions as values, it’s extremely easy to factor out common logic. Our implementations of every and some so far are nearly identical in structure. Below is a functional programming solution for removing this duplication:

[Code 3-56] The accumulateWith Function

function accumulateWith<A>(
  accumulator: (a: boolean, b: boolean) => boolean,
  acc: boolean,
  taking: (a: boolean) => boolean,
  f: (a: A) => boolean,
  iterable: Iterable<A>
): boolean {
  return fx(iterable)
    .map(f)
    .filter(taking)
    .take(1)
    .reduce(accumulator, acc);
}

function every<A>(f: (a: A) => boolean, iterable: Iterable<A>): boolean {
  return accumulateWith((a, b) => a && b, true, a => !a, f, iterable);
}

function some<A>(f: (a: A) => boolean, iterable: Iterable<A>): boolean {
  return accumulateWith((a, b) => a || b, false, a => a, f, iterable);
}

console.log(every(isOdd, [1, 3, 5]));
// true
console.log(every(isOdd, [1, 2, 5]));
// false

console.log(some(isOdd, [2, 5, 6]));
// true
console.log(some(isOdd, [2, 4, 6]));
// false

In [Code 3-56], we employ the accumulateWith function to factor out shared logic between every and some. It’s especially notable that we pass around the logic-holding code as a function. Because every and some already used calls like fx(iterable).map(f).filter(a => !a).take(1).reduce((a, b) => a && b, true), we simply took those functional pieces and turned them into arguments, finishing the abstraction with minimal modification. The duplication was removed so smoothly that it hardly needs explanation. In this way, functional programming is well-suited to refactoring and offers excellent maintainability.
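To show how far this abstraction stretches, here is a sketch that derives one more function from the same shape—a hypothetical none (true when no element matches), which is not from the chapter but falls out of accumulateWith naturally. Free functions stand in for the FxIterable chain:

```typescript
function* map<A, B>(f: (a: A) => B, iterable: Iterable<A>): IterableIterator<B> {
  for (const a of iterable) yield f(a);
}
function* filter<A>(f: (a: A) => boolean, iterable: Iterable<A>): IterableIterator<A> {
  for (const a of iterable) if (f(a)) yield a;
}
function* take<A>(limit: number, iterable: Iterable<A>): IterableIterator<A> {
  for (const a of iterable) {
    yield a;
    if (--limit === 0) break;
  }
}
function reduce<A, Acc>(f: (acc: Acc, a: A) => Acc, acc: Acc, iterable: Iterable<A>): Acc {
  for (const a of iterable) acc = f(acc, a);
  return acc;
}

function accumulateWith<A>(
  accumulator: (a: boolean, b: boolean) => boolean,
  acc: boolean,
  taking: (a: boolean) => boolean,
  f: (a: A) => boolean,
  iterable: Iterable<A>
): boolean {
  return reduce(accumulator, acc, take(1, filter(taking, map(f, iterable))));
}

// A hypothetical `none`: stop at the first match, then flip it into the result.
function none<A>(f: (a: A) => boolean, iterable: Iterable<A>): boolean {
  return accumulateWith((a, b) => a && !b, true, a => a, f, iterable);
}

const isOdd = (a: number) => a % 2 === 1;
console.log(none(isOdd, [2, 4, 6])); // true
console.log(none(isOdd, [2, 3, 6])); // false
```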

Adding with concat

The array method concat is used to merge multiple arrays into one.
For instance, arr.concat(arr2) returns a new array that combines arr and
arr2, leaving the original array unmodified. However, because concat
immediately evaluates and merges all elements into a new array, memory
usage can grow significantly when merging very large arrays.
On the other hand, by implementing concat with a generator, you can
process elements as needed through lazy evaluation, which may offer
improved memory efficiency and performance.

Implementing concat with Generators

Using generators, you can concisely merge arrays or iterables as shown below:

[Code 3-57] concat

function* concat<T>(...iterables: Iterable<T>[]): IterableIterator<T> {
  for (const iterable of iterables) yield* iterable;
}

const arr = [1, 2, 3, 4];
const iter = concat(arr, [5]);
console.log([...iter]);
// [1, 2, 3, 4, 5]

This concat function takes multiple iterables as arguments and sequentially yields each element. Rather than merging entire arrays at once, it processes elements one by one when needed. In other words, it doesn’t actually combine arrays but simply continues iteration. Let’s look at the difference between the array method concat and the generator-based concat.
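Since this concat relies only on the Iterable protocol, its inputs are not limited to arrays—any mix of iterables works. A quick sketch (the concat definition from [Code 3-57] is repeated so the snippet stands alone):

```typescript
function* concat<T>(...iterables: Iterable<T>[]): IterableIterator<T> {
  for (const iterable of iterables) yield* iterable;
}

const letters = new Set(['a', 'b']);
function* rest(): IterableIterator<string> {
  yield 'c';
  yield 'd';
}

// A Set, a generator, and a string, merged lazily into one sequence.
console.log([...concat<string>(letters, rest(), 'ef')]);
// ['a', 'b', 'c', 'd', 'e', 'f']
```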

Differences Between Array concat and Generator concat

The following code compares array concat with generator concat.

[Code 3-58] Comparing concat

const arr = [1, 2, 3, 4, 5];

// Example using array concat
const arr2 = arr.concat([6, 7, 8, 9, 10]);
console.log(arr2); // [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
let acc = 0;
for (const a of take(7, arr2)) {
  acc += a;
}
console.log(acc); // 28

// Example using generator concat
const iter = concat(arr, [6, 7, 8, 9, 10]);
console.log(iter); // concat {<suspended>} (nothing happens yet)
let acc2 = 0;
for (const a of take(7, iter)) {
  acc2 += a;
}
console.log(acc2); // 28

In this example, arr.concat([6, 7, 8, 9, 10]) creates a new array [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. In contrast, the generator-based concat does not create a new array; rather, it yields combined elements one by one.

When using the array method concat, memory usage increases when
large arrays are copied. This happens because the newly created array must
generate a large index table to hold all elements and reassign all those slots.
In contrast, the generator-based concat does not copy values; it generates
values only when needed, operating more efficiently. In [Code 3-58], we
only need to compute the value for acc, yet arr2 is still created as a new
array. The generator-based concat does not create that new array but carries
out only the needed operations to compute acc2.

Thinking About Using concat Instead of push

push is a method that adds elements to the end of an array, modifying the original array. With generator concat, you can achieve the same effect without altering the original array, while also taking advantage of lazy evaluation to process data more efficiently.

[Code 3-59] Using push

const arr = [1, 2, 3, 4, 5];

// Example of adding elements with push and then summing
arr.push(6, 7);
let acc1 = 0;
for (const a of arr) {
  acc1 += a;
}
console.log(acc1); // 28
console.log(arr); // [1, 2, 3, 4, 5, 6, 7]

// Remove the elements added with push (restore the original array)
arr.pop();
arr.pop();

// Example of adding elements again with push and then summing
arr.push(8, 9);
let acc2 = 0;
for (const a of arr) {
  acc2 += a;
}
console.log(acc2); // 32
console.log(arr); // [1, 2, 3, 4, 5, 8, 9]

// Remove the elements added with push (restore the original array)
arr.pop();
arr.pop();

[Code 3-59a] Using concat

const arr = [1, 2, 3, 4, 5];

// Example of creating an iterator with generator concat and then summing
const iter1 = concat(arr, [6, 7]);
let acc3 = 0;
for (const a of iter1) {
  acc3 += a;
}
console.log(acc3); // 28
console.log(arr); // [1, 2, 3, 4, 5]

// Example of adding different elements with generator concat and then summing
const iter2 = concat(arr, [8, 9]);
let acc4 = 0;
for (const a of iter2) {
  acc4 += a;
}
console.log(acc4); // 32
console.log(arr); // [1, 2, 3, 4, 5]

Generator-based concat does not modify the original array and only
generates elements when needed, which is particularly useful if you need to
reuse the original array multiple times. In contrast, the array method concat
allocates a new array in memory. As arrays grow larger, memory usage can
increase. Since generator-based concat doesn’t allocate a new array, it
minimizes memory usage by generating values incrementally.
Hence, generator concat can be a great option for memory efficiency
and flexibility. Of course, you do not need to replace push with generator
concat in every situation—there are cases where push is more suitable. It’s
important to choose the right method for your needs.

Thinking About Using concat Instead of unshift

unshift is a method that adds new elements to the front of an array, modifying the original array. In doing so, it may need to shift all existing elements one index over, which can be costly for large arrays. By using generator-based concat, you can add elements at the front without modifying the original array, processing data more efficiently through lazy evaluation.

[Code 3-60] Example of Adding Elements with unshift and Joining into a String

const arr = ['2', '3', '4', '5'];
arr.unshift('1');
console.log(arr); // ['1', '2', '3', '4', '5']
let result1 = '';
for (const str of arr) {
  result1 += str;
}
console.log(result1); // '12345'

When you use unshift, adding an item to the front of the array may
require shifting all existing items one step to the right, which grows more
expensive as the array size increases. For instance, if there are 100 elements
in the array, adding an element at the front means shifting all 100 elements,
increasing the time complexity for large arrays.

[Code 3-60a] Using Generator concat to Add Elements at the Front and Then Join into
a String

const arr = ['2', '3', '4', '5'];
const iter = concat(['1'], arr);
console.log(arr); // ['2', '3', '4', '5']
let result2 = '';
for (const str of iter) {
  result2 += str;
}
console.log(result2); // '12345'

In contrast, generator concat does not require shifting indices. It simply generates the newly added elements in front as needed, which can be more efficient in terms of both memory and performance.

Using take Together with concat

If you opt for concat instead of unshift and only fetch as many
elements as needed using take, you can work with just the required items
without manipulating the entire array.

[Code 3-61] Combining take and concat

const arr1 = [1, 2, 3, 4, 5];
const arr2 = [6, 7, 8, 9, 10];
const iter = take(3, concat(arr1, arr2));
console.log([...iter]); // [1, 2, 3]

This method works efficiently by processing only the required items. In this scenario, it never even iterates over arr2, making it effectively the same as not calling concat on arr2 at all.
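We can confirm that the second iterable is never touched by replacing it with a generator that records whether it was ever started. This is a sketch of our own; concat and take repeat the lazy versions built earlier:

```typescript
function* concat<T>(...iterables: Iterable<T>[]): IterableIterator<T> {
  for (const iterable of iterables) yield* iterable;
}
function* take<A>(limit: number, iterable: Iterable<A>): IterableIterator<A> {
  for (const a of iterable) {
    yield a;
    if (--limit === 0) break;
  }
}

let touched = false;
function* tracked(): IterableIterator<number> {
  touched = true; // runs only if someone actually pulls from this generator
  yield 6;
  yield 7;
}

const iter = take(3, concat([1, 2, 3, 4, 5], tracked()));
console.log([...iter]); // [1, 2, 3]
console.log(touched);   // false — the second iterable was never started
```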

Using some Together with concat

If you replace unshift with concat and use some to find an element
that satisfies a condition, the generator approach can be more efficient by
only generating and processing what is necessary. Since the some function
stops as soon as it finds a matching element, no unnecessary computations
occur.

[Code 3-62] Using some with concat

const arr = [3, 4, 5];
console.log(some(n => n < 3, arr));
// false

const iter = concat([1, 2], arr);
console.log(some(n => n < 3, iter));
// true

We’ve now seen several examples of how to use generator-based concat with lazy evaluation. In particular, this approach demonstrates how to work with a list through lazy processing instead of modifying the values directly. This can enable efficient, flexible processing and can spark plenty of new ideas.

Code:Object:Function = Generator:Iterator:LISP = IP:OOP:FP

In Chapter 3, we explored a wide range of paradigms and techniques proposed by LISP and various modern languages, along with inspiration from Haskell, practical execution and application of lazy evaluation, and TypeScript’s handling of safe composition and error handling. We also looked at how to implement functional code and refactor imperative code using list-processing concepts and ways of replacing value mutation with new ideas, thereby seeing how different paradigms interconnect and complement each other.

In a multiparadigm language, it’s possible to mix different paradigms in a single function. Selecting or combining the right paradigms for the context can lead to better solutions.

3.5 Summary
Code:Object:Function = Generator:Iterator:LISP = IP:OOP:FP

“Code, object, function” can each be viewed through the lens of “generator, iterator, LISP,” respectively. An iterator is the implementation of the Iterator pattern, an object that traverses collection values in a generalized manner. Because an iterator is lazy, it executes only as far as it needs and can pause execution. This makes it possible to create iterators that could, in principle, be consumed indefinitely. Consequently, combining imperative, object-oriented, and functional paradigms yields a powerful programming model.

Code as Data: A Logic-Bearing List

The philosophy of Lisp reflects a programming language paradigm that blurs the boundary between code and data. In Lisp’s S-expressions, logic is represented as a list, enabling code to be more flexible and extensible. The LISP concept can be applied to modern programming languages by treating code and data as one and the same. LISP’s metaprogramming capability makes it possible to treat code and data identically, allowing them to be dynamically generated and manipulated. Through this, programmers can write more flexible and powerful programs.

Learning from Haskell

Haskell is a purely functional programming language that provides pure functions, function composition, currying, lazy evaluation, and a robust type system. Studying Haskell offers insight into functional programming concepts and how to apply them in modern languages. Its function signatures and pattern matching strengthen both code safety and readability. In particular, Haskell’s powerful type system can infer types without explicit annotations, helping developers write safe yet concise code.

A Closer Look at Lazy Evaluation

Lazy evaluation is a strategy in which computations are deferred until absolutely necessary, helping optimize performance. Through practical implementations and application examples of lazy evaluation, we can write more efficient code. It is an important concept for performance optimization and resource conservation, enabling us to create better programs. Lazy evaluation is particularly useful in processing large-scale data and complex calculations, as it performs computations only when needed, reducing unnecessary resource waste.

Generator:Iterator:LISP – Lazy Evaluation and Safe Composition

We’ve seen that we can implement higher-order functions like find, every, and some purely by combining list-processing functions. In TypeScript, operators such as the optional chaining operator (?.) and the non-null assertion operator (!) let us handle potentially missing values in a safe and expressive manner. Meanwhile, list-processing functions remain efficient through lazy evaluation, and functional design makes it easy to abstract out common logic to eliminate duplication.

4. Asynchronous Programming
Asynchronous programming allows a program to keep executing
other tasks without waiting for a particular job to finish, thereby letting the
program handle multiple tasks concurrently and more efficiently. In
JavaScript environments, it’s commonly used for I/O operations. Browsers
rely on this asynchronous paradigm to keep the UI responsive during long
I/O operations. In frontend development, it’s often applied to API
communication and animation timing, while in backend programming it’s
used for handling the file system, databases, queries, and various network
communications. It is also used when controlling servers, storage, and other
resources offered by cloud providers such as AWS, Azure, and GCP.

In most JavaScript runtime environments, concurrency is managed via single-threaded asynchronous I/O. For example, Node.js runs on a single thread but uses asynchronous I/O to delegate tasks to external resources and then coordinates the execution order. A programmer must accurately control the order of these asynchronous events—ensuring that some tasks run sequentially, others run in parallel, and the data is parsed or combined at the appropriate time, so the logic flows according to plan.
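As a minimal sketch of that kind of control (the delay helper and its timings are our own illustration), compare running two tasks one after another with await against starting them together with Promise.all:

```typescript
// Resolve with `value` after `ms` milliseconds.
const delay = <T>(ms: number, value: T): Promise<T> =>
  new Promise(resolve => setTimeout(() => resolve(value), ms));

async function main(): Promise<void> {
  // Sequential: the second task starts only after the first resolves (~40ms total).
  const start = Date.now();
  const a = await delay(20, 'first');
  const b = await delay(20, 'second');
  console.log(a, b, `${Date.now() - start}ms`);

  // Parallel: both tasks start immediately (~20ms for the pair).
  const start2 = Date.now();
  const [c, d] = await Promise.all([delay(20, 'third'), delay(20, 'fourth')]);
  console.log(c, d, `${Date.now() - start2}ms`);
}

main();
```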

In this chapter, we’ll learn how to handle Promises as values and combine them with the iterator pattern to control asynchronous execution order. We’ll also see how functional paradigms and a type system, which excel at managing asynchrony and concurrency, can enhance both safety and readability in asynchronous programming.

Additionally, building on the concepts introduced here in Chapter 4, we’ll explore in Chapter 6 how to mix the object-oriented paradigm for a multiparadigm approach to asynchronous handling, examining a variety of asynchronous patterns and techniques to write better programs.

4.1 Asynchronous as a Value

A Promise is an object and a convention that allows the result of an
asynchronous task to be handled as a value. Much like how many
programming languages offer features equivalent to “iterators” for the
iterator pattern, they also provide functionality serving the same role as
Promise. Moreover, Promise allows you to treat asynchronous scenarios at
the type level. With the type system, we can compose code safely at
compile time, and at runtime, we can more effectively control various
asynchronous situations. By combining Promise with the iterator pattern,
we can create a very powerful asynchronous programming model. In this
chapter, we’ll particularly explore using it alongside list processing to
manage complex asynchronous logic easily and safely.

Promise

A Promise is an object used to manage the success or failure of an asynchronous task. The standardization of this asynchronous “value plus convention” helps developers and languages handle it precisely and safely. A Promise is created as an object immediately, regardless of whether the asynchronous operation is done or not, so you can treat it as a value, then retrieve its result or handle errors whenever you need them.

When first created, a Promise starts in the “pending” state, then transitions to “fulfilled” upon success or “rejected” upon failure. Promise lets you combine (compose) multiple promises sequentially or in parallel, and works well with async/await to simplify asynchronous logic.
Promise-Related Features and Their Introduction

The Promise concept existed before ES6 (2015). From around 2010 to 2013, third-party libraries implementing promises were widely used by developers—famous ones being Q, Bluebird, and When.js. These libraries made asynchronous programming easier and were popular. Then, when ES6 (2015) standardized Promise, JavaScript gained a consistent method for asynchronous programming, so async tasks could be handled easily without third-party libraries.

Later, ECMAScript 2017 introduced async/await to make Promise usage more concise and intuitive, and ECMAScript 2018 added AsyncGenerator and AsyncIterator for more flexible async code. All these newer features build on Promise.

Below are Promise-related features and when they arrived:

ECMAScript 2015 (ES6)

1. Promise

An object for handling the success or failure of an async operation.

2. Promise.all

Runs multiple promises in parallel, waiting until all are complete, and returns their results as an array.

3. Promise.race

Runs multiple promises in parallel, returning the result or error from whichever completes first.

4. Promise.resolve

Converts a value into a Promise (if it isn’t already), or returns it as is if it’s already a Promise.

5. Promise.reject

Returns a rejected promise with a given reason.

ECMAScript 2017 (ES8)

6. async/await

Enables asynchronous functions and awaiting promise results, making asynchronous code simpler and more readable.

ECMAScript 2018 (ES9)

7. Promise.prototype.finally

Registers a callback that always runs once a Promise is settled (success or failure).

8. for await…of

A syntax for iterating over async iterable objects, processing values asynchronously.

9. AsyncIterator

An interface for asynchronous iteration, consumed with for await...of.

10. AsyncGenerator

An async generator function that can perform asynchronous tasks and generate values. Defined with async and function* together.

ECMAScript 2020 (ES11)

11. Promise.allSettled

Runs multiple promises in parallel, waits until all are done, and returns the outcome of each (success or failure) as objects.

ECMAScript 2021 (ES12)

12. Promise.any

Among multiple promises, returns the value from whichever is fulfilled first. If all reject, it rejects with a single AggregateError containing all the reasons.

ECMAScript 2024 (ES15)

13. Array.fromAsync

Creates an array by processing an async iterable.
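To make the last few entries concrete, here is a small hedged sketch (my own example, not from the book) that combines an AsyncGenerator with for await...of; the delay helper mirrors [Code 4-2]:

```typescript
// Assumed helper, mirroring the book's delay from [Code 4-2].
function delay<T>(time: number, value: T): Promise<T> {
  return new Promise((resolve) => setTimeout(resolve, time, value));
}

// An async generator: each `yield` of a promise is implicitly awaited,
// so the consumer receives plain numbers, one at a time, in order.
async function* numbers(): AsyncGenerator<number> {
  yield delay(10, 1);
  yield delay(10, 2);
  yield delay(10, 3);
}

// for await...of drives the async iterator and awaits each value.
async function collect(): Promise<number[]> {
  const result: number[] = [];
  for await (const n of numbers()) {
    result.push(n);
  }
  return result; // resolves to [1, 2, 3]
}
```

In a runtime that ships Array.fromAsync, the collect function above could be replaced by Array.fromAsync(numbers()).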

These expansions of asynchronous programming in ECMAScript revolve around Promise, which is core to I/O tasks in JavaScript. Mastering Promise is essential to effective I/O handling, a critical aspect of practical work.

Now let’s look at some ways to handle Promise in code.

A delay Function Returning a Promise

[Code 4-1] delay

1 function delay<T>(time: number, value: T): Promise<T> {
2 return new Promise((resolve) => setTimeout(() => resolve(value), time));
3 }

delay returns a Promise that resolves to value after time milliseconds:

1. A generic function taking a T to define the returned value type.
2. Creates a Promise object.
3. time is in milliseconds, used as the timeout in setTimeout.
4. After time, calls resolve(value).

Since setTimeout can pass extra arguments to the callback, we can simplify:
[Code 4-2] A More Concise delay

1 function delay<T>(time: number, value: T): Promise<T> {
2 return new Promise((resolve) => setTimeout(resolve, time, value));
3 }

Below is an example using the Promise returned from delay.

[Code 4-3] Composing with then

1 function test() {
2 console.time('test');
3
4 delay(1000, "Hello, world!").then((result) => { // [result: string]
5 console.log(result); // after 1 second
6 }).then(() => {
7 return delay(2000, 40);
8 }).then((result) => { // [result: number]
9 console.log(result); // after 2 seconds
10 console.timeEnd('test'); // ~3000ms
11 });
12 }
13
14 test();
15 // Hello, world!
16 // 40
17 // test: 3002.367ms

Here, we chain multiple then calls. We print “Hello, world!” after one
second, then 40 after two more seconds, measuring total time with
console.time.

[Code 4-4] async/await (ES8)

1 async function test2() {
2 console.time('test2');
3
4 const result1 = await delay(1000, "Hello, world!"); // [result1: string]
5 console.log(result1); // after 1 second
6
7 const result2 = await delay(2000, 40); // [result2: number]
8 console.log(result2); // after 2 seconds
9
10 console.timeEnd('test2'); // ~3000ms
11 }
12
13 await test2();
14 // Hello, world!
15 // 40
16 // test2: 3002.519ms

Using async/await, we similarly chain two delay calls in a more direct style.

Have You Ever Used new Promise() Directly?

During interviews, I often ask questions like these:

Have you actually used new Promise() in real-world production code?
Did you encounter cases where you used new Promise() in actual service code, whether while learning or working with your team?
Have you implemented functions that receive a Promise instance as an argument?
Have you used features like Promise.all or Promise.race?

These questions serve as a great starting point for discussing fundamental knowledge of asynchronous programming as well as various application scenarios. For instance, we might talk about how asynchronous programming was handled before or during the initial introduction of ES6, and what approaches were attempted when transitioning from existing libraries or helper functions to more skillful implementations of asynchronous flow control. These conversations allow both parties to freely exchange experiences and perspectives, often yielding valuable insights.

[Code 4-5] new Promise (ES6 or Earlier)

1 function handleConcurrentTasks<T>(): Promise<T[]> {
2 return new Promise((resolve, reject) => {
3 // ...
4 });
5 }

Of course, it’s now far less common to write new Promise() directly.
When Promise was first introduced, many official Web APIs, Node.js
modules, and third-party libraries still relied on callback-based
asynchronous control, requiring frequent conversion of callbacks into
Promise objects. As a result, using new Promise was quite common.
However, most Web APIs, Node.js modules, and third-party libraries
nowadays already provide promise-based interfaces. With the introduction
of async/await, the need to call new Promise directly in code has become
even rarer. In fact, for typical I/O operations or data-loading processes,
manually creating a Promise is almost unheard of today.
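As a hedged illustration of that callback-to-promise era (the function names below are hypothetical, not from the book), wrapping a callback-style API with new Promise might look like this:

```typescript
// Hypothetical callback-style API, standing in for older Node.js-style functions.
function readConfigLegacy(
  path: string,
  callback: (err: Error | null, data?: string) => void
): void {
  setTimeout(() => callback(null, `config at ${path}`), 10);
}

// One new Promise wrapper gives the rest of the codebase a promise-based
// interface, usable with then/await.
function readConfig(path: string): Promise<string> {
  return new Promise((resolve, reject) => {
    readConfigLegacy(path, (err, data) => {
      if (err) reject(err);
      else resolve(data as string);
    });
  });
}
```

In Node.js, util.promisify automates exactly this conversion for APIs that follow the (err, result) callback convention.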

That said, does that mean there’s never a need for new Promise? If you
need to implement a unique form of parallel execution control that isn’t
covered by existing functions (like Promise.all or Promise.race), you
may need to create and manage Promise objects yourself using new
Promise() or a similar approach. This goes beyond simply using existing
libraries or helper functions; it involves applying custom algorithms to
solve specific issues in asynchronous programming or building reusable
functions that efficiently manage concurrency.

In short, having substantial experience with new Promise can indicate that someone has gone beyond simply utilizing the environment at hand, demonstrating deeper insight into asynchronous control and problem-solving capabilities.

Below are a few examples of tasks that typically demand advanced knowledge of asynchronous programming and may involve concurrency control using new Promise or a similar technique:

Implementing a parallel processing function that limits the number of concurrently running tasks, unlike Promise.all
Handling concurrent operations for both animations and data requests
to improve UI responsiveness
Optimizing server-side operations for data fetching, saving, and
rendering
Scheduling recurring jobs such as batch processing or CronJobs, and
optimizing them with parallel execution
Combining generators in Node.js for streaming file reads and writes
Developing reusable function libraries for database handling

Such tasks often require a high level of expertise and familiarity with
asynchronous programming, along with concurrency control provided by
new Promise or equivalent patterns.
Promise.race

Promise.race runs multiple promises in parallel and returns as soon as the first promise fulfills or rejects.

[Code 4-5] Promise.race (ES6)

1 const promise1 = new Promise((resolve) => setTimeout(resolve, 500, 'one'));
2 const promise2 = new Promise((resolve) => setTimeout(resolve, 100, 'two'));
3
4 await Promise.race([promise1, promise2]).then((value) => {
5 console.log(value); // "two", the earliest to finish
6 });

When might you use Promise.race in real life? Think about it before
reading further.

Setting a Timeout for IO Operations

If fetching /friends API takes more than 5 seconds, you might want
to show “Your network connection is poor.” So we can do:

[Code 4-6] Controlling Timeout via Promise.race

1 function getRandomValue<T>(a: T, b: T): T {
2 // randomly pick 0 or 1
3 const randomIndex = Math.floor(Math.random() * 2);
4 return randomIndex === 0 ? a : b;
5 }
6
7 type User = {
8 name: string;
9 };
10
11 function getFriends(): Promise<User[]> {
12 // like: return fetch('/friends').then(res => res.json());
13 return delay(
14 getRandomValue(60, 6_000), // either ~0.06s or 6s
15 [{ name: 'Marty' }, { name: 'Michael' }, { name: 'Sarah' }]
16 );
17 }
18
19 const result = await Promise.race([
20 getFriends(),
21 delay(5000, 'timeout')
22 ]);
23
24 if (result === 'timeout') {
25 console.log("Your network connection is poor.");
26 } else {
27 const friends = result as User[];
28 console.log("Render friend list:", friends.map(({ name }) =>
`<li>${name}</li>`));
29 }

[Code 4-6] is a neat solution. Of course, we also have AbortController for fetch, but the principle of using Promise.race is generally powerful and not limited to fetch.

The snippet is simplified. In practice, you might pass an error or a typed “Timeout” so you can handle it more elegantly.
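One way to make that explicit, sketched here with an illustrative withTimeout helper and TimeoutError class (my own names, not the book's):

```typescript
// Illustrative sketch: reject with a dedicated error type instead of
// racing against a plain 'timeout' string value.
class TimeoutError extends Error {}

function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  const timeout = new Promise<never>((_, reject) =>
    setTimeout(() => reject(new TimeoutError(`timed out after ${ms}ms`)), ms)
  );
  return Promise.race([promise, timeout]);
}

// Callers can now distinguish a timeout from other failures:
async function load(): Promise<string> {
  try {
    return await withTimeout(Promise.resolve('friends'), 5000);
  } catch (err) {
    if (err instanceof TimeoutError) return 'Your network connection is poor.';
    throw err;
  }
}
```

Note that the losing timer keeps running after the race settles; in production code you might also clear it.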

Rendering UI Depending on Response Speed

Imagine a “friend invite” button that, when clicked, fetches the friend
list for a “Group Chat.” If the /friends API responds under 100ms, we
simply show the friend list. Otherwise, we show a loading indicator and
then display the list once it’s ready. That’s easily done with Promise.race.

[Code 4-7] Different UI if Response is Slow or Fast

1 function toggleLoadingIndicator(show: boolean): void {
2 if (show) {
3 console.log("append loading...");
4 } else {
5 console.log("remove loading...");
6 }
7 }
8
9 async function renderFriendsPicker(): Promise<void> {
10 const friendsPromise = getFriends();
11
12 const result = await Promise.race([
13 friendsPromise,
14 delay(100, 'isSlow')
15 ]);
16
17 if (result === 'isSlow') {
18 toggleLoadingIndicator(true);
19 await friendsPromise;
20 toggleLoadingIndicator(false);
21 }
22
23 const friends = await friendsPromise;
24 console.log("Render friend list:", friends.map(({ name }) =>
`<li>${name}</li>`));
25 }
26
27 await renderFriendsPicker();
28 // If fast response:
29 // ~0.06s later:
30 // Render friend list: <li>Marty</li><li>Michael</li><li>Sarah</li>
31
32 await renderFriendsPicker();
33 // If slow response:
34 // append loading...
35 // ~6s later
36 // remove loading...
37 // Render friend list: <li>Marty</li><li>Michael</li><li>Sarah</li>

Explanation:

1. friendsPromise calls getFriends.
2. Promise.race with friendsPromise and delay(100, 'isSlow').
3. If result === 'isSlow', the response is slower than 100ms, so show
a loading indicator. Once friendsPromise finishes, remove the
loading indicator. Then show the list.
4. Otherwise, if not 'isSlow', we skip the indicator and directly display
the list.

We handle Promise as a value, storing it in friendsPromise, passing it to Promise.race, etc. Because Promise is a value, we can create specialized functions like Promise.race and reuse them.

Exercise

Let’s consider extending [Code 4-7] with more advanced UI to further refine the user experience:

1. Fast response (<100ms):

If the API finishes under 100ms, we first render the friend list off-screen, then show an animation (e.g., from bottom to top) for a smooth entry.

2. Delayed response (>=100ms):

If the API takes 100ms or more, we show the friend list window with only a title and a loading indicator via an animation.
Once the response finishes, we render the list.
If the response arrives before the animation finishes, don’t just pop in
the results. Wait until the animation completes, then display the friend
list so the UI transitions smoothly.

This is a deeper test of controlling Promise precisely, adjusting UI render timing for a better user experience.

Promise.all

Promise.all runs all given promises in parallel, returning an array once all are fulfilled. If any promise rejects, it rejects immediately with that reason. This is used for parallel execution with one final result.

[Code 4-8] Promise.all (ES6)

1 type File = {
2 name: string;
3 body: string;
4 size: number;
5 };
6
7 function getFile(name: string, size = 1000): Promise<File> {
8 return delay(size, { name, body: '...', size });
9 }
10
11 const files = await Promise.all([
12 getFile('img.png', 500),
13 getFile('book.pdf', 1000),
14 getFile('index.html', 1500)
15 ]);
16
17 console.log(files);
18 // ~1500ms later:
19 // [
20 // { name: 'img.png', body: '...', size: 500 },
21 // { name: 'book.pdf', body: '...', size: 1000 },
22 // { name: 'index.html', body: '...', size: 1500 }
23 // ]

getFile is a simulated function returning a Promise<File> after some delay. [Code 4-8] shows how Promise.all waits for all 3. Though the total time if done serially might be ~3000ms, in parallel it’s just the max (1500ms). The results come as an array of File.

What if one of them rejects?

[Code 4-9] Promise.all Failure Example

1 try {
2 const files = await Promise.all([
3 getFile('img.png'), // default size: 1000
4 getFile('book.pdf'),
5 getFile('index.html'),
6 delay(500, Promise.reject('File download failed'))
7 ]);
8 console.log(files); // never runs
9 } catch (error) {
10 // ~500ms later
11 console.error(error); // 'File download failed'
12 }

The 4th promise fails after ~500ms, so Promise.all rejects immediately, throwing 'File download failed'. The try...catch with await handles it.

Promise.allSettled

Promise.allSettled waits until all the given promises are settled (either fulfilled or rejected), returning an array with each one’s result. This is useful if you need the outcomes of all tasks, even if some fail.

[Code 4-10] Promise.allSettled (ES11)

1 const files = await Promise.allSettled([
2 getFile('img.png'),
3 getFile('book.pdf'),
4 getFile('index.html'),
5 Promise.reject('File download failed')
6 ]);
7
8 console.log(files);
9 // ~1000ms later:
10 // [
11 // { status: 'fulfilled', value: { name: 'img.png', body: '...', size:
1000 } },
12 // { status: 'fulfilled', value: { name: 'book.pdf', body: '...', size:
1000 } },
13 // { status: 'fulfilled', value: { name: 'index.html', body: '...',
size: 1000 } },
14 // { status: 'rejected', reason: 'File download failed' }
15 // ]

All results are returned, with each outcome labeled 'fulfilled' or 'rejected'. Compare to [Code 4-9], where a single rejection ends the operation. With Promise.allSettled, you see all results. If you want immediate error propagation, keep using Promise.all.

What if you needed “all settled” behavior before ES11? It’s quite
simple:

[Code 4-11] settlePromise

1 const settlePromise = <T>(promise: Promise<T>) =>
2 promise
3 .then(value => ({ status: 'fulfilled', value }))
4 .catch(reason => ({ status: 'rejected', reason }));
5
6 const files = await Promise.all([
7 getFile('img.png'),
8 getFile('book.pdf'),
9 getFile('index.html'),
10 Promise.reject('File download failed')
11 ].map(settlePromise));
12
13 console.log(files);
14 // [
15 // { status: 'fulfilled', value: { name: 'img.png', body: '...', size:
1000 } },
16 // { status: 'fulfilled', value: { name: 'book.pdf', body: '...', size:
1000 } },
17 // { status: 'fulfilled', value: { name: 'index.html', body: '...',
size: 1000 } },
18 // { status: 'rejected', reason: 'File download failed' }
19 // ]

[Code 4-11] uses a small “settlePromise” function plus .map() to produce an effect similar to Promise.allSettled. That’s a typical example of “treating promises as data” in a functional style.
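One TypeScript nuance worth noting here (my observation, not the book’s): as written in [Code 4-11], the object literals infer status as plain string, so the result is not a discriminated union. A hedged sketch that preserves the literal types might look like:

```typescript
// Sketch: `as const` on the object literals keeps 'fulfilled'/'rejected' as
// literal types, so TypeScript can narrow on `status`, mirroring the
// built-in PromiseSettledResult shape.
type Settled<T> =
  | { status: 'fulfilled'; value: T }
  | { status: 'rejected'; reason: unknown };

const settlePromise = <T>(promise: Promise<T>): Promise<Settled<T>> =>
  promise
    .then(value => ({ status: 'fulfilled', value } as const))
    .catch(reason => ({ status: 'rejected', reason } as const));
```

With this, a check like `if (r.status === 'fulfilled')` narrows `r.value` to `T` at compile time.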
Promise.any

While Promise.race settles as soon as the first promise finishes (be it success or failure), Promise.any returns the value of whichever promise fulfills first, ignoring rejections unless all fail. If all are rejected, it rejects with an aggregate error.

[Code 4-12] Promise.any (ES12)

1 const files = await Promise.any([
2 getFile('img.png', 1500),
3 getFile('book.pdf', 700),
4 getFile('index.html', 900),
5 delay(100, Promise.reject('File download failed'))
6 ]);
7
8 console.log(files);
9 // ~700ms later
10 // { name: 'book.pdf', body: '...', size: 700 }
11
12 const files2 = await Promise.any([
13 delay(200, Promise.reject('File download failed')),
14 delay(100, Promise.reject('File download failed'))
15 ]);
16 // ~200ms later
17 // Uncaught (in promise) AggregateError: All promises were rejected

[Code 4-12] shows Promise.any ignoring rejections except when all fail. The first promise to fulfill is returned.

Before ES12 (2021), if you needed this “any but only success” logic,
you’d have to implement something akin to Promise.any yourself. In the
future, if we need a concurrency pattern that isn’t covered by the built-in
APIs, we might again do new Promise or treat Promise as a data value,
building custom solutions. The more we practice such patterns, the more
effectively we can handle complex async tasks.
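As a rough sketch of that kind of custom solution (my own illustration, not the book’s implementation), a minimal any-style combinator built on new Promise might look like this:

```typescript
// Minimal "any"-style combinator: resolve on the first fulfillment,
// reject only once every promise has rejected. The built-in Promise.any
// rejects with an AggregateError; this pre-ES2021-style sketch uses a
// plain Error and ignores the empty-array edge case.
function promiseAny<T>(promises: Promise<T>[]): Promise<T> {
  return new Promise((resolve, reject) => {
    let rejectedCount = 0;
    const reasons: unknown[] = new Array(promises.length);
    promises.forEach((p, i) =>
      p.then(resolve, reason => {
        reasons[i] = reason; // remember each rejection reason by position
        if (++rejectedCount === promises.length) {
          reject(new Error('All promises were rejected: ' + reasons.join(', ')));
        }
      })
    );
  });
}
```

The design point is the same one this chapter keeps returning to: because promises are values, a concurrency policy the platform does not provide can be packaged once as a reusable function.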

Next, we’ll explore how to combine Promise with laziness and handle
asynchronous situations with list processing.
4.2 Asynchronous Handling with Laziness
We’ve seen how Promise allows us to treat asynchronous scenarios as
values, and we’ve examined official Promise helper functions—all, race,
allSettled, any—to control asynchronous states. In this chapter, we’ll
expand the idea of treating Promise as a value by creating re-usable
functions that handle asynchronous scenarios in ways similar to
Promise.all or Promise.race. We’ll also explore how and why to combine
Promise with the iterator pattern, lazy evaluation, and list processing.

How to Defer a Promise’s Execution?

Recall the code from 4.1. It runs all six Promises in parallel, then
returns an array of results:

[Code 4-13] Running All in Parallel with Promise.all

1 type File = {
2 name: string;
3 body: string;
4 size: number;
5 };
6
7 function getFile(name: string, size = 1000): Promise<File> {
8 return delay(size, { name, body: '...', size });
9 }
10
11 async function test() {
12 const files: File[] = await Promise.all([
13 getFile('img.png'),
14 getFile('book.pdf'),
15 getFile('index.html'),
16 getFile('img.png'),
17 getFile('book.pdf'),
18 getFile('index.html'),
19 ]);
20
21 console.log(files);
22 }
23
24 await test();
25 // ~1000ms later:
26 // [
27 // { name: 'img.png', body: '...', size: 1000 },
28 // { name: 'book.pdf', body: '...', size: 1000 },
29 // { name: 'index.html', body: '...', size: 1000 },
30 // { name: 'img.png', body: '...', size: 1000 },
31 // { name: 'book.pdf', body: '...', size: 1000 },
32 // { name: 'index.html', body: '...', size: 1000 },
33 // ]

In [Code 4-13], Promise.all starts all 6 promises immediately, in parallel. But suppose we want to control the load—for instance, let’s run the 6 tasks in two batches of 3 each. Then we’d need a custom concurrency function.

[Code 4-14] executeWithLimit(promises, limit)

1 async function executeWithLimit<T>(
2 promises: Promise<T>[],
3 limit: number
4 ): Promise<T[]> {
5 const result1 = await Promise.all([promises[0], promises[1],
promises[2]]);
6 const result2 = await Promise.all([promises[3], promises[4],
promises[5]]);
7 return [
8 ...result1,
9 ...result2
10 ];
11 }
12
13 async function test() {
14 const files: File[] = await executeWithLimit([
15 getFile('img.png'),
16 getFile('book.pdf'),
17 getFile('index.html'),
18 getFile('img.png'),
19 getFile('book.pdf'),
20 getFile('index.html'),
21 ], 3);
22
23 console.log(files);
24 }
25
26 await test();
27 // ~1000ms later:
28 // [
29 // { name: 'img.png', body: '...', size: 1000 },
30 // { name: 'book.pdf', body: '...', size: 1000 },
31 // { name: 'index.html', body: '...', size: 1000 },
32 // { name: 'img.png', body: '...', size: 1000 },
33 // { name: 'book.pdf', body: '...', size: 1000 },
34 // { name: 'index.html', body: '...', size: 1000 },
35 // ]
To verify whether our implementation approach is correct, we first wrote a hard-coded example like [Code 4-14]. We expected it to take around 2,000 ms, but it doesn’t work as intended. Even though we intended to run three Promises in parallel first, wait, then run the next three Promises, the actual runtime was around 1,000 ms, and the result was effectively the same as Promise.all.

Why did this happen?

1. Immediate Execution of Promises

A Promise starts as soon as it’s created. In other words, when getFile is called, the Promise has already begun.
Consequently, even if you batch them into groups of three, calling Promise.all and using await to wait, all six Promises end up starting at the same time.

2. Meaning of Parallel Execution

Promise.all accepts existing Promise objects that have already begun execution and waits for them to complete, returning an array of results. It doesn’t actually control when a Promise starts.
Even if you call Promise.all twice, each group’s Promises have already started, so it doesn’t affect the total execution time.

In the end, while it might look like we’re batching three promises at a
time, all six promises start immediately, causing the total execution time to
be about 1,000 ms.
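To see this eagerness in isolation (a tiny standalone sketch, not from the book): the executor passed to new Promise runs synchronously, at construction time, before anyone awaits the promise.

```typescript
// The executor runs immediately when the Promise is constructed,
// before any .then() or await touches it.
const log: string[] = [];

const p = new Promise<number>((resolve) => {
  log.push('executor ran'); // happens right now, synchronously
  resolve(42);
});

log.push('promise created');
// At this point log is already ['executor ran', 'promise created'],
// even though p has not been awaited yet.
```

This is exactly why the six getFile calls in [Code 4-14] all started before executeWithLimit had any chance to batch them.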

To illustrate this, in [Code 4-14a] we added a log inside the getFile function:

[Code 4-14a] Adding a Log Inside getFile

1 function getFile(name: string, size = 1000): Promise<File> {
2 console.log(`${name} started`); // Log output when each file download begins
3 return delay(size, { name, body: '...', size });
4 }
5
6 async function executeWithLimit<T>(
7 promises: Promise<T>[],
8 limit: number
9 ): Promise<T[]> {
10 const result1 = await Promise.all([promises[0], promises[1],
promises[2]]);
11 const result2 = await Promise.all([promises[3], promises[4],
promises[5]]);
12 return [
13 ...result1,
14 ...result2
15 ];
16 }
17
18 async function test() {
19 const promises = [
20 getFile('1-img.png'),
21 getFile('2-book.pdf'),
22 getFile('3-index.html'),
23 getFile('4-img2.png'),
24 getFile('5-book.pdf'),
25 getFile('6-index.html'),
26 ];
27 // The following six logs appear simultaneously:
28 // 1-img.png started
29 // 2-book.pdf started
30 // 3-index.html started
31 // 4-img2.png started
32 // 5-book.pdf started
33 // 6-index.html started
34
35 const files: File[] = await executeWithLimit(promises, 3);
36
37 console.log(files);
38 }
39
40 await test();
41 // After about 1,000 ms:
42 // [
43 // { name: '1-img.png', body: '...', size: 1000 },
44 // { name: '2-book.pdf', body: '...', size: 1000 },
45 // ...
46 // { name: '6-index.html', body: '...', size: 1000 },
47 // ]

By adding the log inside getFile and creating the array of Promise
objects one line before passing it to executeWithLimit, you can see that the
downloads for all files have already started before executeWithLimit takes
over the asynchronous situation.

To fix this, you need to arrange the batching before the Promise
objects actually start, and make sure each group runs sequentially. This
requires a way to delay the start of those Promise objects.

Fortunately, preventing a Promise from starting immediately is very straightforward. You only need to add a few symbols.

[Code 4-15] executeWithLimit(fs, limit)

1 async function executeWithLimit<T>(
2 fs: (() => Promise<T>)[],
3 limit: number
4 ): Promise<T[]> {
5 const result1 = await Promise.all([fs[0](), fs[1](), fs[2]()]);
6 const result2 = await Promise.all([fs[3](), fs[4](), fs[5]()]);
7 return [
8 ...result1,
9 ...result2
10 ];
11 }
12
13 async function test() {
14 const files: File[] = await executeWithLimit([
15 () => getFile('1-img.png'),
16 () => getFile('2-book.pdf'),
17 () => getFile('3-index.html'),
18 () => getFile('4-img2.png'),
19 () => getFile('5-book.pdf'),
20 () => getFile('6-index.html'),
21 ], 3);
22
23 console.log(files);
24 }
25
26 await test();
27 // Immediately logs 3 starts
28 // ~1000ms later logs next 3 starts
29 // ...
30 // ~2000ms total

By simply adding () => and () as shown, we were able to solve the problem. This change prevents the Promise from executing immediately.
Wrapping the Promise in a function delays its execution until explicitly
invoked. Although not applicable in this example, closures could be used to
capture and pass external context if needed. This ability to treat both
functions and Promise objects as values unlocks numerous possibilities for
developers.
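As a hedged aside on the closure remark above (the names below are illustrative, not from the book): a deferred function, often called a thunk, captures context from its enclosing scope at creation time and carries it along until the thunk is finally invoked.

```typescript
// A "thunk" is a zero-argument function that defers the work.
// Each thunk closes over `baseUrl` and `name`, captured at creation time.
function makeFileTask(baseUrl: string, name: string): () => Promise<string> {
  return () => {
    // Nothing runs until the thunk is called; the captured context
    // travels with the function as a value.
    return Promise.resolve(`${baseUrl}/${name}`);
  };
}

const task = makeFileTask('https://cdn.example', 'img.png');
// No work has started yet; invoking the thunk starts it:
// task() resolves to 'https://cdn.example/img.png'
```

This is the same shape as the `() => getFile(...)` entries in [Code 4-15], just with the captured context made explicit.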
ChatGPT’s Imperative Implementation of Concurrency – executeWithLimit

We asked ChatGPT to implement executeWithLimit:

The Prompt to ChatGPT

“Write a function that splits fs into groups of limit, then runs each
group in parallel.
async function executeWithLimit<T>(fs: (() => Promise<T>)[], limit: number): Promise<T[]> {}”

ChatGPT’s Answer

It wrote an “imperative” version:

[Code 4-16] ChatGPT’s executeWithLimit

1 async function executeWithLimit<T>(
2 fs: (() => Promise<T>)[],
3 limit: number
4 ): Promise<T[]> {
5 // (1)
6 const results: T[] = [];
7
8 // (2) group the entire array by limit
9 for (let i = 0; i < fs.length; i += limit) {
10 const batchPromises: Promise<T>[] = [];
11
12 // (3) create tasks in groups of limit
13 for (let j = 0; j < limit && (i + j) < fs.length; j++) {
14 batchPromises.push(fs[i + j]());
15 }
16
17 // (4) run each group in parallel, gather results
18 const batchResults = await Promise.all(batchPromises);
19 results.push(...batchResults); // (5)
20
21 // (6) i += limit to move to the next group
22 }
23
24 return results; // (7)
25 }
26
27 async function test() {
28 const files: File[] = await executeWithLimit([
29 () => getFile('1-img.png'),
30 () => getFile('2-book.pdf'),
31 () => getFile('3-index.html'),
32 () => getFile('4-img2.png'),
33 () => getFile('5-book.pdf'),
34 () => getFile('6-index.html'),
35 () => getFile('7-img.png'),
36 ], 3);
37
38 console.log(files);
39 }
40
41 await test();
42 // Logs the same as we want

How it works:

1. A results array for final results.
2. Outer for loop groups the tasks in increments of limit.
3. The inner for collects tasks (calls f()) into batchPromises.
4. Promise.all runs them in parallel, stored in batchResults.
5. We push them into results.
6. i += limit, next batch.
7. Return everything.

Thoughts on ChatGPT’s Code

ChatGPT’s code is correct, but somewhat tricky to read. We have nested loops with conditions like j < limit && (i + j) < fs.length. Next, we’ll rewrite it using functional higher-order functions for clarity and maintainability.

A Functional Implementation of Concurrency – executeWithLimit

As with every/some in chapter 3.4, we plan executeWithLimit in a “list processing” style:

1. [() => P<T>, () => P<T>, () => P<T>, ...]
2. Break into chunks (size=3), e.g. [[() => P<T>, ...], ...]
3. Convert each chunk to [[P<T>, P<T>, P<T>], ...] by calling them
4. Then wrap each chunk in Promise.all([...]) so they run in parallel
by chunk
5. We get [Promise<[T, T, T]>, ...]
6. We flatten them into [T, T, T, T, ...].

chunk(size, iterable)

First of all, it would be handy to have a list-processing function that groups items into arrays of a given size. The chunk function is a list-processing function that divides an iterable into chunks of a specified size. It creates an iterator and yields each chunk.

[Code 4-17] chunk

1 function* chunk<T>(size: number, iterable: Iterable<T>): IterableIterator<T[]> {
2 const iterator = iterable[Symbol.iterator]();
3 while (true) {
4 const arr = [
5 ...take(size, {
6 [Symbol.iterator]() {
7 return iterator;
8 },
9 }),
10 ];
11 if (arr.length) yield arr;
12 if (arr.length < size) break;
13 }
14 }
15
16 class FxIterable<A> {
17 // ... omitted methods ...
18
19 chunk(size: number) {
20 return fx(chunk(size, this));
21 }
22 }
23
24 console.log([...chunk(2, [1, 2, 3, 4, 5])]);
25 // [[1, 2], [3, 4], [5]]
26
27 fx([1, 2, 3, 4, 5])
28 .chunk(2)
29 .map(arr => arr.map(a => a * 10)) // [arr: number[]], [a: number]
30 .forEach(arr => console.log(arr));
31 // [10, 20]
32 // [30, 40]
33 // [50]
1. The chunk function converts the iterable into an iterator and traverses
it, grouping items into arrays of length size.
2. Once there are no more items to group, the loop terminates.
3. The first example breaks an array into chunks of size 2 and prints
them.
4. In the second example, each chunk’s elements are multiplied by 10
after being split by chunk.

Finishing executeWithLimit Starting from chunk

Now, let’s implement and run the executeWithLimit function based on list processing as planned. In [Code 4-18], fromAsync reproduces the core functionality of Array.fromAsync, which is introduced in ECMAScript but not yet in TypeScript.

[Code 4-18] executeWithLimit

1 async function fromAsync<T>(
2 iterable: Iterable<Promise<T>> | AsyncIterable<T>
3 ): Promise<T[]> {
4 const arr: T[] = [];
5 for await (const a of iterable) {
6 arr.push(a);
7 }
8 return arr;
9 }
10
11 const executeWithLimit = <T>(fs: (() => Promise<T>)[], limit: number):
Promise<T[]> =>
12 fx(fs)
13 .chunk(limit) // [[() => P<T>, () => P<T>, ...], ...]
14 .map(fs => fs.map(f => f())) // [[P<T>, P<T>, P<T>], ...]
15 .map(ps => Promise.all(ps)) // [P<[T, T, T]>, ...]
16 .to(fromAsync) // P<[[T, T, T], ...]>
17 .then(arr => arr.flat()); // P<[T, T, T, T, ...]>
18
19 async function test() {
20 const files: File[] = await executeWithLimit([
21 () => getFile('1-img.png'),
22 () => getFile('2-book.pdf'),
23 () => getFile('3-index.html'),
24 () => getFile('4-img2.png'),
25 () => getFile('5-book.pdf'),
26 () => getFile('6-index.html'),
27 () => getFile('7-img.png'),
28 ], 3);
29
30 console.log(files);
31 }
32
33 await test();
34 // Console output:
35 // Same as [Code 4-16]

1. The first three file downloads begin immediately (1-img.png, 2-book.pdf, 3-index.html).
2. After about 1,000 ms, the next three begin (4-img2.png, 5-book.pdf, 6-
index.html).
3. After about 2,000 ms, the final file download begins (7-img.png).
4. After about 3,000 ms, all downloads complete, and the results are
printed.

Everything works correctly, and the code is now easier to read and
understand.

The Stepping Stone to Effective Asynchronous Handling – Laziness

In [Code 4-18], we used list processing and laziness to manage asynchronous tasks. Although [Code 4-16] and [Code 4-18] have the same behavior and time complexity, [Code 4-18] offers superior readability and maintainability.

Unlike [Code 4-16], [Code 4-18] contains no i++ or j++ state changes,
nor does it have conditions like j < limit && (i + j) < fs.length.
Instead of imperative expressions like push(fs[i + j]()) or
push(...batchResults), we employ more declarative constructs like
fromAsync or arr.flat().

It’s hard to test code like [Code 4-16] in parts, whereas in [Code 4-18] each stage of the list transformation is already separated, making it easier to test individual parts. Also, even in an asynchronous context, you can more quickly gain confidence that the code will follow the intended flow.

The core of our executeWithLimit implementation lies in laziness. Laziness serves as a stepping stone toward effective asynchronous handling. Revisiting Section 3.3 on the order of lazy evaluation will help clarify both the code and explanation:

1. The executeWithLimit function accepts an array of functions that, when called, return a Promise.
2. map(fs => fs.map(f => f())) might look like it executes all these
functions at once, but in fact, this map is lazily evaluated—unlike the
built-in array map.
3. Therefore, when fromAsync consumes one element from the iterator, it
only executes the functions in that chunk, and then the next map wraps
them with Promise.all.
4. Inside fromAsync, for await...of retrieves and waits for the results
from Promise.all.
5. As a result, fromAsync sequentially evaluates the lazy asynchronous
tasks and collects them into an array.
6. This behavior, and the code structure that enables it, is possible because we built a list of “lazily evaluated asynchronous tasks” before calling fromAsync.
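This on-demand execution can be observed directly with a small self-contained sketch (standalone generator versions of chunk and a lazy map, not the book's fx class) that logs the moment each task function actually runs:

```typescript
// Standalone sketch: tasks grouped lazily run only when the consumer
// pulls each chunk, never eagerly at pipeline-construction time.
function* chunk<T>(size: number, iterable: Iterable<T>): IterableIterator<T[]> {
  const iterator = iterable[Symbol.iterator]();
  while (true) {
    const arr: T[] = [];
    let result: IteratorResult<T>;
    while (arr.length < size && !(result = iterator.next()).done) {
      arr.push(result.value);
    }
    if (arr.length) yield arr;
    if (arr.length < size) break;
  }
}

function* mapLazy<A, B>(f: (a: A) => B, iterable: Iterable<A>): IterableIterator<B> {
  for (const a of iterable) yield f(a);
}

const log: string[] = [];
const fs = [1, 2, 3, 4].map(n => () => {
  log.push(`run ${n}`); // records the moment each task starts
  return Promise.resolve(n);
});

const batches = mapLazy(fns => fns.map(f => f()), chunk(2, fs));

log.push('before consumption'); // nothing has run yet
for (const ps of batches) {
  log.push(`got batch of ${ps.length}`); // the tasks of this chunk ran just before this line
}
console.log(log);
// ['before consumption', 'run 1', 'run 2', 'got batch of 2',
//  'run 3', 'run 4', 'got batch of 2']
```

Note that 'before consumption' appears first even though the pipeline was built earlier: each pair of tasks starts only when its chunk is pulled, which is exactly the property executeWithLimit relies on.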

Using Laziness Even More Effectively: Simplifying with Function Composition

Because chunk and map are lazily evaluated, we can make the code
even simpler by rearranging their order, as shown in [Code 4-19], while
achieving the same behavior.

[Code 4-19] executeWithLimit – Rearranged Order

1 const executeWithLimit = <T>(fs: (() => Promise<T>)[], limit: number): Promise<T[]> =>
2 fx(fs)
3 .map(f => f()) // [P<T>, P<T>, P<T>, ...]
4 .chunk(limit) // [[P<T>, P<T>, P<T>], ...]
5 .map(ps => Promise.all(ps)) // [P<[T, T, T]>, ...]
6 .to(fromAsync) // P<[[T, T, T], ...]>
7 .then(arr => arr.flat()); // P<[T, T, T, T, ...]>

If you clearly understand when map(f => f()) is evaluated and what triggers that evaluation, you can predict exactly when each f() will run.
1. Nothing executes until we reach this code: fx(fs).map(f =>
f()).chunk(limit).map(ps => Promise.all(ps)).
2. The moment the iterator is first consumed in fromAsync’s for
await...of, three of these fs are executed, batched, and passed to
Promise.all.

Compared to [Code 4-18], [Code 4-19] uses laziness more effectively to handle a two-dimensional list in a strategic manner, ultimately managing parallel asynchronous scenarios with concise code.

Depending on the problem, an imperative paradigm with new Promise or async/await might be more suitable in certain parts. However, for most
asynchronous contexts, leveraging laziness and list processing proves more
effective, safer, and better aligned with typical use cases. Practicing treating
Promise as a value through lazy evaluation and list processing can enhance
your ability to handle diverse asynchronous scenarios in real-world
development. This becomes a powerful solution for many backend
programming challenges.

Laziness, in this sense, is a significant feature of the language. Far from being just a tool for performance improvement or optimization, the
code pattern of evaluating iterators and passed-in first-class functions only
at the needed time empowers us to make logic reusable. Employing
laziness, first-class functions, and the iterator pattern through list processing
forms a robust technique and a linguistic expression.

4.3 Treating Asynchrony as a Type


In this chapter, we will cover patterns for handling asynchrony based
on types, interfaces, and protocols. In TypeScript, the value at the very
foundation of asynchronous programming, Promise, can also be considered
a type. Going beyond that, we will look into the AsyncIterator and
AsyncIterable types and protocols, and then explore how to handle these
types via AsyncGenerator and asynchronous higher-order functions, as well
as a combination and design of asynchronous list-processing classes. Let us
now delve into asynchronous programming based on the type system and
list processing in concrete detail.
AsyncIterator, AsyncIterable, AsyncGenerator Protocols

JavaScript supports sequential processing of asynchronous tasks via protocols such as AsyncIterator, AsyncIterable, and AsyncGenerator.
These protocols make asynchronous tasks more efficient and flexible,
allowing each element to be generated and consumed asynchronously.

AsyncIterator, AsyncIterable Interfaces

Below is the structure of AsyncIterator, AsyncIterable, and AsyncIterableIterator expressed through TypeScript interface definitions.

[Code 4-20] AsyncIterator, AsyncIterable, AsyncIterableIterator

1 interface IteratorYieldResult<T> {
2 done?: false;
3 value: T;
4 }
5
6 interface IteratorReturnResult {
7 done: true;
8 value: undefined;
9 }
10
11 interface AsyncIterator<T> {
12 next(): Promise<IteratorYieldResult<T> | IteratorReturnResult>;
13 }
14
15 interface AsyncIterable<T> {
16 [Symbol.asyncIterator](): AsyncIterator<T>;
17 }
18
19 interface AsyncIterableIterator<T> extends AsyncIterator<T> {
20 [Symbol.asyncIterator](): AsyncIterableIterator<T>;
21 }

1. IteratorYieldResult<T>

Represents a value of type T when done is false. Here, this interface indicates that the AsyncIterator has not yet completed.

2. IteratorReturnResult
Represents a value of type undefined when done is true. Here, this
interface indicates that the AsyncIterator is completed.

3. AsyncIterator<T>

An interface with a next method that returns a Promise. This Promise resolves to either IteratorYieldResult<T> or IteratorReturnResult.
Used to return the next value asynchronously.

4. AsyncIterable<T>

An interface with the Symbol.asyncIterator method that returns an AsyncIterator<T>.
By implementing this interface, an object is declared to be
asynchronously iterable.

5. AsyncIterableIterator<T>

An interface that extends AsyncIterator<T> and additionally implements the Symbol.asyncIterator method.
This interface represents an iterator that can be iterated over
asynchronously, and the iterator itself is an AsyncIterable.

These TypeScript interfaces make JavaScript’s asynchronous iterators and iterables explicit and type-safe. Just as you can traverse
Iterable via a for...of statement, you can traverse an AsyncIterable
object asynchronously using the for await...of statement.

[Code 4-20] is deliberately shortened to succinctly convey the core content of this chapter. If you write it as is in an actual editor, duplicate
type declarations or inference errors might occur. To see the officially
provided Iterator interfaces in TypeScript, you can refer to the
lib.es2018.asynciterable.d.ts file.
Basic Syntax of AsyncGenerator

[Code 4-21] is an example that lets us see firsthand the basic syntax
and characteristics of AsyncGenerator. AsyncGenerator provides
functionality for generating values asynchronously and processing them
sequentially. The stringsAsyncTest function is an AsyncGenerator that
asynchronously generates strings.

[Code 4-21] stringsAsyncTest

1 async function* stringsAsyncTest(): AsyncIterableIterator<string> {
2 yield delay(1000, 'a');
3
4 const b = await delay(500, 'b') + 'c'; // AsyncGenerator can use await.
5
6 yield b;
7 }
8
9 async function test() {
10 const asyncIterator: AsyncIterableIterator<string> = stringsAsyncTest();
11 const result1 = await asyncIterator.next();
12 console.log(result1.value); // After about 1,000ms: a
13
14 const result2 = await asyncIterator.next();
15 console.log(result2.value); // Another 500ms later: bc
16
17 const { done } = await asyncIterator.next();
18 console.log(done); // true
19 }
20
21 await test();

The stringsAsyncTest function uses the delay function so that a value is returned after a certain period of time. The first yield
delay(1000, 'a') returns 'a' after 1,000 ms. const b = await
delay(500, 'b') + 'c' awaits 500 ms to obtain 'b' and then appends 'c'
to assign to b. The second yield b returns 'bc'.
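The delay helper used here (and throughout this chapter) is not shown in this section; it presumably comes from an earlier chapter, along the lines of the minimal sketch below: a Promise that resolves with the given value after the given number of milliseconds.

```typescript
// Hedged sketch of the delay helper assumed by the surrounding examples
// (the book's actual definition may differ slightly).
function delay<T>(time: number, value: T): Promise<T> {
  return new Promise(resolve => setTimeout(() => resolve(value), time));
}

// e.g. await delay(1000, 'a') produces 'a' after about 1,000 ms.
```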

The test function creates an AsyncIterableIterator<string> from the stringsAsyncTest function. asyncIterator.next() returns a Promise which, when awaited, yields { value, done }. The first yield returns 'a' after about 1,000 ms. Then, result2 = await asyncIterator.next() returns 'bc' after another 500 ms. Finally, the next call to next confirms that done is true, indicating that iteration is complete.

The toAsync Function

The toAsync function takes an Iterable containing synchronous values or Promise values and converts it into an AsyncIterable that can be
processed asynchronously. There are two ways to implement this. The first
is by directly implementing AsyncIterator, and the second is by using an
AsyncGenerator.

[Code 4-22] function toAsync() {}

1 function toAsync<T>(iterable: Iterable<T | Promise<T>>): AsyncIterable<Awaited<T>> {
2 return {
3 [Symbol.asyncIterator](): AsyncIterator<Awaited<T>> {
4 const iterator = iterable[Symbol.iterator]();
5 return {
6 async next() {
7 const { done, value } = iterator.next();
8 return done ? { done, value } : { done, value: await value };
9 }
10 };
11 }
12 };
13 }
14
15 async function test() {
16 const asyncIterable = toAsync([1]);
17 const asyncIterator = asyncIterable[Symbol.asyncIterator]();
18 await asyncIterator.next().then(({ value }) => console.log(value));
19 // 1
20
21 const asyncIterable2 = toAsync([Promise.resolve(2)]);
22 const asyncIterator2 = asyncIterable2[Symbol.asyncIterator]();
23 await asyncIterator2.next().then(({ value }) => console.log(value));
24 // 2
25 }
26
27 await test();

The toAsync function takes as an argument an Iterable that contains synchronous values (T) or a Promise (Promise<T>). It returns
AsyncIterable<Awaited<T>>. Inside the function, it implements the
Symbol.asyncIterator method to return an asynchronous iterator. The
next method of this asynchronous iterator calls the internal iterator’s
next method to retrieve a value. If done is true, it returns { done, value
} to indicate completion. If done is false, it returns the awaited value with
{ done, value: await value }.

In the first example of the test function, we pass an array [1]—which contains a synchronous value—to toAsync. When asyncIterator.next()
is called, it immediately returns { done: false, value: 1 }. As a result,
the console prints 1.

In the second example, we pass an array containing a Promise ([Promise.resolve(2)]) to toAsync. When asyncIterator.next() is
called, it waits until the Promise completes, then returns { done: false,
value: 2 }. Consequently, the console prints 2.

Below is the same function implemented using an AsyncGenerator.

[Code 4-23] async function* toAsync() {}

1 async function* toAsync<T>(
2 iterable: Iterable<T | Promise<T>>
3 ): AsyncIterableIterator<Awaited<T>> {
4 for await (const value of iterable) {
5 yield value;
6 }
7 }
8
9 async function test() {
10 const asyncIterable = toAsync([1]);
11 const asyncIterator = asyncIterable[Symbol.asyncIterator]();
12 await asyncIterator.next().then(({ value }) => console.log(value));
13 // 1
14
15 const asyncIterable2 = toAsync([Promise.resolve(2)]);
16 const asyncIterator2 = asyncIterable2[Symbol.asyncIterator]();
17 await asyncIterator2.next().then(({ value }) => console.log(value));
18 // 2
19 }
20
21 await test();

The behavior and outcome of the test function in [Code 4-23] are the same as in the test function of [Code 4-22].

In the case of toAsync, using an AsyncGenerator yields simpler, shorter, and more intuitive code. In this situation, among the object-oriented, imperative, and functional paradigms, we want to note that the imperative approach is the most suitable. Selecting the right paradigm for each problem can lead to writing better code and achieving high maintainability.

The result of the toAsync function can be used along with for
await...of.

[Code 4-24] Using for await…of

1 async function test() {
2 // (1)
3 for await (const a of toAsync([1, 2])) {
4 console.log(a);
5 }
6 // 1
7 // 2
8
9 // (2)
10 for await (const a of toAsync([Promise.resolve(1), Promise.resolve(2)])) {
11 console.log(a);
12 }
13 // 1
14 // 2
15
16 // (3) for await...of can also iterate over Iterable<A>.
17 for await (const a of [1, 2]) {
18 console.log(a);
19 }
20 // 1
21 // 2
22
23 // (4) In fact, for await...of can also iterate over Iterable<Promise<A>>.
24 for await (const a of [Promise.resolve(1), Promise.resolve(2)]) {
25 console.log(a);
26 }
27 // 1
28 // 2
29 }
30
31 await test();

You may not see the necessity of toAsync at the moment, especially
seeing that the third and fourth examples (3) and (4) in [Code 4-24] work
perfectly fine. But in the upcoming sections, the toAsync function will
increasingly demonstrate its importance in handling asynchrony as a type.

To briefly explain in advance: this function converts a regular Iterable into an AsyncIterable not only to process the values at runtime,
but also to declare that the type will change at compile time. By executing
the toAsync function, you effectively declare at compile time that the
values will henceforth be handled asynchronously, and this can be inferred
in various parts of your code. We will explore how this feature is utilized in
asynchronous programming in subsequent sections.
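As a small self-contained illustration of this compile-time effect (restating the generator-based toAsync from this section so the snippet stands alone), the comments show the types the compiler infers once toAsync is applied:

```typescript
// toAsync declares, at the type level, that values are now asynchronous:
// the result is an AsyncIterable, and Promise elements are unwrapped.
async function* toAsync<T>(
  iterable: Iterable<T | Promise<T>>
): AsyncIterableIterator<Awaited<T>> {
  for await (const value of iterable) {
    yield value;
  }
}

async function main(): Promise<number[]> {
  const source = [1, Promise.resolve(2)]; // (number | Promise<number>)[]
  const asyncIter = toAsync(source);      // AsyncIterableIterator<number> (Promise unwrapped)
  const out: number[] = [];
  for await (const n of asyncIter) {
    out.push(n); // n is inferred as number, not number | Promise<number>
  }
  return out;
}

main().then(out => console.log(out)); // [1, 2]
```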

Higher-Order Functions for AsyncIterable

Next, let us create some higher-order functions that work with AsyncIterable; they help process asynchronous tasks efficiently. In this section, we will explore patterns for creating such functions by looking at mapAsync, which directly implements AsyncIterableIterator, and filterAsync, which uses an AsyncGenerator.

mapAsync Implemented by Directly Implementing AsyncIterator

mapSync is the same code we implemented in [Code 1-39], just with types applied and renamed. mapSync deals with Iterable, and mapAsync
deals with AsyncIterable. The next example helps understand how
mapAsync works by comparing it to mapSync.

[Code 4-25] mapSync and mapAsync

1 function mapSync<A, B>(
2 f: (a: A) => B,
3 iterable: Iterable<A>
4 ): IterableIterator<B> {
5 const iterator = iterable[Symbol.iterator]();
6 return {
7 next() {
8 const { done, value } = iterator.next();
9 return done
10 ? { done, value }
11 : { done, value: f(value) }; // [value: B], [const value: A]
12 },
13 [Symbol.iterator]() {
14 return this;
15 }
16 };
17 }
18
19 function mapAsync<A, B>(
20 f: (a: A) => B,
21 asyncIterable: AsyncIterable<A>
22 ): AsyncIterableIterator<Awaited<B>> {
23 const asyncIterator = asyncIterable[Symbol.asyncIterator]();
24 return {
25 async next() {
26 const { done, value } = await asyncIterator.next();
27 return done
28 ? { done, value }
29 : { done, value: await f(value) }; // [value: Awaited<B>] [value: A]
30 },
31 [Symbol.asyncIterator]() {
32 return this;
33 }
34 };
35 }
36
37 async function* strings(): AsyncIterableIterator<string> {
38 yield delay(500, 'a');
39 yield delay(200, 'b');
40 }
41
42 const mapped = mapAsync(a => a.toUpperCase(), strings()); // [a: string]
43
44 for await (const a of mapped) {
45 console.log(a); // [const a: string]
46 }
47 // After 500ms: A
48 // Another 200ms later: B

mapAsync is a function that directly implements AsyncIterableIterator to process asynchronously generated values and is declared as follows:

1. f: a function that transforms the input value of type A into type B. Note
that B can also be a Promise.
2. asyncIterable: an object that can be iterated asynchronously.
3. AsyncIterableIterator<Awaited<B>>: Awaited<B> refers to the type
of the result when B is a Promise.

The function is implemented as follows:


1. Inside mapAsync, we call the Symbol.asyncIterator method on
asyncIterable to create an asyncIterator.
2. The next method calls asyncIterator.next() to fetch the next value
asynchronously.
3. We await the result to get the done status and the value.
4. If done is true, it returns { done, value } directly, indicating
iteration is complete.
5. If done is false, it calls f(value), waits for the result with await, and
returns { done, value: await f(value) }.
6. It also implements the Symbol.asyncIterator method to represent
that it is an AsyncIterable. Through this, it can be used in a for
await...of loop, etc.

In essence, mapSync and mapAsync share the same logic and data flow.
mapAsync works similarly to mapSync but is designed to handle
asynchronous iterables. This allows you to transform and process values
asynchronously, and you can carry out repetitive tasks in many
asynchronous codes the same way you do with mapSync.

Implementing mapAsync With AsyncGenerator

Below is mapAsync implemented using an AsyncGenerator. Using a generator allows for the concise implementation shown below.

[Code 4-26] async function* mapAsync() {}

1 async function* mapAsync<A, B>(
2 f: (a: A) => B,
3 asyncIterable: AsyncIterable<A>
4 ): AsyncIterableIterator<Awaited<B>> {
5 for await (const value of asyncIterable) {
6 yield f(value);
7 }
8 }

This code behaves exactly like the mapAsync of [Code 4-25] and returns AsyncIterableIterator<Awaited<B>>. By using an AsyncGenerator, it
applies the function f to each element of the asynchronous iterable and
returns the result asynchronously. Within the for await...of loop, it
iterates over each value of asyncIterable and yields the transformed value
via f(value) as Awaited<B>.

We could have implemented mapAsync as a generator from the start, but when writing a generator, you should always keep in mind how the code maps onto an iterator: each yield becomes the value of a next() call, and return; becomes { done: true }. We have provided both implementations so that this can serve as practice in making that mental mapping.
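That mental mapping can be checked directly with a tiny generator (a hypothetical abc, not from the book):

```typescript
// Each `yield` becomes the value of one next() call, and the end of the
// generator body surfaces as { done: true } on the following call.
function* abc(): IterableIterator<string> {
  yield 'a';
  yield 'b';
  return; // surfaces as done: true (with value: undefined) on the next call
}

const it = abc();
console.log(it.next().value); // 'a'
console.log(it.next().value); // 'b'
console.log(it.next().done);  // true
```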

Using mapAsync With toAsync

To run the mapAsync function we’ve implemented, we need to provide an AsyncIterable, for example one produced by an AsyncGenerator. Even if we only want to test simple values, we can use toAsync, because mapAsync only accepts an AsyncIterable.

[Code 4-27] Using mapAsync With toAsync

1 async function* numbers(): AsyncIterableIterator<number> {
2 yield 1;
3 yield 2;
4 }
5
6 for await (const a of mapAsync(a => a * 2, numbers())) {
7 console.log(a);
8 }
9 // 2
10 // 4
11
12 for await (const a of mapAsync(a => a * 2, toAsync([1, 2]))) {
13 console.log(a);
14 }
15 // 2
16 // 4
17
18 for await (const a of mapAsync(a => delay(100, a * 2), toAsync([1, 2]))) {
19 console.log(a);
20 }
21 // After 100ms: 2
22 // Another 100ms later: 4
filterAsync Created With AsyncGenerator

Let us now look at how to create a filterAsync function using an AsyncGenerator. We will again compare it with filterSync to help clarify.

[Code 4-28] filterSync and filterAsync

1 function* filterSync<A>(
2 f: (a: A) => boolean,
3 iterable: Iterable<A>
4 ): IterableIterator<A> {
5 for (const value of iterable) {
6 if (f(value)) {
7 yield value;
8 }
9 }
10 }
11
12 async function* filterAsync<A>(
13 f: (a: A) => boolean | Promise<boolean>,
14 asyncIterable: AsyncIterable<A>
15 ): AsyncIterableIterator<A> {
16 for await (const value of asyncIterable) {
17 if (await f(value)) {
18 yield value;
19 }
20 }
21 }
22
23 for await (const a of filterAsync(a => a % 2 === 1, toAsync([1, 2, 3]))) {
24 console.log(a);
25 }
26 // 1
27 // 3
28
29 for await (const a of filterAsync(a => delay(100, a % 2 === 1), toAsync([1, 2, 3]))) {
30 console.log(a);
31 }
32 // After 100ms: 1
33 // Another 200ms later: 3

filterAsync is a function for asynchronous filtering, taking an AsyncIterable object and returning only those values that satisfy the
predicate function f via yield. Through the for await...of loop, it
iterates over values of asyncIterable, calls the predicate function f for
each value, and yields the value only if it’s true after await.
In the first example, toAsync([1, 2, 3]) turns the array [1, 2, 3]
into an AsyncIterable, and for each value, it applies the predicate function
(a => a % 2 === 1) to filter out only the odd numbers, printing 1 and 3.

The second example uses an asynchronous predicate function. For each value, delay(100, a % 2 === 1) is invoked, adding a 100 ms delay
before checking the condition. Each condition is checked asynchronously,
and only odd numbers are filtered out, ultimately printing 1 after 100 ms
and 3 after another 200 ms.

A Convention for Functions Supporting Both Sync and Async – toAsync

We previously noted that the toAsync function plays a “very important role in handling asynchrony as a type,” that it “converts an ordinary
Iterable to an AsyncIterable at runtime,” and that “by running toAsync,
we declare at compile time that we will deal with values asynchronously.”

The signature of toAsync is <T>(iterable: Iterable<T | Promise<T>>) => AsyncIterable<Awaited<T>>: it converts a regular Iterable<T>, or an Iterable<Promise<T>> consisting of Promise values, into an AsyncIterable<Awaited<T>>.

map That Supports Both Synchronous and Asynchronous

mapSync takes an Iterable<A> as an argument, while mapAsync takes an AsyncIterable<A> as an argument.

1 type MapSync = <A, B>(
2 f: (a: A) => B,
3 iterable: Iterable<A>
4 ) => IterableIterator<B>;
5
6 type MapAsync = <A, B>(
7 f: (a: A) => B,
8 asyncIterable: AsyncIterable<A>
9 ) => AsyncIterableIterator<Awaited<B>>;
In TypeScript, you can use function overloads, which allow a single
function to perform two or more roles based on the parameter types. This
makes it clear at compile time through type inference which function will
be chosen and executed.

For instance, if mapSync and mapAsync have the same signatures and
purpose, and their internal operations and flow are identical, you can unify
these two functions into one. This allows more polymorphism while
keeping the code concise and safe.

[Code 4-29] Applying Type-Based Function Overloads to map

1 function isIterable<T = unknown>(a: Iterable<T> | unknown): a is Iterable<T> {
2 return typeof a?.[Symbol.iterator] === "function";
3 }
4
5 function map<A, B>(
6 f: (a: A) => B,
7 iterable: Iterable<A>
8 ): IterableIterator<B>;
9 function map<A, B>(
10 f: (a: A) => B,
11 asyncIterable: AsyncIterable<A>
12 ): AsyncIterableIterator<Awaited<B>>;
13 function map<A, B>(
14 f: (a: A) => B,
15 iterable: Iterable<A> | AsyncIterable<A>
16 ): IterableIterator<B> | AsyncIterableIterator<Awaited<B>> {
17 return isIterable(iterable)
18 ? mapSync(f, iterable) // [iterable: Iterable<A>]
19 : mapAsync(f, iterable); // [iterable: AsyncIterable<A>]
20 }

isIterable checks whether the given value is iterable, helping ensure type safety in the code.

By using the type guard (a is Iterable<T>), the TypeScript compiler knows that a is Iterable<T> when this function returns true.
The optional chaining operator (?.) checks if a is not null or
undefined before accessing the Symbol.iterator property and
verifying if it is a function.
map unifies the signatures of mapSync and mapAsync using function overloads and implements them in a single function.

1. The first signature processes synchronous iterables.

Parameter 1: function f transforms each element of Iterable<A> into B.
Parameter 2: iterable is a synchronous iterable of type Iterable<A>.
Return value: an IterableIterator<B>.

2. The second signature processes asynchronous iterables.

Parameter 1: function f transforms each element of AsyncIterable<A> into B, and we plan to unwrap the result with await as Awaited<B>.
Parameter 2: asyncIterable is an asynchronous iterable of type
AsyncIterable<A>.
Return value: an AsyncIterableIterator<Awaited<B>>.

3. The actual implementation of map unifies the two signatures.

It checks the type of iterable using isIterable(iterable).
If true, it calls mapSync(f, iterable) under the assumption that it’s
a synchronous iterable.
If false, it calls mapAsync(f, iterable), assuming it’s an
asynchronous iterable.
As mentioned in the comments, type narrowing ensures that the value
is passed safely to each function.

Below is an example showing how TypeScript’s type system recognizes that the map function can handle both synchronous and
asynchronous iterables. Each line of code shows a different usage scenario
of the map function, confirming that type inference works as intended.

[Code 4-30] Compile-Time of map Supporting Both Sync and Async

1 const iter1: IterableIterator<string> = map(
2 (a: number) => a.toFixed(),
3 [1, 2]
4 );
5
6 const iter2: IterableIterator<Promise<string>> = map(
7 (a: number) => Promise.resolve(a.toFixed()),
8 [1, 2]
9 );
10
11 const iter3: AsyncIterableIterator<string> = map(
12 (a: number) => a.toFixed(),
13 toAsync([1, 2])
14 );
15
16 const iter4: AsyncIterableIterator<string> = map(
17 (a: number) => Promise.resolve(a.toFixed()),
18 toAsync([1, 2])
19 );
20
21 const iter5: AsyncIterableIterator<string> = map(
22 (a: number) => a.toFixed(),
23 toAsync([Promise.resolve(1), Promise.resolve(2)])
24 );
25
26 const iter6: AsyncIterableIterator<string> = map(
27 (a: number) => Promise.resolve(a.toFixed()),
28 toAsync([Promise.resolve(1), Promise.resolve(2)])
29 );

1. In iter1 and iter2, map operates as mapSync, and the helper function’s
return value becomes the elements of an IterableIterator<string>.
2. Consequently, the type of iter2 is handled as
IterableIterator<Promise<string>>.
3. From iter3 onward to iter6, map operates as mapAsync, and the type
is inferred as AsyncIterableIterator<string>.
4. For all cases where mapAsync is used, the argument a is inferred as
number after unwrapping any Promise<number>, ensuring it is passed
correctly.
5. All 6 cases here work properly at both runtime and compile time as
intended.

In this code, we make heavy use of function overloads and type inference from the TypeScript type system. The difference between iter2
and iter3–iter6 is that in iter3–iter6, we explicitly pass an
asynchronous iterable to the second parameter by using toAsync. The type
system uses this to decide whether the function behaves synchronously or
asynchronously, and the compiler also guarantees correctness.
Until now, we have shown a few examples of what it means to treat
asynchrony as a type and what this concept entails. Through the remaining
examples and explanations in this chapter, we will further delve into how
we can control asynchronous scenarios in a type-driven manner.

[Code 4-31] Runtime of map Supporting Both Sync and Async

1 async function test() {
2 // (1) Synchronous array processing, operates as mapSync
3 console.log([...map(a => a * 10, [1, 2])]);
4 // [10, 20]
5
6 // (2) Asynchronous iterable processing, operates as mapAsync
7 for await (const a of map(a => delay(100, a * 10), toAsync([1, 2]))) {
8 console.log(a);
9 }
10 // After 100ms: 10
11 // Another 100ms later: 20
12
13 // (3) Convert asynchronous iterable to an array, mapAsync + fromAsync
14 console.log(
15 await fromAsync(map(a => delay(100, a * 10), toAsync([1, 2])))
16 );
17 // After 200ms: [10, 20]
18
19 // (4) Process synchronous array asynchronously, mapSync + Promise.all
20 console.log(
21 await Promise.all(map(a => delay(100, a * 10), [1, 2]))
22 );
23 // After 100ms: [10, 20]
24 }
25
26 await test();

1. Synchronous array processing, operates as mapSync

A synchronous array [1, 2] is passed to mapSync, and the function a * 10 is applied to each element, then printed.
Since mapSync returns a synchronous iterator, we immediately get the
result [10, 20].

2. Asynchronous iterable processing, operates as mapAsync

toAsync creates an asynchronous iterable, and we apply delay(100, a * 10) to each element through mapAsync.
The for await...of loop asynchronously receives each element after
a 100 ms delay, sequentially printing 10 and 20.

3. Convert an asynchronous iterable to an array, mapAsync + fromAsync

We convert the asynchronous iterable created by mapAsync to an array using fromAsync.
After all elements are processed, [10, 20] is printed after 200 ms.

4. Process a synchronous array asynchronously, mapSync + Promise.all

For the synchronous array [1, 2], mapSync applies delay(100, a * 10) to each element, and Promise.all waits for all asynchronous tasks
to complete.
[10, 20] appears after 100 ms.

These examples demonstrate that a single map function can effectively handle both synchronous and asynchronous iterables. Example (4) in
particular shows that you can handle asynchronous tasks in parallel using
Promise.all instead of deliberately controlling them in a sequential
manner. mapSync and mapAsync each fit neatly into the right scenarios,
letting you flexibly handle synchronous and asynchronous contexts.

filter That Supports Both Sync and Async

[Code 4-32] Applying Type-Based Function Overloads to filter

1 function filter<A>(
2 f: (a: A) => boolean,
3 iterable: Iterable<A>
4 ): IterableIterator<A>;
5 function filter<A>(
6 f: (a: A) => boolean | Promise<boolean>,
7 asyncIterable: AsyncIterable<A>
8 ): AsyncIterableIterator<A>;
9 function filter<A>(
10 f: (a: A) => boolean | Promise<boolean>,
11 iterable: Iterable<A> | AsyncIterable<A>
12 ): IterableIterator<A> | AsyncIterableIterator<A> {
13 return isIterable(iterable)
14 ? filterSync(f as (a: A) => boolean, iterable)
15 : filterAsync(f, iterable);
16 }
We have implemented filter in the same pattern as the unified map
function supporting both synchronous and asynchronous iterables. filter
can handle both synchronous and asynchronous iterables. One distinctive
point here is that when iterable is Iterable, type narrowing works with
the type guard, so we can pass it safely to filterSync, but because we do
not perform type checks on f, we must use the as keyword to clarify the
type of f as (a: A) => boolean.

Below is an example demonstrating that the filter function can handle both synchronous and asynchronous iterables, type-system-wise, in
TypeScript.

[Code 4-33] Compile-Time of filter Supporting Both Sync and Async

1 // (1) Synchronous filtering, matches filterSync
2 const iter1: IterableIterator<number> = filter(
3 (a: number) => a % 2 === 1,
4 [1, 2]
5 );
6
7 // (2) Function matching failure
8 // Error TS2769: No overload matches this call.
9 const iter2 = filter(
10 (a: number) => Promise.resolve(a % 2 === 1), // Error
11 [1, 2] // If it's Iterable, the function must return boolean
12 );
13
14 // (3) Asynchronous iterable filtering, matches filterAsync
15 const iter3: AsyncIterableIterator<number> = filter(
16 (a: number) => a % 2 === 1,
17 toAsync([1, 2])
18 );
19
20 // (4) Asynchronous iterable with an asynchronous helper function, matches filterAsync
21 const iter4: AsyncIterableIterator<number> = filter(
22 (a: number) => Promise.resolve(a % 2 === 1),
23 toAsync([1, 2])
24 );

Let’s see how the TypeScript type system handles these different
scenarios:

1. Synchronous filtering, matches filterSync


The filter function receives the synchronous iterable [1, 2] with a
synchronous predicate (a: number) => a % 2 === 1.
Because filter is given Iterable<number>, it matches filterSync,
and the result is correctly treated as IterableIterator<number>.

2. Function matching failure

The filter function receives the synchronous iterable [1, 2] with the
asynchronous predicate (a: number) => Promise.resolve(a % 2
=== 1).
For a synchronous iterable, the predicate must return a boolean, so
returning Promise<boolean> causes a type error. Because TypeScript
cannot find an appropriate overload, it throws Error TS2769. This is
the intended result.
Promise.resolve(true) and Promise.resolve(false) are both
objects (truthy values), so you cannot properly evaluate them without
extracting the result. Hence, if we want to use an asynchronous
predicate, we must match it up with filterAsync by explicitly passing
an AsyncIterable.
Here, we used Iterable vs. AsyncIterable at the library level as the
dividing line for whether to handle iteration synchronously or
asynchronously, because it can be determined at runtime by checking
the type of the iterable right away, before iteration. In contrast, at
runtime, you cannot know if a helper function is asynchronous or not
without actually running it, so you cannot choose a different iteration
strategy in advance.
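The runtime check described here can be expressed as a TypeScript type guard. The sketch below is a minimal assumption of how such an isIterable helper might look (the book's actual implementation may differ):

```typescript
// A type guard: returns true when the value implements the synchronous
// iteration protocol, narrowing the union to Iterable<A> for the caller.
function isIterable<A>(a: Iterable<A> | AsyncIterable<A>): a is Iterable<A> {
  return typeof (a as Iterable<A>)[Symbol.iterator] === "function";
}
```

Because the check inspects the value itself before any iteration begins, filter can choose filterSync or filterAsync up front, which is exactly what a helper function cannot offer.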

3. Asynchronous iterable filtering, matches filterAsync

The filter function receives the asynchronous iterable toAsync([1, 2])
and the synchronous predicate function (a: number) => a % 2 === 1.
Here, filter calls filterAsync to asynchronously filter the values.
The result is correctly typed as AsyncIterableIterator<number>.

4. Asynchronous iterable with an asynchronous helper function, matches filterAsync

The filter function receives toAsync([1, 2]) along with the
asynchronous predicate (a: number) => Promise.resolve(a % 2
=== 1).
Hence, filter calls filterAsync to process the filter asynchronously,
awaiting the Promise result. The result is correctly typed as
AsyncIterableIterator<number>.

This example shows how you can properly handle both synchronous
and asynchronous scenarios using the type system. In particular, thanks to
function overloads, the filter function chooses the correct approach based
on whether it receives a synchronous or asynchronous iterable. Also,
TypeScript raises compile-time errors for invalid combinations of
parameters, helping to ensure safe code. Such code patterns let you
precisely control asynchronous contexts at compile time, before any real
execution.
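The reason the overload in scenario (2) must be rejected can also be observed at runtime: a Promise is an object and therefore always truthy, so an un-awaited asynchronous predicate would keep every element. A small standalone check:

```typescript
// An asynchronous predicate returns a Promise, which is an object.
const asyncPred = (a: number) => Promise.resolve(a % 2 === 1);

// Array.prototype.filter sees only the truthiness of the returned Promise,
// and every Promise object is truthy -- even one that resolves to false.
const keptWithoutAwait = [1, 2, 3, 4].filter(a => Boolean(asyncPred(a)));
// keptWithoutAwait is [1, 2, 3, 4]: nothing was actually filtered out.
```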

[Code 4-34] Runtime of filter Supporting Both Sync and Async

1 const isOdd = (a: number) => a % 2 === 1;
2
3 async function test() {
4 // (1) filterSync -> mapSync
5 console.log([...
6 map(a => a * 10,
7 filter(isOdd,
8 naturals(4)))
9 ]);
10 // [10, 30]
11
12 // (2) toAsync -> filterAsync -> mapAsync
13 const iter2: AsyncIterableIterator<string> =
14 map(a => a.toFixed(2),
15 filter(a => delay(100, isOdd(a)),
16 toAsync(naturals(4))));
17
18 for await (const a of iter2) {
19 console.log(a);
20 }
21 console.log('end');
22 // After 100ms: 1.00
23 // Another 200ms later: 3.00
24 // Another 100ms later: end
25
26 // (3) filter -> toAsync -> mapAsync
27 console.log(
28 await fromAsync(
29 map(a => delay(100, a * 10),
30 toAsync(
31 filter(isOdd,
32 naturals(4)))))
33 );
34 // After 200ms: [10, 30]
35 }
36
37 await test();

1. filterSync -> mapSync

The filter function receives naturals(4) (a synchronous iterable)
and the predicate isOdd.
Here, filter operates as filterSync.
Next, map transforms each filtered value by multiplying by 10,
resulting in [10, 30] being printed.

2. toAsync -> filterAsync -> mapAsync

The filter function receives toAsync(naturals(4)) (an asynchronous
iterable) and the predicate (a => delay(100, isOdd(a))).
Here, filter operates as filterAsync, adding a 100 ms delay before
checking isOdd(a).
With 1 and 3 passing the filter, the map function formats each
remaining number via a.toFixed(2), printing "1.00" and "3.00".

3. filter -> toAsync -> mapAsync

We first keep only the odd numbers from naturals(4) using a synchronous
predicate, and then convert the result to an asynchronous iterable using
toAsync, so that map is chosen as mapAsync.
We apply delay(100, a * 10) to each element, and then use
fromAsync to unwrap the AsyncIterableIterator<number> into a
Promise<number[]> type, yielding [10, 30].

These runtime examples illustrate how IterableIterator,
AsyncIterator, Promise, and for await...of operate under their
respective protocols. If you understand exactly how toAsync, map, and
filter handle synchronous/asynchronous types at compile time and how
they perform asynchronous tasks at runtime, then you have gained a deeper
understanding of the language’s approach to asynchronous programming.
JavaScript and TypeScript provide AsyncIterableIterator and
IterableIterator as values and protocols, following the Iterator pattern
for lazy-evaluated traversable objects. Eventually, these lazy-evaluated
IterableIterator and AsyncIterableIterator form the foundation for
list processing. The language suggests that when doing asynchronous
programming, you treat code as values, reason about logic in terms of lists,
and manage tasks as lists.

Type System + Asynchronous Functional Functions + Class

By combining the type system with asynchronous functional functions
and classes, you can structure and manage asynchronous tasks more
cohesively and consistently. Languages like TypeScript provide a robust
type system that lets you define and implement the interfaces and protocols
of asynchronous tasks. This way, you can maintain each asynchronous
task’s unique characteristics and advantages while still interacting with
them through a common interface.

In this section, we will look at how to add class-based techniques to
these existing concepts and examine a practical example to confirm how
useful they can be for asynchronous programming.

FxIterable and FxAsyncIterable

We have previously seen how the toAsync function lets map decide
whether to run as mapSync or mapAsync. Converting an FxIterable into
an FxAsyncIterable via the toAsync method expresses the same idea; it is
another way of stating the same problem with a minor difference in how it
looks. Sometimes, this small difference can provide developers with greater
productivity or inspiration.

[Code 4-35] shows FxIterable and FxAsyncIterable implemented
using only the functions covered in Section 4.3. We used function overloads
extensively to keep things concise, and FxIterable implements
[Symbol.iterator]() {...}, while FxAsyncIterable implements
[Symbol.asyncIterator]() {...}. In the map or similar function
implementations, we pass this as an argument, resulting in a more concise
expression.

[Code 4-35] FxIterable and FxAsyncIterable

1 function fx<A>(iterable: Iterable<A>): FxIterable<A>;
2 function fx<A>(asyncIterable: AsyncIterable<A>): FxAsyncIterable<A>;
3 function fx<A>(
4 iterable: Iterable<A> | AsyncIterable<A>
5 ): FxIterable<A> | FxAsyncIterable<A> {
6 return isIterable(iterable)
7 ? new FxIterable(iterable)
8 : new FxAsyncIterable(iterable);
9 }
10
11 class FxIterable<A> implements Iterable<A> {
12 constructor(private iterable: Iterable<A>) {}
13
14 [Symbol.iterator]() {
15 return this.iterable[Symbol.iterator]();
16 }
17
18 map<B>(f: (a: A) => B): FxIterable<B> {
19 return fx(map(f, this));
20 }
21
22 filter(f: (a: A) => boolean): FxIterable<A> {
23 return fx(filter(f, this));
24 }
25
26 toArray(): A[] {
27 return [...this];
28 }
29
30 toAsync(): FxAsyncIterable<Awaited<A>> {
31 return fx(toAsync(this));
32 }
33 }
34
35 class FxAsyncIterable<A> implements AsyncIterable<A> {
36 constructor(private asyncIterable: AsyncIterable<A>) {}
37
38 [Symbol.asyncIterator]() {
39 return this.asyncIterable[Symbol.asyncIterator]();
40 }
41
42 map<B>(f: (a: A) => B): FxAsyncIterable<Awaited<B>> {
43 return fx(map(f, this));
44 }
45
46 filter(f: (a: A) => boolean | Promise<boolean>): FxAsyncIterable<A> {
47 return fx(filter(f, this));
48 }
49
50 toArray(): Promise<A[]> {
51 return fromAsync(this);
52 }
53 }

By using implements, TypeScript can check at compile time whether
all required parts of the interface are correctly implemented. This ensures
you have not left anything out or implemented anything incorrectly.

We can write it in an even more concise way that delegates type
inference to TypeScript, as shown in [Code 4-36].

[Code 4-36] Letting TypeScript Infer FxIterable and FxAsyncIterable

1 class FxIterable<A> {
2 constructor(private iterable: Iterable<A>) {}
3
4 [Symbol.iterator]() {
5 return this.iterable[Symbol.iterator]();
6 }
7
8 map<B>(f: (a: A) => B) {
9 return fx(map(f, this));
10 }
11
12 filter(f: (a: A) => boolean) {
13 return fx(filter(f, this));
14 }
15
16 toArray() {
17 return [...this];
18 }
19
20 toAsync() {
21 return fx(toAsync(this));
22 }
23 }
24
25 class FxAsyncIterable<A> {
26 constructor(private asyncIterable: AsyncIterable<A>) {}
27
28 [Symbol.asyncIterator]() {
29 return this.asyncIterable[Symbol.asyncIterator]();
30 }
31
32 map<B>(f: (a: A) => B) {
33 return fx(map(f, this));
34 }
35
36 filter(f: (a: A) => boolean | Promise<boolean>) {
37 return fx(filter(f, this));
38 }
39
40 toArray() {
41 return fromAsync(this);
42 }
43 }

Using these classes, we can rewrite the example from [Code 4-34].

[Code 4-37] Rewriting with FxIterable and FxAsyncIterable

1 async function test() {
2 // (1) filterSync -> mapSync
3 console.log(
4 fx(naturals(4))
5 .filter(isOdd)
6 .map(a => a * 10)
7 .toArray()
8 );
9 // [10, 30]
10
11 // (2) toAsync -> filterAsync -> mapAsync
12 const iter2 = fx(naturals(4))
13 .toAsync()
14 .filter(a => delay(100, isOdd(a)))
15 .map(a => a.toFixed(2));
16
17 for await (const a of iter2) {
18 console.log(a);
19 }
20 console.log('end');
21 // After 100ms: 1.00
22 // Another 200ms later: 3.00
23 // Another 100ms later: end
24
25 // (3) filter -> toAsync -> mapAsync
26 console.log(
27 await fx(naturals(4))
28 .filter(isOdd)
29 .toAsync()
30 .map(a => delay(100, a * 10))
31 .toArray()
32 );
33 // After 200ms: [10, 30]
34 }
35
36 await test();

1. filterSync -> mapSync

We create an FxIterable instance with fx(naturals(4)), apply
synchronous filtering and mapping, then convert it to an array to get
[10, 30].

2. toAsync -> filterAsync -> mapAsync

We create an FxIterable with fx(naturals(4)), call toAsync to
convert it into an FxAsyncIterable, then filter and map
asynchronously to eventually print "1.00" and "3.00".

3. filter -> toAsync -> mapAsync

We filter synchronously first, then convert it to an FxAsyncIterable
with toAsync, apply a mapping asynchronously, and finally convert it
into an array that prints [10, 30].

This example shows how we can effectively manage synchronous and
asynchronous tasks via classes that leverage TypeScript’s type system and a
functional programming approach. The toAsync method seamlessly
transitions the method chain of FxIterable into FxAsyncIterable,
allowing us to mix synchronous and asynchronous tasks naturally.

Using the Type System to Validate Asynchronous Logic

In the “function matching failure” example of [Code 4-33], we saw
that type errors can be raised at compile time when asynchronous logic isn’t
handled properly. Similarly, in our current class-based approach, we
can rely on the type system to validate our asynchronous logic in advance.
Below is an example of a type error that occurs when we don’t use toAsync.

[Code 4-38] Validate Asynchronous Logic at Compile Time

1 async function test() {
2 // Attempting asynchronous filtering and mapping without toAsync
3 const iter2 = fx(naturals(4))
4 .filter(a => delay(100, isOdd(a))) // Type error (TS2322)
5 .map(a => a.toFixed());
6
7 // TS2322: Type Promise<boolean> is not assignable to type boolean
8 }
9
10 await test();

Because toAsync was not used here, TypeScript throws a type error
when we try to call filter with an asynchronous function on an
FxIterable.

ECMAScript distinctly separates Iterable and AsyncIterable,
designing them for different use cases, and TypeScript leverages that
structure to provide robust type checks. This language design and
functionality let us architect more reliable and efficient asynchronous
programming patterns. By using the strength of the type system, we can
check in advance whether the asynchronous logic is correct and reduce
runtime errors significantly.

reduce That Supports Both Sync and Async

Whereas map and filter both return iterables, map converting
Iterable<A> to Iterable<B> (or AsyncIterable<A> to AsyncIterable<B>)
and filter returning Iterable<A> or AsyncIterable<A> with only certain
elements remaining, reduce produces a final result. Programs do not
usually end in an array state, because eventually you might render HTML
strings from the array, or sum up the array’s elements. Even if you store
data as an array in a database, in cross-system communication you
eventually convert it into a serialized string. If map and filter keep the
iterables in memory while transforming values, reduce disassembles the
iterable as it goes through each element, generating an accumulated result
and eliminating the iterable and its contents from memory. In other words,
reduce merges the contents of the iterable to produce a final result in
memory, discarding the iterable in the process.

To summarize, reduce takes an Iterable<A> and iterates through it to
accumulate a result Acc, or takes an AsyncIterable<A> and returns a
Promise<Acc>. Below is an example of the function signature and
implementation of reduce.

[Code 4-39] reduce

1 function reduce<A, Acc>(
2 f: (acc: Acc, a: A) => Acc, acc: Acc, iterable: Iterable<A>
3 ): Acc;
4 function reduce<A, Acc>(
5 f: (acc: Acc, a: A) => Acc | Promise<Acc>, acc: Acc, asyncIterable: AsyncIterable<A>
6 ): Promise<Acc>;
7 function reduce<A, Acc>(
8 f: any, acc: Acc, iterable: Iterable<A> | AsyncIterable<A>
9 ): Acc | Promise<Acc> {
10 return isIterable(iterable)
11 ? reduceSync(f, acc, iterable)
12 : reduceAsync(f, acc, iterable);
13 }
14
15 function reduceSync<A, Acc>(
16 f: (acc: Acc, a: A) => Acc, acc: Acc, iterable: Iterable<A>
17 ): Acc {
18 for (const a of iterable) {
19 acc = f(acc, a);
20 }
21 return acc;
22 }
23
24 async function reduceAsync<A, Acc>(
25 f: (acc: Acc, a: A) => Acc | Promise<Acc>, acc: Acc, asyncIterable: AsyncIterable<A>
26 ): Promise<Acc> {
27 for await (const a of asyncIterable) {
28 acc = await f(acc, a);
29 }
30 return acc;
31 }

[Code 4-39] shows a reduce function that handles both synchronous
and asynchronous iterables. It iterates over Iterable<A> or
AsyncIterable<A> and returns an accumulated value, using function
overloads to handle both synchronous and asynchronous iterables:

1. Signatures of reduce

Overloads separate synchronous and asynchronous iterable types. If
the function receives Iterable<A>, it returns Acc. If it receives
AsyncIterable<A>, it returns Promise<Acc>.

2. reduce Implementation
Inside reduce, it uses isIterable(iterable) to distinguish between
synchronous or asynchronous iterables. If synchronous, it calls
reduceSync, otherwise it calls reduceAsync.
reduceSync handles synchronous iterables with a for...of loop,
applying the accumulator function f to each element and updating acc.
After processing all elements, it returns the final accumulator.
reduceAsync handles asynchronous iterables with a for await...of
loop. Because f may involve asynchronous operations, it awaits the
result of f(acc, a) to update acc. It returns a Promise<Acc> with the
final accumulator.

The reduce function processes each element of the Iterable or
AsyncIterable one by one, synchronously or asynchronously. Even in
asynchronous contexts, it processes elements one at a time through the
iterator, allowing a declarative style of logic. As you build up the final
Promise<Acc>, you evaluate elements in a lazy manner. Good lazy
evaluation depends on having the right kind of iterator and on properly
consuming it in the final step.

[Code 4-40] Applying reduce to FxIterable/FxAsyncIterable

1 class FxIterable<A> {
2 // ... omitted methods ...
3 reduce<Acc>(f: (acc: Acc, a: A) => Acc, acc: Acc) {
4 return reduce(f, acc, this);
5 }
6 }
7
8 class FxAsyncIterable<A> {
9 // ... omitted methods ...
10 reduce<Acc>(f: (acc: Acc, a: A) => Acc | Promise<Acc>, acc: Acc) {
11 return reduce(f, acc, this);
12 }
13 }
14
15 const result: number =
16 fx(naturals(4))
17 .filter(isOdd)
18 .map(a => a * 10)
19 .reduce((acc, a) => acc + a, 0);
20
21 const resultPromise: Promise<number> =
22 fx(naturals(4))
23 .filter(isOdd)
24 .map(a => delay(100, a * 10))
25 .toAsync()
26 .reduce((acc, a) => acc + a, 0);
27
28 console.log(
29 result,
30 await resultPromise
31 );
32 // 40 40

[Code 4-40] integrates the reduce function into the FxIterable and
FxAsyncIterable classes, demonstrating how to accumulate values for both
synchronous and asynchronous iterables. This allows us to chain methods
like filter and map, concluding with reduce.

By adding reduce to FxIterable and FxAsyncIterable, we have
established a consistent interface and chaining pattern for both synchronous
and asynchronous iterables. This pattern supports a declarative coding style,
allowing flexible combinations of synchronous and asynchronous
operations, and significantly improves code readability and maintainability.

Moreover, type-based asynchronous programming goes beyond merely
providing type safety; it also creates opportunities to apply more powerful
structures and patterns to the design and implementation of asynchronous
logic.

4.4 Asynchronous Error Handling

Effectively handling errors in asynchronous programming is essential.
Due to the nature of asynchronous logic, when an error occurs, it may not
be clear where the code is currently being executed, and inadequate error
handling can lead to performance issues, side effects, and difficulties in
debugging. In particular, tasks that involve interactions with external
systems, such as network requests, file I/O, or database operations, have a
higher likelihood of errors, making efficient error handling a critical need.

In this chapter, we will look at the fundamental principles of
asynchronous error handling and use concrete code examples to examine
potential pitfalls in poorly implemented solutions, as well as ways to
improve them.

Loading images and getting their heights

For example, suppose we have multiple image URLs, and we want to
calculate the height of each image and then sum them up. [Code 4-41] is a
function that loads an image for a given URL asynchronously.

[Code 4-41] loadImage

1 function loadImage(url: string): Promise<HTMLImageElement> {
2 return new Promise((resolve, reject) => {
3 const image = new Image();
4 image.src = url;
5 image.onload = function() {
6 resolve(image);
7 }
8 image.onerror = function() {
9 reject(new Error(`load error : ${url}`));
10 }
11 });
12 }

The loadImage function takes a URL and returns a Promise. If the
image is successfully loaded, resolve is called; if an error occurs while
loading, reject is called. This function allows you to handle the result of
the image loading process as a Promise.

[Code 4-42] is a function that takes a list of image URLs (urls) and
calculates the total height of all images.

[Code 4-42] calcTotalHeight

1 async function calcTotalHeight(urls: string[]) {
2 try {
3 const totalHeight = await urls
4 .map(async (url) => {
5 const img = await loadImage(url);
6 return img.height;
7 })
8 .reduce(
9 async (a, b) => await a + await b,
10 Promise.resolve(0)
11 );
12 return totalHeight;
13 } catch (e) {
14 console.error('error: ', e);
15 }
16 }
17
18 console.log(await calcTotalHeight(urls));
19 // 585
20 console.log(await calcTotalHeight(urls2)); // Error: load error..
21 // undefined

The calcTotalHeight function asynchronously loads an image for
each URL in urls, calculates its height, and returns the sum. If an error
occurs, it is handled in the try/catch block, which logs the error.

Even though [Code 4-42] might seem to work fine, it actually has the
following problems:

1. Unnecessary load: Even if an error occurs, the download attempt for
all remaining URLs continues. For example, if the second URL fails, it
will still keep requesting the third, fourth, and so on.
2. Side effects: Although this example is just a GET request, if you were
using the approach of [Code 4-42] to control POST requests or
database INSERT operations, you could incur unnecessary requests and
additional side effects.

Although [Code 4-42] uses Array methods in a synchronous manner
while carrying out asynchronous operations, it is not really designed with a
deep understanding of Promise and asynchronous scenarios. Because the
fundamental flow of asynchronous logic and error handling is insufficiently
considered, there is a risk of unnecessary requests or inefficient flow, and
errors can be obscured.
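The eager behavior described above can be demonstrated in isolation. In this hypothetical sketch (fakeLoad and startedTasks are illustrative stand-ins, not from the book), Array.prototype.map starts every asynchronous task immediately, even though the second one fails:

```typescript
let startedTasks = 0;

// A stand-in for loadImage: task 2 fails, the rest succeed.
const fakeLoad = (n: number): Promise<number> => {
  startedTasks++; // incremented the moment map() invokes the callback
  return n === 2
    ? Promise.reject(new Error(`load error : ${n}`))
    : Promise.resolve(n);
};

async function demo(): Promise<number> {
  try {
    // map() synchronously starts every task before any result settles.
    await Promise.all([1, 2, 3, 4].map(fakeLoad));
  } catch {
    // The failure of task 2 did not stop tasks 3 and 4 from starting.
  }
  return startedTasks; // 4, not 2
}
```

With a lazy AsyncIterable pipeline, by contrast, each task begins only when the iterator pulls it, so the failure at task 2 prevents tasks 3 and 4 from ever starting.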

Improved Asynchronous Logic

Let us now improve the asynchronous logic so that if an error occurs,
the requests stop immediately, preventing further load.

[Code 4-43] calcTotalHeight2

1 async function calcTotalHeight2(urls: string[]) {
2 try {
3 const totalHeight = await fx(urls)
4 .toAsync()
5 .map(loadImage)
6 .map(img => img.height)
7 .reduce((a, b) => a + b, 0);
8 return totalHeight;
9 } catch (e) {
10 console.error('error: ', e);
11 }
12 }
13
14 console.log(await calcTotalHeight2(urls));
15 // 585
16 console.log(await calcTotalHeight2(urls2)); // Error: load error..
17 // Only tried loadImage twice: undefined

In [Code 4-43], if an error occurs at the first URL, all remaining URL
requests stop immediately and the error is handled right away. This code
controls Promise and AsyncIterator safely, works exactly as intended, and
is more readable.

In asynchronous programming, having a precise understanding of
Promise and of the nature of synchronous–asynchronous operations is
crucial for writing efficient and robust code. Especially in modern
application environments, where asynchronous tasks are increasingly
complex, it is essential to clearly design the flow of code and error
handling. By taking advantage of the powerful asynchronous protocol in
JavaScript, such as AsyncIterator, you can combine the declarative style
of functional programming with lazy evaluation to write code that is more
flexible and maintainable. This approach goes beyond merely making the
code run: by providing clear and consistent logic, it contributes to better
developer productivity and user experience.

Ensuring that errors occur properly is key

In asynchronous programming, the most important thing is not simply
to handle errors, but to design your code so that errors can manifest
properly. Making sure errors occur in the right situations is a core principle
that increases the reliability and maintainability of your code.

First, as in [Code 4-44], if you design your code so that the
responsibility for throwing errors does not lie inside the function, but rather
can be handled by the caller, you can maintain purity in your code while
enabling better error handling.

[Code 4-44] getTotalHeight

1 const getTotalHeight = (urls: string[]) =>
2 fx(toAsync(urls))
3 .map(loadImage)
4 .map(img => img.height)
5 .reduce((a, b) => a + b, 0);

In [Code 4-44], we take image URLs and calculate their total height,
but do not handle errors internally. Instead, the caller detects and handles
any errors. We have deliberately omitted the error-handling code. But why
is this approach better?

[Code 4-45] Handling errors at the usage site

1 try {
2 const height = await getTotalHeight(urls);
3 // ...
4 } catch (e) {
5 console.error(e);
6 }
7
8 // or
9 async function myFunction(urls: string[]) {
10 try {
11 return await getTotalHeight(urls);
12 } catch {
13 return 0;
14 }
15 }
16
17 console.log(await myFunction(urls));
18 console.log(await myFunction(urls2));

This approach makes it easier to write pure functions and manage side
effects. It is most effective to write error-handling code close to where the
error might arise. For example, network requests or file I/O—areas that
produce side effects—are good candidates for focusing error handling.

Moreover, if the caller handles errors, each caller can handle them in
whichever way suits its needs. Some may simply log the errors, while
others may display a message to the user, add automatic recovery logic, or
return a default value such as 0. This flexibility considerably enhances code
reusability and maintainability.
Additionally, this approach does not hide errors. Hiding errors makes it
difficult to pinpoint the cause and may lead to unexpected behavior. On the
other hand, explicitly generating errors and delegating them to the caller
allows for early detection and appropriate responses.

Ultimately, by not handling errors internally but rather delegating them
to the caller, we clarify the responsibilities in our code and support various
error-handling strategies for different scenarios. This practice provides more
than just cleaner code.

Designing so that errors occur properly

1. Accurately understand and use Promise, async/await, and try/catch

When performing asynchronous tasks, use Promise and async/await
in a way that makes errors clearly surface. By leveraging try/catch,
you can explicitly deliver errors to the caller.

2. Do not hide errors; make them visible


Attempting to handle errors unnecessarily or writing overly
complicated error-handling code can actually increase the likelihood of
obscuring errors. Rather than hiding them, allow them to occur and
handle or monitor them at a higher level—possibly with logging tools.

3. Have pure functions throw errors


Since pure functions do not have side effects, a preferable design is for
them to throw errors and delegate handling to an upper-level caller.
Attempting to handle errors inside a pure function can blur the
function’s purpose.

4. Use generators/iterators/iterables for declarative programming

By using generators and iterables, you can increase the expressiveness
of your code while designing error handling in asynchronous tasks
more intuitively and clearly. For example, asynchronous iterables let
you control the timing of errors and declaratively represent how errors
propagate.
5. Place error-handling code near side-effect code

Handling errors next to code that causes side effects (such as network
requests, file I/O, and database queries) allows you to clearly identify
the cause of errors and how to fix them. Debugging and
maintainability suffer if you try to handle errors in areas unrelated to
side effects.

6. Use error-logging services such as Sentry.io


To supplement the limitations of error handling, you can adopt an
error-logging service like Sentry.io to monitor all occurrences of errors
in real time. This ensures that no errors go unnoticed, even in
production, and that you can respond quickly to problems. Of course,
for these tools to effectively manage errors, the code must be designed
so as not to hide them.
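Principles 3 and 5 can be sketched with a small, self-contained example (parsePort and portOrDefault are hypothetical names, not functions from this book):

```typescript
// Principle 3: a pure function throws instead of hiding the error.
function parsePort(input: string): number {
  const port = Number(input);
  if (!Number.isInteger(port) || port < 1 || port > 65535) {
    throw new Error(`invalid port: ${input}`);
  }
  return port;
}

// Principle 5: the caller, sitting next to the side effect, picks the strategy.
function portOrDefault(input: string, fallback: number): number {
  try {
    return parsePort(input);
  } catch {
    return fallback; // another caller might log, rethrow, or alert the user
  }
}
```

Here parsePort stays pure and honest about failure, while each caller chooses its own recovery: one returns a fallback, another might surface the error to the user.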

Stable Software and Asynchronous Programming

We have now gained a deeper understanding of asynchronous
programming. The techniques introduced in Chapter 4 build upon the
concepts covered in Chapters 1, 2, and 3. We have learned to handle
asynchronous tasks by treating them as values, implement various logic to
manage asynchronous scenarios with lazy evaluation in a readable manner,
and use the type system to verify asynchronous scenarios at compile time.
These techniques not only handle asynchronous tasks but also improve the
readability and maintainability of your code and help you implement
complex asynchronous logic in a concise and accurate way.

The ultimate goals of asynchronous programming are to optimize
system resources, enhance user experience, and create stable and efficient
software. Now, we have powerful tools for achieving those goals.
Moreover, we have revisited modern programming languages from the
perspective of multiparadigm languages, developing a solid foundation.
From Chapter 5 onward, we will continue to expand our thinking on
multiparadigm programming through more practical code examples.

4.5 Summary

The Importance of Asynchronous Programming

Asynchronous programming is an essential technique in modern
software development. By running operations in a non-blocking manner,
you can improve an application’s performance and responsiveness—
particularly when it comes to time-consuming I/O or network requests.

Viewing Asynchrony as a Value

In asynchronous programming, a Promise can be more than just the
eventual outcome of an operation; you can also treat it as a value. This lets
you compose and control multiple asynchronous tasks. Functions like
Promise.all or Promise.race help manage complex async flows by
aggregating results or settling on the first resolved or rejected promise.
Adopting such a paradigm makes it easier to reason about and orchestrate
asynchronous operations.

Handling Asynchrony Through Laziness

Laziness plays a central role in efficiently handling async tasks. By
combining lazy evaluation with list-processing patterns (like map and
filter), you can systematically process async operations. For instance,
deferring execution until you truly need a value—rather than spawning all
tasks immediately—helps you build custom concurrency and throttling
schemes, such as an executeWithLimit function. Using lazy iterators (e.g.,
AsyncIterableIterator) also allows you to sequence and control async
operations incrementally.
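The executeWithLimit function mentioned above is not shown in this chapter; one minimal way such a limiter could look — assuming tasks are passed as lazy () => Promise thunks so that nothing starts until the limiter pulls it — is:

```typescript
// Runs at most `limit` tasks concurrently, in chunks. Because the tasks
// are thunks, a failing chunk prevents later chunks from ever starting.
async function executeWithLimit<T>(
  tasks: Array<() => Promise<T>>,
  limit: number
): Promise<T[]> {
  const results: T[] = [];
  for (let i = 0; i < tasks.length; i += limit) {
    // Start only the current window of tasks, then await them all.
    const chunk = tasks.slice(i, i + limit).map(task => task());
    results.push(...await Promise.all(chunk));
  }
  return results;
}
```

This sketch processes tasks in fixed chunks; a sliding-window variant that refills the pool as each task settles is another common design.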

Using Types to Manage Asynchrony

Type systems in languages like TypeScript provide powerful ways to
build asynchronous functional code. By defining interfaces and contracts
for async tasks, you can enhance safety and maintain readability. Classes
like FxIterable and FxAsyncIterable expand on the iterator pattern so
that both synchronous and asynchronous tasks can be managed consistently
under a common API. This makes mixing synchronous and asynchronous
workflows straightforward while retaining uniform management through a
shared interface.

Asynchronous Error Handling

A critical principle in async error handling is to design errors so they
emerge clearly, rather than attempting to hide them. In pure functions, you
typically let errors be thrown and allow the caller to deal with them. For
side effects, handle them at the closest point where the I/O or external
interaction happens, ensuring clarity around the cause and resolution of
problems. Avoid suppressing errors; it’s better to let them surface and
delegate to the caller’s context for appropriate handling. Proper use of
Promise, async/await, try/catch, and dedicated logging tools (like
Sentry) all ensure that runtime issues don’t remain hidden and can be
reacted to in a maintainable, flexible manner.

By integrating these ideas—values-based asynchrony, laziness,
type-driven design, and robust error handling—you gain a powerful set of
tools for building responsive, stable, and maintainable software. In the
chapters ahead, we’ll explore more real-world code examples that illustrate
how these techniques and paradigms can work together in practice.
1. “List processing” refers to a language, paradigm, or set of
techniques designed to work with list-based data. In these paradigms,
both data and code can be expressed in generalized list forms, and
code is treated as data. Typical functions include map, filter, and
reduce.↩︎

2. Design Patterns: Elements of Reusable Object-Oriented Software
is the classic reference on OOP design patterns, written by the Gang of
Four—Erich Gamma, Richard Helm, Ralph Johnson, and John
Vlissides—and published in 1994.↩︎

3. We’ll continue to explore lazy evaluation with more examples in
later sections.↩︎

4. Lisp is a classic functional language developed by John McCarthy
in the late 1950s. With lists as its core data structure, it naturally
supports recursion and metaprogramming. Descendants like Scheme,
Common Lisp, and Clojure have significantly influenced modern
language design.↩︎

5. Technically, developers could design their own iteration protocol based on the iterator pattern in JavaScript before ES6 and TypeScript
1.6. However, with ES6 and TS 1.6 and onward, a standardized
iteration protocol emerged, enabling more consistent and powerful list
processing and feature extensions. Chapters 1 and 2 will delve deeper
into these concepts.↩︎

6. The built-in AsyncIterator interface in TypeScript is actually a bit more complicated, including optional return?() or throw?()
methods, etc. However, for clarity and focus on the core concepts of
this book, we have simplified it here. You can refer to
lib.es2018.asynciterable.d.ts for the official Iterator interface
definitions provided by TypeScript.↩︎
5. Practical Functional Programming
Functional programming isn’t just a theoretical concept or an abstract
idea. In fact, it often proves far more practical when dealing with real-world
data and problems, offering substantial, tangible value to both developers
and the services they build.

List processing is one of the core tools of functional programming. It allows you to filter and transform data and then produce the desired
outcome with concise, clear logic. In this chapter, we’ll explore how code
written in a functional style—once used only for handling numeric
sequences—can address real-world challenges, and how it can be broadly
applied to the varied problems we encounter in our daily work.

We’ll also look at how to apply functional programming to actual backend programming in order to solve practical issues. In particular, we’ll
delve into handling data processing, asynchronous operations, concurrency,
and system constraints using a functional approach. We’ll examine how
backend programming based on list processing can achieve higher
reliability and scalability in modern backend environments, and precisely
what practical help it offers.

Now, through real-world examples of functional programming, we’ll see how the concepts and ideas we’ve covered can effectively resolve real
problems.

5.1 Working With Real World Data


Until now, aside from a few examples, this book has focused primarily
on sequences. While this is effective for learning the basic concepts and
usage of functional programming, you may have found it hard to visualize
how this applies to real-world data processing. In this chapter, we will
address such questions by handling data structures frequently encountered
in practice, thereby enhancing your proficiency in applying functional
programming techniques.

Working With Numbers in a 2D Array

First, let’s revisit sequences once more. This will serve as a refresher
on the fundamental concepts of functional programming while helping you
get comfortable with the data-processing flow. Below is code that sums all
the squares of odd numbers in a simple two-dimensional array.

[Code 5-1] Summing all squares of odd numbers

1 const numbers = [
2 [1, 2],
3 [3, 4, 5],
4 [6, 7, 8],
5 [9, 10]
6 ];
7
8 const oddSquareSum = numbers
9 .flat() // Flatten the 2D array into a 1D array
10 .filter(a => a % 2 === 1) // Keep only the odd numbers
11 .map(a => a * a) // Square them
12 .reduce((a, b) => a + b, 0); // Sum them
13
14 console.log(oddSquareSum); // 165

In [Code 5-1], we use flat to flatten the nested array, filter to extract odd numbers, map to square them, and reduce to sum them all up.
The result is 165.

Handling Basketball Team Data

Next, let’s look at data that might be used in a real-world setting. We organize the data as players and teams, with each team containing multiple
players. Such data is commonly used in basketball game statistics or sports-
related applications to display player performance or records.

How should we compute the total score of players who scored 30 points or more in data like [Code 5-2]?
[Code 5-2] Total score of players who scored at least 30 points

1 type Player = {
2 name: string;
3 score: number;
4 };
5
6 type Team = {
7 name: string;
8 players: Player[];
9 };
10
11 const teams: Team[] = [
12 {
13 name: 'Bears', players: [
14 { name: 'Luka', score: 32 },
15 { name: 'Anthony', score: 28 },
16 { name: 'Kevin', score: 15 },
17 { name: 'Jaylen', score: 14 },
18 ]
19 },
20 {
21 name: 'Lions', players: [
22 { name: 'Stephen', score: 37 },
23 { name: 'Zach', score: 20 },
24 { name: 'Nikola', score: 19 },
25 { name: 'Austin', score: 22 },
26 ]
27 },
28 {
29 name: 'Wolves', players: [
30 { name: 'Jayson', score: 32 },
31 { name: 'Klay', score: 37 },
32 { name: 'Andrew', score: 15 },
33 { name: 'Patrick', score: 14 },
34 ]
35 },
36 {
37 name: 'Tigers', players: [
38 { name: 'DeMar', score: 37 },
39 { name: 'Marcus', score: 21 },
40 { name: 'Al', score: 19 },
41 { name: 'Dennis', score: 22 },
42 ]
43 },
44 ];
45
46 const totalHighScorers = teams
47 .map(team => team.players) // Convert the team objects into a 2D array of players
48 .flat() // Flatten into a 1D array
49 .filter(player => player.score >= 30) // Filter only players who scored 30 or more
50 .map(player => player.score) // Extract the score
51 .reduce((a, b) => a + b, 0); // Sum
52
53 console.log(totalHighScorers); // 175

Comparing this to earlier sequence-based examples, you can see that the structure of the code is nearly the same. Rather than writing in an
imperative style, we used functional methods such as map, filter, flat,
and reduce to implement the logic concisely. In functional programming,
nested data structures of objects and arrays—like team.players or
player.score—can be easily transformed and processed via simple helper
functions. This shows that we can handle complex data structures just as we
do for two-dimensional arrays of numbers, following the same pattern. It is
a good example of leveraging polymorphism effectively in functional
programming.

We can replace the code that uses both map and flat with flatMap.
flatMap is a method that effectively performs a map followed by a flat all
at once.

[Code 5-3] flatMap is map followed by flat

1 const totalHighScorers2 = teams
2 .flatMap(team => team.players) // Flatten all teams' player arrays
3 .filter(player => player.score >= 30) // Filter only players scoring 30 or more
4 .map(player => player.score) // Extract the score
5 .reduce((a, b) => a + b, 0); // Sum
6
7 console.log(totalHighScorers2); // 175

Now compare the two-dimensional array example to the basketball team example.

[Code 5-4] Comparing 2D array processing and basketball team data processing

1 const oddSquareSum = numbers
2 .flat() // Flatten the 2D array into a 1D array
3 .filter(a => a % 2 === 1) // Keep only the odd numbers
4 .map(a => a * a) // Square them
5 .reduce((a, b) => a + b, 0); // Sum
6
7 const totalHighScorers = teams
8 .flatMap(team => team.players) // Flatten all teams' player arrays
9 .filter(player => player.score >= 30) // Filter only players scoring 30 or more
10 .map(player => player.score) // Extract the score
11 .reduce((a, b) => a + b, 0); // Sum

Although these two snippets process different data, they are structurally identical. This demonstrates how functional programming
provides a general pattern for handling data, so you can easily apply it to all
kinds of data forms. Whether you’re dealing with simple numeric sequences
or arrays of objects, you can handle them using the same logic, thus
simplifying your code and enhancing maintainability.
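
Since the two pipelines share the same flatten, filter, transform, and sum shape, that shape can itself be extracted into a helper. Below is a minimal sketch; the helper name sumBy and the trimmed sample data are our own illustration, not code from the book:

```typescript
// Hypothetical helper capturing the shared filter -> map -> sum shape.
const sumBy = <T>(
  items: T[],
  predicate: (x: T) => boolean,
  toNumber: (x: T) => number,
): number =>
  items.filter(predicate).map(toNumber).reduce((a, b) => a + b, 0);

// Same helper applied to a 2D array of numbers...
const numbers = [[1, 2], [3, 4, 5], [6, 7, 8], [9, 10]];
const oddSquareSum = sumBy(numbers.flat(), a => a % 2 === 1, a => a * a);
console.log(oddSquareSum); // 165

// ...and to an array of team objects (sample data trimmed for brevity).
type Player = { name: string; score: number };
type Team = { name: string; players: Player[] };
const teams: Team[] = [
  { name: 'Bears', players: [{ name: 'Luka', score: 32 }, { name: 'Stephen', score: 37 }] },
  { name: 'Lions', players: [{ name: 'Zach', score: 20 }, { name: 'Klay', score: 37 }] },
];
const totalHighScorers = sumBy(
  teams.flatMap(t => t.players),
  p => p.score >= 30,
  p => p.score,
);
console.log(totalHighScorers); // 106
```

Any data source that can be flattened into an array of elements can now reuse the same summing logic.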

Handling Commerce Data

In shopping malls or commerce services, working with cart data is common. For example, you often need to calculate the total quantity or total
price of selected products. In this section, we will explore how to write
concise, readable code using functional programming while dealing with
cart data.

[Code 5-5] Calculating total quantity and price of items in a shopping cart

1 type Product = {
2 name: string;
3 price: number; // In USD
4 quantity: number;
5 selected: boolean;
6 };
7
8 const products: Product[] = [
9 {
10 name: 'T-Shirt',
11 price: 10,
12 quantity: 1,
13 selected: true,
14 },
15 {
16 name: 'Shirt',
17 price: 30,
18 quantity: 2,
19 selected: false,
20 },
21 {
22 name: 'Pants',
23 price: 15,
24 quantity: 2,
25 selected: true,
26 }
27 ];
28
29 const sumSelectedQuantities = (products: Product[]) =>
30 products.reduce((total, prd) => {
31 if (prd.selected) { // Sum the quantity of selected items
32 return total + prd.quantity;
33 } else {
34 return total;
35 }
36 }, 0);
37
38 const calcSelectedPrices = (products: Product[]) =>
39 products.reduce((total, prd) => {
40 if (prd.selected) { // Sum the price (USD) of selected items
41 return total + prd.price * prd.quantity;
42 } else {
43 return total;
44 }
45 }, 0);
46
47 console.log(sumSelectedQuantities(products)); // Total quantity of selected items: 3
48 console.log(calcSelectedPrices(products)); // Total price (USD) of selected items: 40

[Code 5-5] uses reduce to compute the total quantity and price of
selected items. Compared to the approach of a loop (for + i++) with an
accumulator variable, using reduce is relatively concise. However, the code
still feels somewhat cumbersome and harder to read because all the logic is
packed into a single reduce, including if-else statements. We can simplify
it as follows:

[Code 5-6] Splitting one complex reduce into map, filter, and reduce

1 const sumSelectedQuantities = (products: Product[]) =>
2 products
3 .filter(prd => prd.selected) // Filter only selected items
4 .map(prd => prd.quantity) // Extract just the quantity
5 .reduce((a, b) => a + b, 0); // Sum
6
7 const calcSelectedPrices = (products: Product[]) =>
8 products
9 .filter(prd => prd.selected) // Filter only selected items
10 .map(prd => prd.price * prd.quantity) // Calculate total price (USD)
11 .reduce((a, b) => a + b, 0); // Sum
12
13 console.log(sumSelectedQuantities(products)); // Total quantity of selected items: 3
14 console.log(calcSelectedPrices(products)); // Total price (USD) of selected items: 40
When you separate responsibilities using filter, map, and reduce,
code readability improves dramatically. It becomes clear what happens in
each step, making it easy to understand. Moreover, because the logic is
broken down by step, modifying or adding to a specific part is simpler,
which in turn makes the code more maintainable. If necessary, you can
extract the filter or map steps into separate functions that can be reused in
other data-processing tasks. Finally, by easily reviewing the data at each
stage, debugging becomes simpler, reducing the time required to find and
fix bugs.
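
As mentioned above, the filter and map steps can be pulled out into small named functions and reused across calculations. A brief sketch, with hypothetical helper names (selectedOnly, sum) of our own choosing:

```typescript
type Product = {
  name: string;
  price: number;    // In USD
  quantity: number;
  selected: boolean;
};

// Hypothetical reusable steps, extracted from the pipelines above.
const selectedOnly = (products: Product[]) => products.filter(prd => prd.selected);
const sum = (nums: number[]) => nums.reduce((a, b) => a + b, 0);

const sumSelectedQuantities = (products: Product[]) =>
  sum(selectedOnly(products).map(prd => prd.quantity));

const calcSelectedPrices = (products: Product[]) =>
  sum(selectedOnly(products).map(prd => prd.price * prd.quantity));

const products: Product[] = [
  { name: 'T-Shirt', price: 10, quantity: 1, selected: true },
  { name: 'Shirt', price: 30, quantity: 2, selected: false },
  { name: 'Pants', price: 15, quantity: 2, selected: true },
];

console.log(sumSelectedQuantities(products)); // 3
console.log(calcSelectedPrices(products));    // 40
```

Each named step can now be tested on its own, and a change to the selection rule happens in exactly one place.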

Thus, by simplifying one complex reduce and dividing its responsibilities, you can write more intuitive, straightforward code that is
better structured for maintenance and expansion.

Handling Commerce Data 2

In real-world commerce scenarios, cart data rarely consists of just product name, price, and quantity. For instance, apparel items may have
size or color options, and each option can have its own price or quantity. In
this section, we will examine more complex cart data that includes options,
demonstrating how to handle them neatly and clearly using functional
programming.

[Code 5-7] Cart data structure with added options

1 type Option = {
2 name: string;
3 price: number; // In USD
4 quantity: number;
5 };
6
7 type Product = {
8 name: string;
9 price: number; // Base price (USD)
10 selected: boolean;
11 options: Option[];
12 };
13
14 const products: Product[] = [
15 {
16 name: 'T-Shirt',
17 price: 10,
18 selected: true,
19 options: [
20 { name: 'L', price: 0, quantity: 3 },
21 { name: 'XL', price: 2, quantity: 2 },
22 { name: '2XL', price: 5, quantity: 2 },
23 ]
24 },
25 {
26 name: 'Shirt',
27 price: 30,
28 selected: false,
29 options: [
30 { name: 'L', price: 0, quantity: 2 },
31 { name: 'XL', price: 2, quantity: 5 },
32 { name: '2XL', price: 5, quantity: 4 },
33 ]
34 },
35 {
36 name: 'Pants',
37 price: 15,
38 selected: true,
39 options: [
40 { name: 'XL', price: 2, quantity: 3 },
41 { name: '2XL', price: 5, quantity: 5 },
42 ]
43 }
44 ];

In this data structure, we add an array of options to each product. Each option contains name, price, and quantity. We must calculate the final
price by adding the option’s price to the product’s base price.

Though the data structure is more complex, we can still handle it with
the same functional programming approach. [Code 5-8] shows an example
of calculating the total quantity and total price of selected products.

[Code 5-8] Calculating total quantity and price, including option data

1 const sumSelectedQuantities2 = (products: Product[]) =>
2 products
3 .filter(prd => prd.selected) // Filter only selected products
4 .map(prd => prd.options) // Convert each product to its array of options
5 .flat() // Flatten the array of options into 1D
6 .map(opt => opt.quantity) // Extract the quantity of each option
7 .reduce((a, b) => a + b, 0); // Compute the total
8
9 const calcSelectedPrices2 = (products: Product[]) =>
10 products
11 .filter(prd => prd.selected) // Filter only selected products
12 .map(prd => prd.options.map(
13 opt => (prd.price + opt.price) * opt.quantity
14 )) // Compute final price (USD) per option
15 .flat() // Flatten the array of all options’ prices
16 .reduce((a, b) => a + b, 0); // Compute the total
17
18 console.log(sumSelectedQuantities2(products)); // 15
19 console.log(calcSelectedPrices2(products)); // 235

For example, consider a quick breakdown:

T-Shirt (base 10)
L option (add 0, 3 units) -> (10 + 0) * 3 = 30
XL option (add 2, 2 units) -> (10 + 2) * 2 = 24
2XL option (add 5, 2 units) -> (10 + 5) * 2 = 30
=> Total 84
Shirt (base 30) -> Not selected, hence excluded
Pants (base 15)
XL option (add 2, 3 units) -> (15 + 2) * 3 = 51
2XL option (add 5, 5 units) -> (15 + 5) * 5 = 100
=> Total 151

T-Shirt (84) + Pants (151) = 235 (USD)

Thus, even with a more complex data structure, you can still write
concise code. Adding option data makes the structure more complicated,
but the combination of filter, map, flat, and reduce allows you to handle
it in a straightforward, consistent manner. This highlights the advantages of
functional programming: consistency and reusability.

We also preserve readability and maintainability. The sequential flow of filtering only selected products, converting each product’s options via
map, then flattening them is both easy to read and easy to follow. Because
the code is broken down into clear steps, modifying or adding features is
simpler. The code for calculating the final price per option is also separated,
making it easy to test or modify the individual calculation logic
independently.

You can also switch it to use flatMap as follows:

[Code 5-9] Changing to flatMap

1 const sumSelectedQuantities2 = (products: Product[]) =>
2 products
3 .filter(prd => prd.selected)
4 .flatMap(prd => prd.options) // Convert each product’s option array into 1D
5 .map(opt => opt.quantity)
6 .reduce((a, b) => a + b, 0);

As a result, this code is structurally similar to the code that handled numeric sequences or basketball team data in [Code 5-4]. This shows that
functional programming provides a solution that is consistent and flexible
for dealing with various data structures.

In [Code 5-10], we organize the two functions—calcSelectedPrices and calcSelectedPrices2, which process different data structures—into a
more uniform structure to improve consistency.

[Code 5-10] Refactoring code into a more uniform structure

1 const calcProductPrice = (prd: Product) => prd.price * prd.quantity;
2
3 const calcSelectedPrices = (products: Product[]) =>
4 products
5 .filter(prd => prd.selected)
6 .map(calcProductPrice)
7 .reduce((a, b) => a + b, 0);
8
9 const calcProductOptionPrices = (prd: Product) =>
10 prd.options.map(opt => (prd.price + opt.price) * opt.quantity);
11
12 const calcSelectedPrices2 = (products: Product[]) =>
13 products
14 .filter(prd => prd.selected)
15 .flatMap(calcProductOptionPrices)
16 .reduce((a, b) => a + b, 0);

We extracted the logic for computing each product’s price into calcProductOptionPrices and gave it a clear name that reveals its
intention. By replacing a somewhat complex expression with a named
function, the code becomes more readable and explicit. Also, isolating the
price-calculation logic into a function increases maintainability, because
you only need to modify that function if a new condition or calculation
approach is required. This also increases reusability, allowing you to reuse
the same calculation logic elsewhere.

The change from calcSelectedPrices to calcSelectedPrices2 demonstrates the flexibility of list processing, where you can flatten nested
data (flat) and simplify the approach when data structures become more
complex or functionality expands.

If you want to calculate both the total price of all items in the cart and
the total price of only the selected items, you can rework the code into a
concise form as follows:

[Code 5-11] Calculating total price and selected price

1 const calcProductOptionPrices = (prd: Product) =>
2 prd.options.map(opt => (prd.price + opt.price) * opt.quantity);
3
4 const calcTotalPrice = (products: Product[]) =>
5 products
6 .flatMap(calcProductOptionPrices)
7 .reduce((a, b) => a + b, 0);
8
9 const calcSelectedPrices2 = (products: Product[]) => calcTotalPrice(
10 products.filter(prd => prd.selected)
11 );
12
13 console.log(calcTotalPrice(products)); // Total price (USD) of all items
14 console.log(calcSelectedPrices2(products)); // Total price (USD) of selected items

For instance, summing the options of every product regardless of selection:

Shirt (base 30, plus options) = 360
T-Shirt (base 10, plus options) = 84
Pants (base 15, plus options) = 151

calcTotalPrice sums every product, so it returns 595 (USD) for the full cart, while calcSelectedPrices2 returns 235 (USD), since the Shirt is not selected.

Here, calcSelectedPrices2 filters out just the selected products and then calls calcTotalPrice, effectively demonstrating the composability
and reusability of list processing.

What would [Code 5-11] look like if we wrote it using imperative code?

[Code 5-12] Viewing it as imperative code

1 function calcTotalPrice(products: Product[]): number {
2 let totalPrice = 0;
3
4 for (let i = 0; i < products.length; i++) {
5 const product = products[i];
6 for (let j = 0; j < product.options.length; j++) {
7 const option = product.options[j];
8 totalPrice += (product.price + option.price) * option.quantity;
9 }
10 }
11
12 return totalPrice;
13 }
14
15 function calcSelectedPrices2(products: Product[]): number {
16 const selectedProducts: Product[] = [];
17 for (let i = 0; i < products.length; i++) {
18 if (products[i].selected) {
19 selectedProducts.push(products[i]);
20 }
21 }
22
23 // Reuse calcTotalPrice
24 return calcTotalPrice(selectedProducts);
25 }

[Code 5-12] illustrates several drawbacks of imperative code. First, readability suffers. Logic consisting of loops and conditionals makes it hard
to grasp each step’s purpose at a glance. For example, in
calcSelectedPrices2, nested loops and conditionals obscure the central
intent.

Second, maintainability is lacking. Because the logic is connected in one flow, modifying or expanding a specific part requires reading and
understanding the entire code. In particular, introducing new conditions or
computational methods necessitates a thorough review of the code.

Third, because the code is composed of statements, composability and readability are diminished, and unnecessary code duplication can occur.
Repetitive iteration, declaring temporary variables, and modifying values
happen in one long flow, dispersing the overall intent and making it difficult
to avoid duplication. Furthermore, because each operation is not clearly
separated, it’s hard to execute or test one part independently. To verify a
single step, you have to follow the entire function flow, perhaps by inserting
logs or temporarily modifying the code, which is inconvenient. Such issues
reduce both maintainability and testing efficiency.

Solving Problems With a Consistent Approach

In this chapter, we looked at how to process a variety of data structures using functional programming. Starting from a simple two-dimensional
array of numbers, we moved on to basketball team data and shopping-cart
data, confirming how powerful and consistent the functional methods like
map, filter, and reduce can be.

Consistency: The same patterns can be applied to diverse data structures, making the code predictable and easy to read.
Readability: Each step’s role is clearly revealed. By separating the
logic into distinct parts, we greatly improve readability.
Reusability: If you write filter, map, and reduce steps
independently, you can reuse them in other data-processing tasks.
Maintainability: Modifying or extending the code step by step is
straightforward.
Ease of Debugging: You can easily check intermediate data at each
stage, allowing you to quickly identify and fix errors.

Functional programming or list processing doesn’t just change your coding style—it provides tools for designing complex logic in simpler and
clearer ways, thereby improving both code quality and productivity.

5.2 Applying to More Problems


In this chapter, we will look at several functions you should know to
apply functional programming to a broader range of problems. To explain
these effectively, we will leverage FxTS, an excellent functional
programming library for TypeScript.
FxTS provides a powerful set of list-processing functions, type
inference, and robust support for asynchronous/parallel/concurrent
programming based on the Iterable/AsyncIterable protocol, making it
highly useful for real-world work environments. It maintains the essence of
JavaScript and TypeScript, was designed in a way faithful to the language’s
conventions, and harmonizes naturally with the language’s features. It also
remains flexible enough to keep pace with future language developments.

This open-source project was led by Hyunwoo Jo, whose solid fundamentals and deep insight into programming languages in general have
led to a high-level functional library. With extensive experience at Naver,
Naver Pay, and now serving as a tech lead at Marpple Corporation,
Hyunwoo Jo is a valued colleague and excellent developer whom I respect
and have learned a lot from. I am glad to introduce FxTS in this book.

We will now explore some key functions that help apply functional
programming to a variety of problems and look at concrete examples of
how to use them in practice.

pipe

pipe is a function that processes a value by consecutively applying multiple functions. It is mainly used to compose two or more functions,
improving the expressive power of your code.

The pipe function that composes functions

The pipe function takes the initial value as its first argument and the
functions to apply from the second argument onward. pipe treats the first
argument as the initial accumulator, applies the remaining functions in
order, and derives the final result.

[Code 5-13] Consecutively applying functions using pipe

1 import { pipe } from "@fxts/core";
2
3 const result = pipe(
4 10,
5 a => a + 4, // a = 10
6 a => a / 2, // a = 14
7 );
8
9 console.log(result); // 7

In [Code 5-13], a function that adds 4 to a is followed by a function that divides the result by 2, yielding 7.

You could say that the logic of pipe is essentially a reduce. pipe
utilizes reduce to treat multiple functions as an iterable, passing the result
of each function to the next function in sequence to produce the final
output. This allows you to handle functions as values and express data flow
from left to right or top to bottom in a clear manner.
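
To make that concrete, here is a minimal pipe written with reduce. This is only a sketch of the idea; FxTS's actual pipe additionally handles Promises and ships detailed type overloads:

```typescript
// Minimal sketch of pipe: reduce over the function list,
// feeding each function the previous result.
const pipe = (initial: any, ...fns: Array<(a: any) => any>): any =>
  fns.reduce((acc, fn) => fn(acc), initial);

const result = pipe(
  10,
  (a: number) => a + 4, // 10 -> 14
  (a: number) => a / 2, // 14 -> 7
);

console.log(result); // 7
```

The functions are simply the elements being reduced over, which is why treating functions as values makes this composition so natural.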

Using it together with currying

When combined with curried functions, pipe gains expressive power that leverages the benefits of currying, while still providing strong type
inference.

[Code 5-14]

1 // An add function implemented to evaluate one argument at a time
2 const add = (a: number) => (b: number) => a + b;
3
4 const result = add(10)(5);
5 console.log(result); // 15
6
7 const add10 = add(10); // A function with a fixed 'a' value of 10
8 console.log(add10(5)); // 15
9
10 const result2: number = pipe(
11 5,
12 add(10),
13 add(5),
14 );
15 console.log(result2); // 20

1. The add function is curried, so it applies the first argument a, then returns a function that takes the second argument b to compute the
final result.
2. add(10) returns a function with the a value fixed at 10.
3. When you pass a b value to the returned function, it calculates a + b.
4. By using pipe, you can chain these curried functions in a natural,
readable style.
5. With pipe, applying add(10) and then add(5) in sequence adds 10 to
5 and then adds 5 more to the result.

Because FxTS’s list-processing functions support currying, they also provide type inference through the pipe function.

[Code 5-15] pipe with map, filter, reduce

1 import { pipe, map, filter, forEach } from "@fxts/core";
2
3 pipe(
4 ['1', '2', '3', '4', '5'],
5 map(a => parseInt(a)), // [a: string]
6 filter(a => a % 2 === 1), // [a: number]
7 forEach(console.log),
8 );
9 // 1
10 // 3
11 // 5

This code shows how pipe transforms a string array to integers (map), keeps only the odd numbers (filter), and then prints each value (forEach). The
final output is 1, 3, 5. Data flow is natural from top to bottom via pipe,
and type inference provides convenience during coding.

Compared to chaining, pipe can produce more flexible code structures. Chaining is typically extended via a class’s methods, but pipe
freely combines both library-provided and user-defined logic, effectively
extending the reach of functional programming.
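
To illustrate that flexibility, the curried, data-last style can even be hand-rolled for plain arrays. The helpers below are simplified stand-ins for FxTS's curried functions, not its real implementations:

```typescript
// Minimal pipe sketch (not FxTS's implementation).
const pipe = (initial: any, ...fns: Array<(a: any) => any>): any =>
  fns.reduce((acc, fn) => fn(acc), initial);

// Hand-rolled curried, data-last map/filter over arrays:
// applying the first argument returns a function awaiting the data.
const map = <A, B>(f: (a: A) => B) => (xs: A[]): B[] => xs.map(f);
const filter = <A>(f: (a: A) => boolean) => (xs: A[]): A[] => xs.filter(f);

const odds: number[] = pipe(
  ['1', '2', '3', '4', '5'],
  map((a: string) => parseInt(a)),
  filter((a: number) => a % 2 === 1),
);

console.log(odds); // [1, 3, 5]
```

Because each curried call returns a plain one-argument function, any such function, library-provided or user-defined, slots into pipe the same way.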

[Code 5-16] Using console.log with pipe

1 import { pipe, map, filter, reduce } from "@fxts/core";
2
3 pipe(
4 ['1', '2', '3', '4', '5'],
5 map(a => parseInt(a)), // [a: string]
6 filter(a => a % 2 === 1), // [a: number]
7 reduce((a, b) => a + b),
8 console.log,
9 );
10 // 9

[Code 5-16] converts a string array to numbers (map), keeps only the odd numbers (filter), sums them (reduce), and prints the final result
(console.log). Although console.log is not a function provided by the
library that implements pipe, it integrates naturally here, showing that pipe
can flexibly incorporate general external functions.

By using pipe, you can express the execution order of functions more
visibly and clearly. For instance, you can rewrite a function like
calcSelectedPrices2 from [Code 5-11] to use pipe, achieving concise and
intuitive code:

[Code 5-17] Using pipe for calcSelectedPrices2

1 // Original code
2 const calcSelectedPrices2 = (products: Product[]) => calcTotalPrice(
3 products.filter(prd => prd.selected)
4 );
5
6 // Composition with pipe
7 const calcSelectedPrices3 = (products: Product[]) => pipe(
8 products.filter(prd => prd.selected),
9 calcTotalPrice,
10 );

Here, it becomes more evident that after filtering the selected products,
calcTotalPrice is called. The function’s order of execution matches the
reading order, making the data flow and the purpose of each step intuitive.
By using pipe, you improve readability and make it easier to modify or
combine each step independently.

Composing Asynchronous Functions With pipe

pipe also allows you to seamlessly combine synchronous and asynchronous functions, expressing your workflows in a declarative manner.
[Code 5-18] Composing asynchronous functions

1 import { pipe, delay } from "@fxts/core";
2
3 const result = await pipe(
4 Promise.resolve(5), // Initial async value
5 a => a + 10, // Synchronous function: a is 5, inferred as number
6 async a => { // Asynchronous function: a is 15, inferred as number
7 await delay(1000); // Wait 1 second
8 return a * 2;
9 },
10 a => a - 5, // Synchronous function: a is 30
11 );
12
13 console.log(result); // Output: 25

Here, we declaratively combine synchronous and asynchronous functions into a workflow using pipe. Starting from an async value (5), we
add 10, wait 1 second while multiplying by 2, and then subtract 5 to get the
final result.

pipe automatically handles the results (Promises) of asynchronous functions and passes them as arguments to the next function, also providing
accurate type inference at each stage. This increases clarity and readability,
allowing you to seamlessly mix synchronous and asynchronous operations.
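
The promise-unwrapping behavior can be sketched by awaiting each intermediate value before invoking the next function. This is an illustration of the mechanism only, not FxTS's actual implementation:

```typescript
// Minimal async-aware pipe: awaits each step's result
// before calling the next function, so every function sees
// a plain (unwrapped) value.
const pipeAsync = async (
  initial: unknown,
  ...fns: Array<(a: any) => any>
): Promise<any> => {
  let acc: any = await initial;
  for (const fn of fns) {
    acc = await fn(acc); // works for both sync and async steps
  }
  return acc;
};

pipeAsync(
  Promise.resolve(5),
  (a: number) => a + 10,      // 5 -> 15
  async (a: number) => a * 2, // 15 -> 30 (async step)
  (a: number) => a - 5,       // 30 -> 25
).then(result => console.log(result)); // 25
```

Since `await` on a non-Promise value simply passes it through, the same loop covers synchronous and asynchronous functions uniformly.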

pipe also combines naturally with functions that handle AsyncIterable objects, as we saw in Chapter 4.

[Code 5-19] Combining with asynchronous list processing

1 import { pipe, toAsync, map, filter, toArray, fx } from "@fxts/core";
2
3 const arr = [1, 2, 3, 4, 5];
4
5 // pipeline
6 const result2 = await pipe(
7 arr,
8 toAsync,
9 map(a => Promise.resolve(a + 10)), // Even if it returns Promise<number>,
10 filter(a => a % 2 === 0), // filter sees a as unwrapped 11, 12, 13 ...
11 toArray,
12 arr => arr.reverse(),
13 );
14
15 console.log(result2); // [result2: number[]]
16 // [14, 12]
17
18 // chaining
19 const result3 = await fx(arr)
20 .toAsync()
21 .map(a => Promise.resolve(a + 10))
22 .filter(a => a % 2 === 0)
23 .toArray()
24 .then(arr => arr.reverse());
25
26 console.log(result3); // [result3: number[]]
27 // [14, 12]

Hence, pipe can compose synchronous functions, asynchronous functions, list-processing functions based on synchronous/asynchronous
iterables, curried functions, and general external functions that are not part
of the library. Finally, since the result returns as a Promise, you can
evaluate it with await and integrate it seamlessly with JavaScript’s error-
handling mechanisms. This broadens the scope of functional programming
and aids in declaratively expressing complex workflows.

zip

Sometimes in programming, you need an increasing index like i++. Below is code where we create an object by using the index of the keys and
values arrays. It uses i++ to connect keys[i] and values[i] in order.

[Code 5-20] Creating an object from keys and values

1 const keys = ['name', 'job', 'location'];
2 const values = ['Marty', 'Programmer', 'New York'];
3
4 const obj = {};
5 for (let i = 0; i < keys.length; i++) {
6 obj[keys[i]] = values[i];
7 }
8
9 console.log(obj);
10 // { name: 'Marty', job: 'Programmer', location: 'New York' }

As shown, tasks that require the combination of two arrays often need
to map each index one by one. If you pre-build keys and values into an
entries form, the problem becomes much simpler.

[Code 5-21] Creating an object from entries

1 const entries = [
2 ['name', 'Marty'],
3 ['job', 'Programmer'],
4 ['location', 'New York']
5 ];
6
7 const obj = {};
8 for (const [key, value] of entries) {
9 obj[key] = value;
10 }
11 console.log(obj);
12 // { name: 'Marty', job: 'Programmer', location: 'New York' }
13
14 // Or
15 const obj2 = Object.fromEntries(entries);
16 console.log(obj2);
17 // { name: 'Marty', job: 'Programmer', location: 'New York' }

In [Code 5-21], we start with an array that pairs each key with a value.
This makes it more concise to create an object by looping through each
[key, value] pair with a for...of statement or use Object.fromEntries
as a helper function to do the same work.

This approach lets you handle the data structure clearly and write code
with better readability, without directly using something like i++.

The zip function is a tool for pairing up arrays like keys and values
into an entries-shaped iterable. By using it, you can combine elements of
the two arrays without directly dealing with their indices.

[Code 5-22] zip(keys, values)

1 import { zip } from "@fxts/core";
2
3 const keys = ['name', 'job', 'location'];
4 const values = ['Marty', 'Programmer', 'New York'];
5
6 const iterator = zip(keys, values);
7 console.log(iterator.next()); // { done: false, value: [ 'name', 'Marty' ] }
8 console.log(iterator.next()); // { done: false, value: [ 'job', 'Programmer' ] }
9 console.log(iterator.next()); // { done: false, value: [ 'location', 'New York' ] }
10 console.log(iterator.next()); // { done: true, value: undefined }
11
12 const obj = Object.fromEntries(zip(keys, values));
13 console.log(obj);
14 // { name: 'Marty', job: 'Programmer', location: 'New York' }

zip combines two arrays into an iterable of [key, value] pairs. The
iterable returns these pairs one by one via the next method until it’s done
(done: true). By combining keys and values this way, you can easily turn
the result into an object using Object.fromEntries, with no manual loops
or index management, making the code simpler and more intuitive.

[Code 5-23] Using pipe together

pipe(
  zip(keys, values),
  Object.fromEntries,
  console.log,
);
// { name: 'Marty', job: 'Programmer', location: 'New York' }

pipe(
  values,
  zip(keys), // Currying
  Object.fromEntries,
  console.log,
);
// { name: 'Marty', job: 'Programmer', location: 'New York' }

zip pairs the two arrays, and then Object.fromEntries transforms the
result into an object, all neatly structured via pipe.

In functional style, you generally prefer a declarative approach that
expresses data flow and transformation, rather than imperative code such as
loops or incremented indexes. The zip function is a useful tool for solving
these problems through list processing.

When You Need the Index as a Value

Most of the time you can tackle situations needing an increasing index
with zip in a declarative manner. But what if you genuinely need a value
that increments like i++?

[Code 5-24] Zipping with a numeric array

const strings = ['a', 'b', 'c', 'd'];
const iter = zip([0, 1, 2, 3], strings);
console.log(iter.next()); // { done: false, value: [0, 'a'] }
console.log(iter.next()); // { done: false, value: [1, 'b'] }
console.log(iter.next()); // { done: false, value: [2, 'c'] }
console.log(iter.next()); // { done: false, value: [3, 'd'] }
console.log(iter.next()); // { done: true, value: undefined }

[Code 5-24] creates a new iterable by using zip on [0, 1, 2, 3] and
['a', 'b', 'c', 'd']. Calling the iterable’s next() method returns pairs
like [0, 'a'] and [1, 'b'], taking the element at the same index from each
array. Once all elements are consumed, done: true is returned.

Here we explicitly defined the numeric array [0, 1, 2, 3], but you
might want to dynamically generate numbers that match the length of a
variable-length strings array. In that case, you can use the range function
for a declarative approach.

[Code 5-25] Generating numbers dynamically using range

import { range, zip } from "@fxts/core";

const iter = range(3);
console.log(iter.next()); // { done: false, value: 0 }
console.log(iter.next()); // { done: false, value: 1 }
console.log(iter.next()); // { done: false, value: 2 }
console.log(iter.next()); // { done: true, value: undefined }

const strings = ['a', 'b', 'c', 'd', 'e'];
const iter2 = zip(range(Infinity), strings);
for (const a of iter2) {
  console.log(a);
}
// [0, 'a']
// [1, 'b']
// [2, 'c']
// [3, 'd']
// [4, 'e']

range generates an iterator of numbers of the specified length. For
instance, range(3) yields 0, 1, and 2, while range(Infinity) creates an
infinite iterator.
zip(range(Infinity), strings) pairs the infinite number iterator with
the strings array, continuing only until strings is exhausted. This yields
index-value pairs, and the rest of the infinite sequence is simply never
consumed.

Thus, combining zip and range lets you generate and map indexes
regardless of the array’s length, all in a declarative way without
imperative loops, greatly improving readability.
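As a small illustration of this idea, an enumerate-style helper (a hypothetical name, not part of @fxts/core) can wrap any iterable with its index. The counter still exists, but it is encapsulated inside the generator, so calling code never touches i++ directly:

```typescript
// Hypothetical enumerate helper: pair each element with its index.
// The counter lives inside the generator, not in the consuming code.
function* enumerate<T>(xs: Iterable<T>): Generator<[number, T]> {
  let i = 0;
  for (const x of xs) yield [i++, x];
}

const pairs = [...enumerate(['a', 'b', 'c'])];
console.log(pairs); // [ [ 0, 'a' ], [ 1, 'b' ], [ 2, 'c' ] ]
```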

Collatz Conjecture: Counting Until It Becomes 1

Sometimes the program’s result is the incremented index itself. That
means you genuinely need logic for calculating indexes as you repeat work.
This commonly appears in tasks that need to track the iteration count. For
example, you might have a counting problem.

A counting problem involves performing a repeated process according
to some rule or condition and returning how many times this operation took
place by the time it ends. Solving this in a functional style demands a
declarative approach for tracking and calculating indexes instead of
managing them imperatively.

Let’s look at a coding-test-style problem that uses the zip and range
list-processing tools.

What is the Collatz Conjecture?

Proposed in 1937 by German mathematician Lothar Collatz, the
Collatz Conjecture states that all positive integers eventually reach 1 if you
repeatedly apply the following rules. The conjecture’s simple rules and
intriguing iterative structure make it a common example in algorithm
design, analysis, and programming education.

[Rules]

1. If the given number is even, divide it by 2.
2. If the number is odd, multiply it by 3 and then add 1.
3. Repeat this.

[Example Runs]

1. Input: 6

6 is even → 6 ÷ 2 = 3
3 is odd → 3 × 3 + 1 = 10
10 is even → 10 ÷ 2 = 5
5 is odd → 5 × 3 + 1 = 16
16 → 8 → 4 → 2 → 1
Result: eventually reaches 1

2. Input: 11

11 → 34 → 17 → 52 → 26 → 13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1
Result: eventually reaches 1

Solving the Collatz Conjecture With List Processing

We want to solve a problem where we start from some given number,
apply the Collatz rules, and return how many times the operation is
performed.

[Code 5-26] collatzCount

import { range, pipe, zip, find, head } from "@fxts/core";

// A reusable function that creates an iterator of numbers starting from 'start'
const count = (start = 1) => range(start, Infinity);

// A reusable function that takes a function 'f' and an accumulator 'acc',
// and repeatedly applies 'f' infinitely
function* repeatApply<A>(f: (acc: A) => A, acc: A) {
  while (true) yield acc = f(acc);
}

const nextCollatzValue = (num: number) =>
  num % 2 === 0 // If even
    ? num / 2
    : num * 3 + 1;

const collatzCount = (num: number) => pipe(
  zip(
    count(),                            // (1)
    repeatApply(nextCollatzValue, num), // (2)
  ),                                    // (3)
  find(([cnt, val]) => val === 1),      // (4)
  collatz => collatz!,                  // (5)
  head,                                 // (6)
);

console.log(collatzCount(1)); // 3
console.log(collatzCount(4)); // 2
console.log(collatzCount(5)); // 5

According to the comments, [Code 5-26] works as follows:

1. Generate a count

count() uses range(1, Infinity) to create an iterator that increments
from 1 to infinity, tracking how many times the Collatz operation has been
performed.

2. Repeat the Collatz operation

repeatApply(nextCollatzValue, num) repeatedly applies the Collatz
rule (divide by 2 if even, multiply by 3 and add 1 if odd) infinitely,
yielding each intermediate result in sequence.

3. Map counts to results

zip(count(), repeatApply(nextCollatzValue, num)) pairs each
step’s “round (cnt)” with the “Collatz result,” forming [cnt, val].

4. Find the moment when the result is 1

find(([, val]) => val === 1) returns the first pair at the moment the
Collatz process reaches 1.

5. Use collatz! for assertion

Assuming the Collatz Conjecture is true, we expect eventually to reach 1.
We use the non-null assertion (!) to declare that find won’t return
undefined.

6. Extract the count

Since collatz is [cnt, val], we use head to return cnt.

Because zip is curried, you could rewrite this pipeline as follows:

[Code 5-27] Using zip in curried style

const collatzCount = (num: number) => pipe(
  repeatApply(nextCollatzValue, num), // Repeatedly apply Collatz operation infinitely
  zip(count()),                       // zip(count(), collatzVals) to add counting
  find(([, val]) => val === 1),       // consume until we get val === 1
  collatz => collatz!,                // '!' assertion
  head,                               // from [cnt, val], return cnt
);

[Code 5-27] is set up very much like the problem definition. You
repeatedly apply the Collatz rule (nextCollatzValue via repeatApply),
keep track of the number of times using zip(count()), stop at the point
where the value is first 1 (val === 1 in find), and return the count from
[cnt, val].

This code expresses almost exactly the statement, “Perform the Collatz
operation until reaching 1, and return how many times it was done.” From a
performance standpoint, it is also efficient. This efficiency and clarity come
from using lazy evaluation with an iterator, consuming only as many
elements as needed through find. It’s a good example of how a declarative
approach in list processing can be both straightforward and efficient.

Verifying

How extensively do we need to verify or test collatzCount? Because
each component of [Code 5-27]—repeatApply, nextCollatzValue, count,
zip, find, the function ([, val]) => val === 1, collatz => collatz!,
and head—is individually testable, the likelihood of failure in their
composition is quite low. One of the greatest advantages of functional
programming and list processing is that each function does one simple job,
making implementation and testing easier.

If we have already verified that each component works correctly, the
chance of collatzCount failing is very small, since collatzCount is
simply an expression combining these functions with no additional internal
state changes or conditionals (if-else). Therefore, if these functions are
correct and they are combined in a logical sequence, collatzCount can be
quickly trusted to do its job. Such confidence in code behavior greatly
boosts development productivity.
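For example, the two pieces that carry the actual logic, nextCollatzValue and repeatApply, can each be checked in isolation. The sketch below redefines them so it is self-contained and uses plain throw-based checks rather than any particular test framework:

```typescript
// Redefined here so the verification sketch is self-contained.
const nextCollatzValue = (num: number) =>
  num % 2 === 0 ? num / 2 : num * 3 + 1;

function* repeatApply<A>(f: (acc: A) => A, acc: A) {
  while (true) yield acc = f(acc);
}

// nextCollatzValue implements the two rules.
if (nextCollatzValue(6) !== 3) throw new Error('even rule failed');
if (nextCollatzValue(3) !== 10) throw new Error('odd rule failed');

// repeatApply yields successive applications: 6 -> 3 -> 10 -> 5.
const steps = repeatApply(nextCollatzValue, 6);
if (steps.next().value !== 3) throw new Error('step 1 failed');
if (steps.next().value !== 10) throw new Error('step 2 failed');
if (steps.next().value !== 5) throw new Error('step 3 failed');

console.log('all checks passed');
```

Once each unit is trusted, testing collatzCount itself reduces to spot-checking a few inputs, since the composition adds no hidden state.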

In conclusion, this example concisely solves the Collatz conjecture
problem and highlights the declarative, expressive approach that functional
programming offers. It also underscores how one can effectively handle
complex problems using the stability and reliability of functional
programming.

take, takeUntilInclusive, takeWhile Instead of break

In imperative code, break is used in loops to skip unnecessary
repetitions and reduce time complexity for efficiency. Functional
programming has similar functions—like take, find, some, every, and
head—that serve the same purpose.

In particular, find, some, and every follow a logic similar to what
take represents. They are designed to stop consuming the iterator once a
certain condition is met, playing a crucial role in reducing time
complexity. If these functions weren’t implemented this way, the worst-case
time complexity would be O(n). By using take(n) appropriately, if the
condition is satisfied early, we can reduce it to O(k), where k is the
number of elements consumed until the condition is satisfied. On average,
this avoids consuming the entire iterator, much like using break in an
imperative loop.

In short, take is a function that limits the number of elements
consumed from a lazy iterator, thereby reducing time complexity. One might
say it uses a numeric argument to cap the work performed.

But what if we want to stop based on a condition rather than a
number? For those scenarios, takeWhile and takeUntilInclusive exist,
designed to stop iterator consumption according to a certain condition.

takeWhile and takeUntilInclusive

Both takeWhile and takeUntilInclusive control iterator consumption
based on a condition. They differ, however, in how many elements they
consume and how they handle the boundary.

[Code 5-28] takeWhile, takeUntilInclusive execution results

import { fx } from "@fxts/core";

fx([1, 2, 3, 0, 0, 0, 5, 6])
  .takeWhile(a => {
    console.log('takeWhile:', a, a >= 1);
    return a >= 1;
  })
  .forEach(a => console.log('forEach:', a));

// Output
// takeWhile: 1 true
// forEach: 1
// takeWhile: 2 true
// forEach: 2
// takeWhile: 3 true
// forEach: 3
// takeWhile: 0 false

fx([0, 10, 1, 3, 5, 0, 4, 2])
  .takeUntilInclusive(a => {
    console.log('takeUntilInclusive:', a, a === 5);
    return a === 5;
  })
  .forEach(a => console.log('forEach:', a));

// Output
// takeUntilInclusive: 0 false
// forEach: 0
// takeUntilInclusive: 10 false
// forEach: 10
// takeUntilInclusive: 1 false
// forEach: 1
// takeUntilInclusive: 3 false
// forEach: 3
// takeUntilInclusive: 5 true
// forEach: 5

The difference between takeWhile and takeUntilInclusive

Viewed from the perspective of how many elements are consumed:

takeWhile: stops consuming after evaluating the first element (0) for
which the condition is false.
Number of elements consumed: 4 (1, 2, 3, 0)
Result: [1, 2, 3]
takeUntilInclusive: stops after consuming up to and including the
first element (5) for which the condition is true.
Number of elements consumed: 5 (0, 10, 1, 3, 5)
Result: [0, 10, 1, 3, 5]

Rephrased from another angle:

takeWhile returns elements while the condition is true. Once an
element returns false for the first time, it stops evaluating and halts
consumption. However, in order to evaluate that element, it must
consume the item for which the condition is false.
takeUntilInclusive stops as soon as the condition becomes true,
returning that element as well, and not consuming the rest.

Why is this distinction important?

1. takeWhile

If the condition is complex or costly to evaluate, you might end up
consuming that first false element before stopping, possibly incurring
extra consumption. In that scenario, takeUntilInclusive might be
more efficient.
However, takeWhile is useful for cases where you only want elements
up to (but not including) where the condition fails.
In short, takeWhile is best when you need strict control over which
portion of the data meets the condition. For example, it’s useful if data
is valid only while a condition is met, and you want to discard
everything after that condition fails.

2. takeUntilInclusive

If the condition is expensive to compute or if the cost of consuming the
iterator is high, you can stop as soon as the condition is true, which
might be more efficient.
It’s suitable for situations where you want to cease operations
immediately upon satisfying a certain condition, or when you need to
reduce further consumption.
In short, takeUntilInclusive targets scenarios where you focus on
the moment the condition is met and are concerned with stopping
consumption for efficiency. For example, you might want to terminate
the process as soon as you encounter a certain value.

3. Summary

takeWhile focuses on guaranteeing completeness of the resulting data,
extracting only the portion that meets the condition.
takeUntilInclusive focuses on efficiency, stopping consumption
at the moment a condition is met.

Far from being just utility functions for taking a few elements, take,
takeWhile, and takeUntilInclusive are ways of structuring the
consumption of an iterator. They optimize time complexity and offer
powerful flexibility in list processing. When your program’s tasks revolve
around consuming lazy iterators, these functions effectively control how
long, under what conditions, and how extensively the tasks run.
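The two boundary behaviors described above can be captured in simplified standalone sketches (illustrative generators, not the @fxts/core implementations, which are chainable via fx):

```typescript
// takeWhile: yield while the predicate holds; the first failing element
// is consumed (it must be evaluated) but not yielded.
function* takeWhile<T>(f: (x: T) => boolean, xs: Iterable<T>): Generator<T> {
  for (const x of xs) {
    if (!f(x)) return;
    yield x;
  }
}

// takeUntilInclusive: yield up to AND including the first element that
// satisfies the predicate, then stop consuming entirely.
function* takeUntilInclusive<T>(f: (x: T) => boolean, xs: Iterable<T>): Generator<T> {
  for (const x of xs) {
    yield x;
    if (f(x)) return;
  }
}

console.log([...takeWhile(a => a >= 1, [1, 2, 3, 0, 0, 5])]);
// [ 1, 2, 3 ]
console.log([...takeUntilInclusive(a => a === 5, [0, 10, 1, 3, 5, 0])]);
// [ 0, 10, 1, 3, 5 ]
```

Reading the two bodies side by side makes the boundary rule explicit: the only structural difference is whether yield comes before or after the predicate check.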

Logic Created by Combining Functions

In this chapter, we explored pipe, zip, range, take, takeWhile,
takeUntilInclusive, and various supporting functions. This exploration
shows that problems seemingly solvable only in an imperative style can
also be approached declaratively and flexibly via list processing. These
functions serve as powerful tools for code readability and extensibility,
opening the door to solving complex problems more succinctly.

In the next chapter, we will look at how to extend and apply the
concepts introduced here to real-world problems, such as developing a
small-scale scheduler or similar backend service, thereby exploring how to
further leverage these ideas in practice.

5.3 Backend Asynchronous Programming


In this chapter, we will look at an example of solving problems
frequently encountered in backend programming by using a functional style
and list processing. We will write a program similar to a scheduler task that
runs repeatedly at a specific time.

In a backend environment, asynchronous situations occur frequently,
and efficiently leveraging resources or shortening task times via parallelism
is crucial. As a practical example of solving such problems, we will build a
payment-history synchronization scheduler.

The Payment Process Problem in a Commerce Platform

A payment process typically goes as follows: When a customer clicks
the “Pay” button, the Payment Gateway (PG) company’s payment window
opens; if payment succeeds, the PG company sends the result to our
commerce platform. Then the commerce platform creates or updates
order data to a paid state based on the payment result received.

However, for a variety of reasons—such as network failure, browser
closing, or server downtime—the success notification might not be
delivered correctly to the commerce platform. This can result in data
mismatches, where the customer’s payment goes through but no order is
generated.

To fix this, we need to periodically query the PG company’s payment
data and compare it with our commerce platform data. This process
identifies missing orders and either automatically processes them or refunds
missing payments. Such a sync process prevents issues where the
customer’s payment is successful but the order is not created, thereby
improving the overall customer experience.

Creating a Payment History Synchronization Scheduler

We will tackle the issue of missed payment-status synchronization by
creating a repeatedly executing scheduler. It will periodically retrieve
payment data, identify missing data, and automatically initiate a refund
process.

Payment Service (PG Company) SDK

Below is a hypothetical SDK provided by the Payment Service (PG).
The SDK offers both payment-data query and payment-cancellation
features.

[Code 5-29] PG Company SDK

function delay(time: number): Promise<undefined>;
function delay<T>(time: number, value: T): Promise<T>;
function delay<T>(time: number, value?: T): Promise<T | undefined> {
  return new Promise((resolve) => {
    setTimeout(() => resolve(value), time);
  });
}

type Payment = {
  pg_uid: string;
  store_order_id: number;
  amount: number;
};

const pgDataPaymentsPages: Payment[][] = [
  [
    { pg_uid: 'PG11', store_order_id: 1, amount: 15000 },
    { pg_uid: 'PG12', store_order_id: 2, amount: 25000 },
    { pg_uid: 'PG13', store_order_id: 3, amount: 10000 }
  ],
  [
    { pg_uid: 'PG14', store_order_id: 4, amount: 25000 },
    { pg_uid: 'PG15', store_order_id: 5, amount: 45000 },
    { pg_uid: 'PG16', store_order_id: 6, amount: 15000 }
  ],
  [
    { pg_uid: 'PG17', store_order_id: 7, amount: 20000 },
    { pg_uid: 'PG18', store_order_id: 8, amount: 30000 }
  ],
];

// Payment Gateway API
const PgApi = {
  /**
   * Retrieves all payment data for a specified period
   * (omitting a time-range parameter for convenience).
   * @param page The page number to retrieve
   */
  async getPayments(page: number) {
    console.log(`Payment data request: https://pg.com/payments?page=${page}`);
    await delay(500);

    const payments = pgDataPaymentsPages[page - 1] ?? [];
    console.log(
      `${payments.length} items: ${payments.map(p => p.pg_uid).join(', ') || '-'}`
    );

    return payments;
  },

  /**
   * Cancels and refunds a payment
   * @param pg_uid The payment ID to cancel
   */
  async cancelPayment(pg_uid: string) {
    console.log(`Cancel request: ${pg_uid}`);
    await delay(300);
    return {
      code: 200,
      message: `${pg_uid}: Cancellation and refund complete`,
      pg_uid,
    };
  }
};

Here’s a breakdown of each part:

1. delay function
Used to simulate asynchronous operations.
2. Payment interface
pg_uid: The unique payment ID on the PG side.
store_order_id: The order number recorded in our commerce
platform’s order database.
3. PgApi.getPayments
Returns all payments made during a certain time, split into
multiple pages.
Takes a page number and returns payment data for that page.
Each page can have up to 3 payments; if there’s no data for the
requested page, it returns an empty array.
(We are simulating a scenario, so we omit the time-range
parameter for convenience.)
4. PgApi.cancelPayment
Takes a payment ID, cancels and refunds the corresponding
payment recorded at the PG.

Our Commerce Platform’s StoreDB

Below is a hypothetical StoreDB that can query order data from our
commerce platform. It returns only the required order information based on
an order ID and indicates completed payments as needed.

[Code 5-30] StoreDB

type Order = {
  id: number;
  amount: number;
  is_paid: boolean;
};

const StoreDB = {
  async getOrders(ids: number[]): Promise<Order[]> {
    console.log(`SELECT * FROM orders WHERE id IN (${ids}) AND is_paid = true;`);
    await delay(100);
    return [
      { id: 1, amount: 15000, is_paid: true },
      { id: 3, amount: 10000, is_paid: true },
      { id: 5, amount: 45000, is_paid: true },
      { id: 7, amount: 20000, is_paid: true },
      { id: 8, amount: 30000, is_paid: true },
    ];
  }
};

StoreDB’s getOrders function simulates a database that filters and
returns only orders in a “paid” state. It queries the database for the given
IDs and returns only the fully paid orders.

We simulate query execution time with delay(100). The returned data,
once compared with the PG payment data, identifies payments that
succeeded but for which our commerce platform never updated the order
state to “paid,” prompting a cancellation/refund if necessary.

Implementing the Payment-Sync Task

This function contains the core logic for payment-history
synchronization. The main steps, with a brief explanation of the tasks we
want to implement, are as follows:

[Code 5-31] syncPayments Comments

async function syncPayments() {
  // 1. Retrieve PG company’s payment data
  //    - Request data page by page
  //    - Merge all pages containing payment data into one

  // 2. Compare PG’s payment data with our commerce platform’s order data

  // 3. Cancel and refund missing payments
  //    - Filter out PG payments that have no matching order
  //    - Cancel these payment IDs (pg_uid) via the cancellation API
}

async function main() {
  await syncPayments();
}

Retrieving PG’s Payment Data (payments)

Following the comments, let’s implement each part step by step. First,
we must retrieve the PG company’s payment data via PgApi.getPayments.
Its spec:

PgApi.getPayments

Returns all payments made during a specified time, possibly split
into multiple pages.
Takes a page number, returning payment data for that page.
Each page has up to 3 payment items; if no data exist for the
requested page, it returns an empty array.

Because we don’t know how many pages we need to request in
advance, we must keep requesting pages until there are no more payment
items. How can we express this in terms of list processing?
If we try to represent a job of unknown length as a list, it’s effectively
an infinite iterable (range(Infinity)).

[Code 5-32] Retrieving Payment History

import { fx, range } from "@fxts/core";

async function syncPayments() {
  // 1. Retrieve PG company’s payment data (payments)
  //    - Request data page by page
  //    - Merge all pages that contain payment data into one

  const payments =
    fx(range(1, Infinity))             // A job list of unknown length
      .map(page => [page, page, page]) // 1 => [1, 1, 1]
      .take(5)                         // Assume it ends after 5 tries (temporary)
      .toArray();                      // Convert the iterable to an array

  console.log(payments);
  // [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4], [5, 5, 5]]
}

[Code 5-32] shows how to represent retrieving PG’s payment data
page by page from a list-processing perspective.

1. range(1, Infinity)
Represents a potentially infinite job of unknown length as an
infinite iterable.
When consumed, yields increasing integers starting from 1. Each
integer represents a page number.
2. map(page => [page, page, page])
For each page number, create something like [1, 1, 1].
We’ll eventually map this to the real API call results.
3. take(5)
For demonstration, we artificially end the job after 5 tries.
We will later expand the logic to stop when no more data are
returned.
This approach allows incremental development and testing.
4. toArray()
Evaluates the iterable into a 2D array.
The result is an array-of-arrays representing page data.
This code uses the infinite iterable (range(1, Infinity)) to express
repeated jobs declaratively. We can clarify the overall structure and
gradually add logic, verifying each piece along the way.

[Code 5-33] Using flat to Unroll

import { fx, range } from "@fxts/core";

async function syncPayments() {
  // 1. Retrieve PG company’s payment data (payments)
  //    - Request data page by page
  //    - Merge all pages that contain payment data into one

  const payments =
    fx(range(1, Infinity))
      .map(page => [page, page, page])
      .take(5)
      .flat()      // * Flatten the 2D iterator into 1D
      .toArray();

  console.log(payments);
  // [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5]
}

Because our final goal is a merged payment history, we add flat to
make the result one-dimensional.

Next, we’ll use toAsync() to handle asynchronous logic, converting
the internal iterator to an AsyncIterator and changing the map callback to
PgApi.getPayments.

[Code 5-34] Using PgApi.getPayments

import { fx, range } from "@fxts/core";

async function syncPayments() {
  // 1. Retrieve PG company’s payment data (payments)
  //    - Request data page by page
  //    - Merge all pages that contain payment data into one

  const payments = await                     // * use await to unwrap the Promise
    fx(range(1, Infinity))                   // A job list of unknown length
      .toAsync()                             // * Convert to an async job
      .map(page => PgApi.getPayments(page))  // * API call to retrieve payment data
      .take(5)                               // Assume it ends after 5 tries (temporary)
      .flat()                                // Flatten the 2D iterator into 1D
      .toArray();                            // Convert the iterator to an array

  console.log(payments);
  // Payment data request: https://pg.com/payments?page=1
  // 3 items: PG11, PG12, PG13
  // Payment data request: https://pg.com/payments?page=2
  // 3 items: PG14, PG15, PG16
  // Payment data request: https://pg.com/payments?page=3
  // 2 items: PG17, PG18
  // Payment data request: https://pg.com/payments?page=4
  // 0 items: -
  // Payment data request: https://pg.com/payments?page=5
  // 0 items: -
  // [
  //   { pg_uid: 'PG11', store_order_id: 1, amount: 15000 },
  //   { pg_uid: 'PG12', store_order_id: 2, amount: 25000 },
  //   { pg_uid: 'PG13', store_order_id: 3, amount: 10000 },
  //   { pg_uid: 'PG14', store_order_id: 4, amount: 25000 },
  //   { pg_uid: 'PG15', store_order_id: 5, amount: 45000 },
  //   { pg_uid: 'PG16', store_order_id: 6, amount: 15000 },
  //   { pg_uid: 'PG17', store_order_id: 7, amount: 20000 },
  //   { pg_uid: 'PG18', store_order_id: 8, amount: 30000 }
  // ] (8 total)
}

We made 5 calls to fetch payment data, each returning 3, 3, 2, 0, 0
payment items, all merged into one. Currently, we use take(5) and thus
still make 2 unnecessary calls that yield no data. If there were more than 15
items total, we’d also miss some data.

To fix this, we’ll use takeWhile so we keep requesting pages only
while we get new data.

[Code 5-35] Applying takeWhile

import { fx, range } from "@fxts/core";

async function syncPayments() {
  // 1. Retrieve PG company’s payment data (payments)
  //    - Request data page by page
  //    - Merge all pages that contain payment data into one

  const payments = await
    fx(range(1, Infinity))
      .toAsync()
      .map(page => PgApi.getPayments(page))
      .takeWhile(({length}) => length > 0) // * Keep requesting as long as length > 0
      .flat()
      .toArray();

  console.log(payments);
  // Payment data request: https://pg.com/payments?page=1
  // 3 items: PG11, PG12, PG13
  // Payment data request: https://pg.com/payments?page=2
  // 3 items: PG14, PG15, PG16
  // Payment data request: https://pg.com/payments?page=3
  // 2 items: PG17, PG18
  // Payment data request: https://pg.com/payments?page=4
  // 0 items: -
  // [
  //   { pg_uid: 'PG11', store_order_id: 1, amount: 15000 },
  //   ... omitted ...
  //   { pg_uid: 'PG18', store_order_id: 8, amount: 30000 }
  // ] (8 total)
}

With takeWhile and length > 0, we see 4 attempts. As explained in
Chapter 5.2, takeWhile consumes items until the condition is false, so it
also consumes the one attempt where length was 0. In this scenario,
takeUntilInclusive might be more suitable.

[Code 5-36] Applying takeUntilInclusive

import { fx, range } from "@fxts/core";

async function syncPayments() {
  // 1. Retrieve PG company’s payment data (payments)
  //    - Request data page by page
  //    - Merge all pages that contain payment data into one

  const payments = await
    fx(range(1, Infinity))
      .toAsync()
      .map(page => PgApi.getPayments(page))
      .takeUntilInclusive(({length}) => length < 3)
      .flat()
      .toArray();

  console.log(payments);
  // Payment data request: https://pg.com/payments?page=1
  // 3 items: PG11, PG12, PG13
  // Payment data request: https://pg.com/payments?page=2
  // 3 items: PG14, PG15, PG16
  // Payment data request: https://pg.com/payments?page=3
  // 2 items: PG17, PG18
  // [
  //   { pg_uid: 'PG11', store_order_id: 1, amount: 15000 },
  //   ... omitted ...
  //   { pg_uid: 'PG18', store_order_id: 8, amount: 30000 }
  // ] (8 total)
}

[Code 5-36] uses takeUntilInclusive to create a more efficient code
path. Since each page can have up to 3 payment items, we stop as soon as
we see a page with fewer than 3 items, thus collecting 8 items total with
only 3 API calls.

Now we have code that efficiently retrieves all payment items over a
certain period. Next, we’ll see how to detect missing orders based on this
data and implement an automatic refund process.

Matching Against Commerce Platform’s Order Data

To compare the PG payment history with our commerce platform’s
order data for missing items, we extract each order ID from payments and
query the relevant orders from the commerce platform.

[Code 5-37] Querying Matching Orders

import { fx, range } from "@fxts/core";

async function syncPayments() {
  // 1. Retrieve PG company’s payment data (payments)
  //    - Merge all pages containing payment data

  const payments = await
    fx(range(1, Infinity))
      .toAsync()
      .map(page => PgApi.getPayments(page))
      .takeUntilInclusive(({length}) => length < 3)
      .flat()
      .toArray();

  // 2. Compare PG payment data with commerce platform’s order data

  const orders = await StoreDB.getOrders(
    payments.map(p => p.store_order_id)
  );
  // [
  //   { id: 1, amount: 15000, is_paid: true },
  //   { id: 3, amount: 10000, is_paid: true },
  //   { id: 5, amount: 45000, is_paid: true },
  //   { id: 7, amount: 20000, is_paid: true },
  //   { id: 8, amount: 30000, is_paid: true },
  // ]

  // ...
}

First, payments contains all retrieved payment data, each specifying a
store_order_id. We pass these IDs to StoreDB.getOrders to get the list of
orders that are in a “paid” state in the commerce platform.

If orders is smaller than payments, that means some payments have


not been reflected in our commerce platform’s order database, requiring a
cancellation/refund for the missing items.

Cancelling and Refunding Missing Payments

In this simulation, payments has 8 entries, while orders returns only 5. This indicates 3 missing orders that the commerce platform never
recognized, so we should refund those 3 payment records.

[Code 5-37a] Querying Matching Orders

1 import { fx, range } from "@fxts/core";
2
3 async function syncPayments() {
4 // ... omitted ...
5
6 // 2. Compare PG payment data with commerce platform’s order data
7
8 const orders = await StoreDB.getOrders(
9 payments.map(p => p.store_order_id)
10 );
11 // [
12 // { id: 1, amount: 15000, is_paid: true },
13 // { id: 3, amount: 10000, is_paid: true },
14 // { id: 5, amount: 45000, is_paid: true },
15 // { id: 7, amount: 20000, is_paid: true },
16 // { id: 8, amount: 30000, is_paid: true },
17 // ]
18
19 // 3. Cancel and refund missing payments
20 // - Filter out PG payments that do not match any commerce platform order
21 // - Cancel these payment IDs (pg_uid) via the cancellation API
22
23 await fx(payments)
24 .toAsync()
25 .reject(p => orders.some(order => order.id === p.store_order_id))
26 .forEach(async p => {
27 const { message } = await PgApi.cancelPayment(p.pg_uid);
28 console.log(message);
29 });
30 // PG12: Cancellation and refund complete
31 // PG14: Cancellation and refund complete
32 // PG16: Cancellation and refund complete
33 }

[Code 5-37a] compares payments and orders and cancels/refunds any payment that’s missing from orders.

We use reject to pick out items in payments that are not in orders.
Specifically, orders.some(order => order.id === p.store_order_id)
checks if each payment’s store_order_id is found in orders; items that
match are removed, leaving only missing payments. Then for each missing
payment, we call PgApi.cancelPayment to cancel and refund the payment.
We log the result message to confirm each missing payment was refunded.

Optimizing Time Complexity Using Hashing

The previous code uses orders.some within reject, so in the worst case we have O(n * m) complexity, where n is the length of payments and m
is the length of orders.

We can optimize by converting orders into a hash structure so that checking whether an order_id exists takes O(1), effectively reducing the total time complexity.

[Code 5-38] Switching to a Hash-Based Lookup

1 // Transform key-value array into an object for a hash structure
2 const ordersById = Object.fromEntries(
3 map(order => [order.id, true], orders) // Convert orders array into [key, value]
4 );
5 // { 1: true, 3: true, 5: true, 7: true, 8: true }
6
7 await fx(payments)
8 .toAsync()
9 .reject(p => ordersById[p.store_order_id]) // O(1) check for matched payments
10 .forEach(async p => {
11 const { message } = await PgApi.cancelPayment(p.pg_uid);
12 console.log(message);
13 });
We create ordersById by iterating over orders and creating
[order.id, true] pairs, then passing them to Object.fromEntries. This
produces a hash map to quickly check existence via
ordersById[p.store_order_id].
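To see the hash-lookup idea in isolation, here is a self-contained sketch with plain arrays. The simplified Payment/Order shapes and the findMissingPayments helper are illustrative, not from the book’s code:

```typescript
// Illustrative, simplified shapes (not the book's full Payment/Order types).
type Payment = { pg_uid: string; store_order_id: number };
type Order = { id: number };

function findMissingPayments(payments: Payment[], orders: Order[]): Payment[] {
  // Build the lookup once: O(m)
  const ordersById = Object.fromEntries(orders.map(o => [o.id, true]));
  // Each membership check is now O(1), so the whole pass is O(n + m)
  return payments.filter(p => !ordersById[p.store_order_id]);
}

const missing = findMissingPayments(
  [
    { pg_uid: "PG11", store_order_id: 1 },
    { pg_uid: "PG12", store_order_id: 2 },
    { pg_uid: "PG13", store_order_id: 3 },
  ],
  [{ id: 1 }, { id: 3 }],
);
console.log(missing.map(p => p.pg_uid)); // → [ 'PG12' ]
```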

We then optimize the reject step. Instead of orders.some(...), we do a quick O(1) hash lookup. Or, if preferred, we can use a built-in Map.

[Code 5-39] Using a Map

1 const ordersMapById = new Map(
2 map(order => [order.id, true], orders)
3 );
4
5 await fx(payments)
6 .toAsync()
7 .reject(p => ordersMapById.has(p.store_order_id)) // Use Map's has method
8 .forEach(async p => {
9 const { message } = await PgApi.cancelPayment(p.pg_uid);
10 console.log(message);
11 });

[Code 5-39] uses JavaScript’s built-in Map instead of an object. We transform the orders array into key-value pairs (via map(order =>
[order.id, true])) and pass them to the Map constructor. Checking if a
store_order_id is in orders is now a simple Map.has(...) call in O(1).

For large datasets, using an iterator-based map function is more memory-efficient, and Map can directly accept an iterable, minimizing data
copying. Also, for large data, Map typically outperforms an Object.

Once familiar with list processing, you can easily transform data
structures and implement such optimizations. This is invaluable for large
datasets, where you want to optimize both performance and clarity.

Maintaining a Stable Interval for Asynchronous Work

Finally, we need to run syncPayments repeatedly at a fixed interval. Should we consider bringing in an external library for scheduling? If you’re
comfortable with list processing and asynchronous operations, you can
implement repeated scheduling with stable time intervals using them.

[Code 5-40] shows how to control asynchronous tasks reliably with list
processing.

[Code 5-40] Safe Asynchronous Repetition

1 async function runScheduler() {
2 await fx(range(Infinity))
3 .toAsync()
4 .forEach(() => Promise.all([
5 syncPayments(),
6 delay(10000)
7 ]));
8 }

This code repeatedly executes syncPayments every 10 seconds. Here’s how it works:

1. range(Infinity): an infinite iterable

We represent a never-ending repetitive task as an infinite iterable.
Converted to asynchronous with toAsync().

2. Repeated execution with forEach

We use forEach to iterate over the infinite sequence (one iteration = one round of tasks).
Group the asynchronous tasks with Promise.all and process
them in parallel.

3. Concurrent execution with Promise.all

syncPayments() retrieves payment data and can take a variable amount of time.
delay(10000) is a 10-second wait.
Promise.all waits for both tasks to finish.

4. Maintaining a stable interval

If syncPayments takes longer than 10 seconds, delay(10000)
finishes first, and the actual interval is determined by how long
syncPayments took.
If syncPayments finishes in less than 10 seconds, we wait until
delay(10000) completes before starting the next loop iteration.
By adjusting the delay times inside PgApi.getPayments and
PgApi.cancelPayment code, you can verify this behavior.
This ensures a stable interval and helps manage external API load.
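The interval logic above can be verified in isolation. The code below is a sketch: delay is assumed to be a simple promise-based timer (its definition is not shown in this excerpt), and runRound is a hypothetical helper that isolates a single loop iteration.

```typescript
// Assumed `delay` helper: a promise that resolves after `ms` milliseconds.
function delay<T = undefined>(ms: number, value?: T): Promise<T> {
  return new Promise(resolve => setTimeout(() => resolve(value as T), ms));
}

// Hypothetical helper: runs one scheduler round and returns how long it
// actually took. Promise.all makes the round last max(task, interval).
async function runRound(
  task: () => Promise<unknown>,
  intervalMs: number,
): Promise<number> {
  const start = Date.now();
  await Promise.all([task(), delay(intervalMs)]);
  return Date.now() - start;
}
```

A fast task still yields a round of at least intervalMs, while a slow task stretches the round to its own duration, which is exactly the stable-interval behavior described above.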

Hence, list processing elegantly solves the problem of repeatedly


scheduling tasks at a stable time interval. This technique is quite useful for
controlling external API calls or complying with rate-limit policies. List
processing provides stability, efficiency, maintainability, and readability for
backend operations such as data sync, crawlers, or API calls.

In particular, we effectively used Promise.all. Promise.all is a powerful tool that lets you declaratively manage asynchronous logic. It
keeps tasks independent while running in parallel, simplifying and
clarifying the logic for complex asynchronous workflows.

The difference between Promise.all’s completion and Promise.race’s completion is as intuitive as taking Math.max vs. Math.min of the tasks’ durations: Promise.all settles when the slowest task finishes, while Promise.race settles when the fastest does. This declarative approach effectively manages complex asynchronous tasks and keeps code concise and understandable.
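A small timing sketch makes the analogy concrete. The sleep and timed helpers here are illustrative, not from the book’s code:

```typescript
// Resolve after `ms` milliseconds, yielding `ms` itself.
const sleep = (ms: number) =>
  new Promise<number>(resolve => setTimeout(() => resolve(ms), ms));

// Measure how long a promise takes to settle.
async function timed(p: Promise<unknown>): Promise<number> {
  const start = Date.now();
  await p;
  return Date.now() - start;
}

(async () => {
  const allMs = await timed(Promise.all([sleep(100), sleep(300)]));
  const raceMs = await timed(Promise.race([sleep(100), sleep(300)]));
  // allMs is roughly Math.max(100, 300); raceMs roughly Math.min(100, 300)
  console.log({ allMs, raceMs });
})();
```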

Handling Maximum Request Size Limits Efficiently

Modern backend systems often have maximum request size limits in external APIs, databases, or microservices. Such limits are essential for
stable service operation, but failing to manage them can lead to service
disruptions or unexpected errors.

For example, if StoreDB.getOrders can only handle 5 IDs per call, how do we safely manage that limit? Below is some sample code to
simulate such a limit and show how list processing handles it.
[Code 5-41] Adding Request-Limit Handling

1 const StoreDB = {
2 async getOrders(ids: number[]): Promise<Order[]> {
3 if (ids.length > 5) {
4 throw new Error(
5 `Too many IDs: can only request up to 5. (Received: ${ids.length})`
6 );
7 }
8 console.log(`SELECT * FROM orders WHERE id IN (${ids}) AND is_paid = true;`);
9 await delay(100);
10 return [
11 { id: 1, amount: 15000, is_paid: true },
12 { id: 3, amount: 10000, is_paid: true },
13 { id: 5, amount: 45000, is_paid: true },
14 { id: 7, amount: 20000, is_paid: true },
15 { id: 8, amount: 30000, is_paid: true },
16 ];
17 }
18 };

[Code 5-41] throws an error if the ids array exceeds 5. As a result, calling StoreDB.getOrders with all store_order_id from payments at
once would fail if there are more than 5 IDs.

[Code 5-42] Problematic Code: Error Occurs

1 const orders = await StoreDB.getOrders(
2 payments.map(p => p.store_order_id)
3 );
4
5 // Error: Too many IDs: can only request up to 5. (Received: 8)

We’re trying to pass 8 store_order_ids at once, exceeding the limit. We can fix this by splitting requests into smaller batches.

How much code do we need to change? List processing gives an easy way to fine-tune request logic. [Code 5-43] shows how to send at most 5 IDs per request using chunk.
[Code 5-43] Safely Splitting Requests: chunk and flatMap

1 const orders = await
2 fx(payments)
3 .map(p => p.store_order_id) // Extract store_order_id from each payment
4 .chunk(5) // * Split into groups of 5
5 .toAsync() // Convert to an async iterable
6 .flatMap(StoreDB.getOrders) // * Process each group of IDs via StoreDB.getOrders
7 .toArray(); // Convert result to an array

[Code 5-43] uses chunk and flatMap to elegantly handle request-size limits:

1. map
Extract store_order_id from each payment. This yields an array
of IDs for the queries.
2. chunk(5)
Split the ID array into groups of 5, satisfying the request limit.
Each group has at most 5 IDs.
3. toAsync
Convert the synchronous iterable to an asynchronous one, letting
you handle asynchronous operations in a list-processing way.
4. flatMap
Pass each chunk of IDs to StoreDB.getOrders, flattening their
results into a single array.
5. toArray
Convert everything into a single array at the end.
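The chunking step can be sketched independently of @fxts/core. The chunkEvery generator below is an illustrative stand-in showing exactly how the 8 IDs become groups of at most 5:

```typescript
// Stand-alone sketch of chunking (the book uses @fxts/core's chunk; this
// generator reproduces the same splitting behavior).
function* chunkEvery<T>(size: number, iterable: Iterable<T>): IterableIterator<T[]> {
  let group: T[] = [];
  for (const item of iterable) {
    group.push(item);
    if (group.length === size) {
      yield group; // emit a full group
      group = [];
    }
  }
  if (group.length > 0) yield group; // final, possibly smaller group
}

// 8 IDs split into groups of at most 5: one request of 5 IDs, one of 3.
const groups = [...chunkEvery(5, [1, 2, 3, 4, 5, 6, 7, 8])];
console.log(groups); // → [ [ 1, 2, 3, 4, 5 ], [ 6, 7, 8 ] ]
```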

This code respects the ID limit while querying data efficiently. Requests are always in groups of at most 5 IDs, so we never exceed the
limit. This prevents unexpected errors and ensures stable data access.

Failure to consider such limits can cause unexpected errors in backend systems. These issues often lurk in code, risking stability in live
environments.

For instance, some AWS SDK APIs have constraints on request size or the number of repeated calls: AWS DynamoDB’s BatchGetItem accepts at most 100 keys per request.

Similar constraints appear frequently in Payment Gateway APIs or microservices (MSA). Properly handling them is essential for building
stable, efficient backend systems.

Counting Up Front for Efficiency

When designing a service or SDK like PgApi, providing a feature to quickly determine the total page count (Page Count) in advance can greatly
improve efficiency. This allows you to estimate the necessary number of
requests, eliminating the need to do repeated checks with takeWhile or
takeUntilInclusive. A pre-check for page counts is generally lighter than
retrieving all payment data, saving system resources. If you can detect that
no data is needed, you can skip the requests entirely, further preventing
unnecessary tasks. Such design can optimize both the service provider (PG)
and the user (commerce platform), boosting system-wide performance.

[Code 5-44] adds a page-count API to PgApi. Before actually fetching the payment data, this API returns the total page count.

[Code 5-44] Adding a Page-Count API

1 // Payment Gateway API
2 const PgApi = {
3 /**
4 * Returns the total number of pages for a specified time
5 * (omitting time-range args for convenience)
6 * @returns The total page count
7 */
8 async getPageCount() {
9 console.log('Page count request: https://pg.com/payments/page-count');
10 await delay(50); // Simulate a lightweight DB count
11 return pgDataPaymentsPages.length; // Return total pages stored for payment data
12 },
13
14 /**
15 * Retrieves all payment data for a certain time
16 * (omitting time-range args for convenience)
17 * @param page The page number
18 */
19 async getPayments(page: number) {
20 console.log(`Payment data request: https://pg.com/payments?page=${page}`);
21 await delay(500);
22
23 const payments = pgDataPaymentsPages[page - 1] ?? [];
24 console.log(
25 `${payments.length} items: ${payments.map(p => p.pg_uid).join(', ') || '-'}`
26 );
27
28 return payments;
29 },
30 // ... omitted ...
31 }

With getPageCount, you can precisely set the number of times you’ll
call getPayments, avoiding unneeded requests. Page-count queries cost less
than retrieving full payment data, conserving resources. Sometimes you
might detect 0 pages and skip the entire request. This design lets both sides
optimize performance.

Now, using this page-count API, the final code for retrieving all
payment data effectively looks like:

[Code 5-45] Querying Matching Orders

1 async function syncPayments() {
2 // 1. Retrieve PG company’s payment data (payments)
3
4 // Old code
5 // const payments = await
6 // fx(range(1, Infinity)) // * Replace Infinity with totalPages+1
7 // .toAsync()
8 // .map(page => PgApi.getPayments(page))
9 // .takeUntilInclusive(({length}) => length < 3) // * remove takeUntilInclusive
10 // .flat()
11 // .toArray();
12
13 // Updated code
14 const totalPages = await PgApi.getPageCount();
15
16 const payments = await
17 fx(range(1, totalPages + 1))
18 .toAsync()
19 .map(page => PgApi.getPayments(page))
20 .flat()
21 .toArray();
22
23 // ... omitted ...
24 }
This code calls PgApi.getPageCount in advance, retrieves the total
page count, then uses range(1, totalPages + 1) to request exactly the
needed pages. We no longer need an additional condition like
takeUntilInclusive to determine when to stop, because range(1,
totalPages + 1) will definitely request the correct number of pages.

For example, if totalPages is 2, we create range(1, 3), which yields pages 1 and 2, exactly two calls to PgApi.getPayments. If totalPages is 0, we create an empty iterator and never call PgApi.getPayments. The result is an empty array of payments, guaranteeing no unneeded requests.
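These edge cases can be checked with a minimal range sketch, assumed here to behave like the library’s range (start inclusive, end exclusive); rangeOf is an illustrative name:

```typescript
// Minimal range(start, end) generator: yields start, start+1, …, end-1,
// and yields nothing when end <= start (assumed library-like behavior).
function* rangeOf(start: number, end: number): IterableIterator<number> {
  for (let i = start; i < end; i++) yield i;
}

const pagesWhenTwo = [...rangeOf(1, 2 + 1)];  // totalPages = 2
const pagesWhenZero = [...rangeOf(1, 0 + 1)]; // totalPages = 0
console.log(pagesWhenTwo);  // → [ 1, 2 ]  (exactly two getPayments calls)
console.log(pagesWhenZero); // → []        (no requests at all)
```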

Improving Efficiency With Parallelism

We’ve just seen how a page-count API can skip unnecessary calls.
Next, we can use concurrency to further improve efficiency. Since we know
the total number of pages, there’s no need to request them sequentially. We
can request them in parallel to reduce overall time.

Before we apply this to the syncPayments example, let’s see how to use the concurrent method. Below is code that consumes an asynchronous iterator sequentially:

[Code 5-46] Sequential Execution

1 await fx([1, 2, 3, 4, 5, 6])
2 .toAsync()
3 .map((a) => delay(1000, a))
4 .toArray();
5 // 6 seconds
6
7 // evaluation
8 // ┌─────┐ ┌─────┐ ┌─────┐ ┌─────┐ ┌─────┐ ┌─────┐
9 // │ 1 │──│ 2 │──│ 3 │──│ 4 │──│ 5 │──│ 6 │
10 // └──┬──┘ └──┬──┘ └──┬──┘ └──┬──┘ └──┬──┘ └──┬──┘
11 // map │ │ │ │ │ │
12 // (1) (2) (3) (4) (5) (6)
13 // │ │ │ │ │ │
14 // ▼ ▼ ▼ ▼ ▼ ▼

We can add concurrent to run code in parallel. concurrent consumes the iterator with up to the specified number of items simultaneously.
[Code 5-47] concurrent

1 await fx([1, 2, 3, 4, 5, 6])
2 .toAsync()
3 .map((a) => delay(1000, a))
4 .concurrent(2)
5 .toArray();
6 // 3 seconds
7
8 // evaluation
9 // ┌─────┐ ┌─────┐ ┌─────┐ ┌─────┐ ┌─────┐ ┌─────┐
10 // │ 1 │──│ 2 │──│ 3 │──│ 4 │──│ 5 │──│ 6 │
11 // └──┬──┘ └──┬──┘ └──┬──┘ └──┬──┘ └──┬──┘ └──┬──┘
12 // map │ │ │ │ │ │
13 // concurrent(2) (1) (1) (2) (2) (3) (3)
14 // │ │ │ │ │ │
15 // ▼ ▼ ▼ ▼ ▼ ▼
16
17 await fx([1, 2, 3, 4, 5, 6])
18 .toAsync()
19 .map((a) => delay(1000, a))
20 .concurrent(4)
21 .toArray();
22 // 2 seconds
23
24 // evaluation
25 // ┌─────┐ ┌─────┐ ┌─────┐ ┌─────┐ ┌─────┐ ┌─────┐
26 // │ 1 │──│ 2 │──│ 3 │──│ 4 │──│ 5 │──│ 6 │
27 // └──┬──┘ └──┬──┘ └──┬──┘ └──┬──┘ └──┬──┘ └──┬──┘
28 // map │ │ │ │ │ │
29 // concurrent(4) (1) (1) (1) (1) (2) (2)
30 // │ │ │ │ │ │
31 // ▼ ▼ ▼ ▼ ▼ ▼
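Under the hood, a bounded-concurrency operator behaves like a small worker pool. The mapConcurrent function below is an illustrative stand-in for the idea, not the library’s implementation (the real concurrent works lazily on async iterables, which this eager sketch simplifies away):

```typescript
// Keep at most `limit` promises in flight while preserving result order.
async function mapConcurrent<T, R>(
  limit: number,
  f: (x: T) => Promise<R>,
  items: T[],
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0; // index of the next unclaimed item
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++; // claim an index synchronously, then await
      results[i] = await f(items[i]);
    }
  }
  // Spawn `limit` workers; each pulls the next item as soon as it is free.
  const workers = Array.from(
    { length: Math.min(limit, items.length) },
    () => worker(),
  );
  await Promise.all(workers);
  return results;
}

// 6 tasks of ~1s with limit 2 would take ~3s, matching [Code 5-47].
```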

Since we can now know how many pages to request in advance, we can simply add concurrency to the existing syncPayments to speed up the
retrieval time.

[Code 5-48] Parallel Requests

1 async function syncPayments() {
2 // 1. Retrieve PG company’s payment data (payments)
3
4 const totalPages = await PgApi.getPageCount(); // 3
5 // (~50ms)
6
7 const payments = await
8 fx(range(1, totalPages + 1))
9 .toAsync()
10 .map(page => PgApi.getPayments(page))
11 .concurrent(totalPages) // * Request all pages in parallel
12 .flat()
13 .toArray();
14 // If totalPages=3, we do 3 getPayments calls simultaneously
15 // ~500ms
16
17 // ... omitted ...
18 }

[Code 5-36] took 1500ms, calling getPayments 3 times in sequence (500ms each). [Code
5-48] calls getPageCount once (50ms) and then requests all 3 pages in parallel
(~500ms total), for ~550ms total.

[Code 5-48] uses concurrent(totalPages) to process payment-data requests in parallel. Because it requests each page simultaneously, total time
is drastically reduced.

Sequentially, each page’s data must be fetched in turn, so total time is the sum of each request. In parallel, all pages are requested together,
limiting total time to the slowest request’s response.

For many pages, concurrency greatly boosts speed. This minimizes network wait and effectively uses system resources.

Additionally, parallel requests depend on the target API’s limits (e.g., rate limits). If we can only make two simultaneous requests, we pass
concurrent(2):

[Code 5-49] Limiting the Parallelism

1 async function syncPayments() {
2 // 1. Retrieve PG company’s payment data (payments)
3
4 const totalPages = await PgApi.getPageCount(); // 3
5 // ~50ms
6
7 const RATE_LIMIT = 2;
8
9 const payments = await
10 fx(range(1, totalPages + 1))
11 .toAsync()
12 .map(page => PgApi.getPayments(page))
13 .concurrent(RATE_LIMIT) // * Always request at most 2 pages concurrently
14 .flat()
15 .toArray();
16 // total of 3 getPayments calls, with at most 2 at once
17 // ~1000ms
18
19 // ... omitted ...
20 }

[Code 5-49] ensures both efficiency of parallelism and stability of limiting concurrent requests. This approach can maximize speed on large
datasets while also respecting external API rate limits.

List-Processing-Based Async/Concurrent Programming

We’ve explored multiple list-processing techniques for tackling common backend programming issues.

To address real problems like data mismatches, we built a Payment-History Synchronization Scheduler that effectively and reliably
synchronizes data.

We also discussed a hash-based approach to optimize time complexity, and how to improve efficiency in data matching for large
datasets.

By ensuring stable intervals in asynchronous work, we learned how to manage system load (e.g., external API calls), underlining the importance of managing asynchronous logic in an intuitive, declarative style.

We then saw how to handle maximum request-size limits, a frequent scenario in production, safely by chunking and applying list processing.
This approach shows how these techniques serve as real practical tools.

We then discussed using an API that provides a prior count, allowing us to skip unnecessary calls and maximize efficiency. Finally, we
explored parallelism to further reduce waiting times by leveraging
concurrency for optimal speed.

These approaches are critical for reliability and scalability in modern backend system design. Above all, handling these problems with list
processing not only achieves performance and effective logic but also
provides a powerful way to approach complex problems in a declarative,
structured manner. The benefits span code readability, maintainability,
resource optimization, and developer productivity.

From personal experience, I’ve found that using list processing in backend programming yields much greater benefits than an imperative
style, especially for a team environment. Controlling data flow declaratively
while naturally solving asynchronous and parallel concerns is, I believe, a
key approach that combines productivity and reliability.

5.4 Patternizing List Processing


So far, we have experienced the practical benefits of list processing by
solving various problems. In particular, list processing is a powerful tool
that can represent complex problems in a simple and declarative manner.
What’s more interesting is that these problems share a common approach—
allowing us to systematize and patternize the list-processing approach.

In this chapter, we introduce some examples of patternizing list processing in various combinations, helping readers understand it more
structurally. This is an attempt to reorganize both widely known and
personally favored techniques and patterns into reusable solutions for
solving recurring problems. The content here does not stem from any
official theory book or standard. Rather, it is a pragmatic proposal to help
people use list processing more easily. We hope these ideas will help you
remember list-processing techniques better and apply them effectively when
needed.

Transform-Accumulate (map-reduce)

The transform-accumulate pattern is among the most widely used in list processing. You transform the initial iterable with map, then accumulate
the result with reduce to derive a final outcome. This pattern is mainly used
when the program’s outcome is a single value (not an array), or when you
want to transform data into an object, number, string, etc. It is highly
suitable for data aggregation, transformations, and accumulation tasks,
solving various problems simply and declaratively.

Total Quantity of Products

[Code 5-50] uses the transform-accumulate pattern to extract (transform) each product’s quantity from a product list and then accumulate
them to get the total quantity.

[Code 5-50] Total Quantity of Products

1 const totalQuantity = products =>
2 products
3 .map((product) => product.quantity)
4 .reduce((a, b) => a + b, 0);

The callback in reduce defines how to accumulate data. The function can do more than just add numbers: it can combine or transform any
structure, including strings, objects, arrays, and even custom data types,
making reduce a powerful tool.

Converting a Query String to an Object

Below is an example of using the transform-accumulate pattern to convert a query string into an object. A query string is a string of key-value
pairs separated by &. Converting it into an object maps each key and value
to a property.

[Code 5-51] Query String to Object

1 const queryString = "name=John&age=30&city=Seoul";
2
3 const queryObject = queryString
4 .split("&")
5 .map((param) => param.split("="))
6 .map(([key, val]) => ({ [key]: val }))
7 .reduce((a, b) => Object.assign(a, b), {});
8
9 console.log(queryObject);
10 // { name: "John", age: "30", city: "Seoul" }
Converting an Object to a Query String

Below is an example of using the transform-accumulate pattern to convert an object’s key-value pairs into a query string. In this pattern, map
handles data transformation, and reduce traverses each element to
accumulate the final result. The object’s key-value pairs are serialized in the
form of a query string.

[Code 5-52] Object to Query String

1 const params = { name: "John", age: "30", city: "Seoul" };
2
3 const queryString =
4 Object.entries(params)
5 .map(entry => entry.map(encodeURIComponent))
6 .map(([key, value]) => `${key}=${value}`)
7 .reduce((a, b) => `${a}&${b}`);
8
9 console.log(queryString);
10 // "name=John&age=30&city=Seoul"

join is used to convert an array into a string, inserting a separator between the array’s values. Essentially, join is an abstraction of what we could implement using reduce. If we apply join here, we get the following:

[Code 5-53] Object to Query String 2

1 const queryString2 =
2 Object.entries(params)
3 .map(entry => entry.map(encodeURIComponent))
4 .map((entry) => entry.join('='))
5 .join('&');
6
7 console.log(queryString2);
8 // "name=John&age=30&city=Seoul"
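As noted above, join can be seen as an abstraction over reduce. Here is a minimal reduce-based version; joinWith is a hypothetical helper name, not a library function:

```typescript
// A reduce-based join: fold the strings together, inserting the separator.
// The empty-array case is handled separately because reduce with no
// initial value throws on an empty array.
const joinWith = (sep: string, strs: string[]): string =>
  strs.length === 0 ? "" : strs.reduce((a, b) => `${a}${sep}${b}`);

const params = { name: "John", age: "30", city: "Seoul" };
const queryString = joinWith(
  "&",
  Object.entries(params).map(([k, v]) => `${k}=${v}`),
);
console.log(queryString); // → "name=John&age=30&city=Seoul"
```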

Using currying and pipes, we can make the same code even more
concise:

[Code 5-54] Object to Query String 3

1 const queryString3 = pipe(
2 Object.entries(params),
3 map(map(encodeURIComponent)),
4 map(join('=')),
5 join('&'),
6 );
7
8 console.log(queryString3);
9 // "name=John&age=30&city=Seoul"

Additionally, the examples of summing sequences, scores, quantities, and prices in Section 5.1 also employ the transform-accumulate pattern. In fact,
even the code in Section 5.3 that converts data into a hash or transforms a
promise-bearing iterable into an array can be regarded as the transform-
accumulate pattern, since helper functions such as Object.fromEntries
and Array.fromAsync essentially abstract operations that can be
implemented using reduce.

The transform-accumulate pattern is the most basic yet powerful list-processing pattern for generating a final result by transforming data and
then accumulating it. It can effectively handle everything from adding
numbers to serializing strings and converting objects, providing a flexible
approach to a wide variety of problems.

Also note that code like map(map(f)) in [Code 5-54] can be useful in
many scenarios.

Nested-Transform (nested-map)

The nested-transform pattern is used in list processing when you want to handle nested data structures or perform transformations at multiple
levels in the data. This pattern involves calling map inside another map (like
map(map(f))). It’s particularly suitable when you want to transform
external collections while also performing an internal transformation on
them.

This pattern is extremely helpful for hierarchical data such as trees or 2D arrays.
Transforming Tree Structures

The nested-transform pattern also helps when traversing a tree structure to transform each node and its children.

[Code 5-55] tree

1 const tree = [
2 { id: 1, children: [{ id: 2 }, { id: 3 }] },
3 { id: 4, children: [{ id: 5 }] },
4 ];
5
6 const transformedTree = tree.map(({ id, children }) => ({
7 name: `parent-${id}`,
8 children: children.map(child => ({ name: `child-${child.id}` })),
9 }));
10
11 console.log(transformedTree);
12 // [
13 // { name: "parent-1", children: [ { name: "child-2" }, { name: "child-3" } ] },
14 // { name: "parent-4", children: [ { name: "child-5" } ] }
15 // ]

Printing a Calendar (2D array join)

This example illustrates various list-processing patterns used in handling 2D arrays. In particular, it emphasizes the process of generating
and shaping data, then converting it for an intuitive final output. We use
range, flat, chunk, map, and join to generate and print calendar data.

[Code 5-56] Generating a Calendar

1 import { pipe, flat, range, chunk, toArray, map, join } from "@fxts/core";
2
3 const getMonthEndDates = (monthEnd: Date) =>
4 monthEnd.getDay() === 6
5 ? []
6 : range(
7 monthEnd.getDate() - monthEnd.getDay(),
8 monthEnd.getDate() + 1,
9 );
10
11 const generateCalendar = (prevMonthEnd: Date, currentMonthEnd: Date) =>
12 pipe(
13 flat([
14 getMonthEndDates(prevMonthEnd), // range(29, 31) => from Sep 29 to Sep 30
15 range(1, currentMonthEnd.getDate() + 1), // range(1, 32) => from Oct 1 to Oct 31
16 range(1, 6 - currentMonthEnd.getDay() + 1) // range(1, 3) => from Nov 1 to Nov 2
17 ]),
18 chunk(7),
19 toArray,
20 );
21
22 const formatCalendar = (calendarWeeks: number[][]) =>
23 pipe(
24 calendarWeeks,
25 map(map(day => (day < 10 ? ` ${day}` : `${day}`))),
26 map(join(' ')),
27 join('\n'),
28 );
29
30 const renderCalendar = (year: number, month: number) =>
31 pipe(
32 generateCalendar(
33 new Date(year, month - 1, 0), // last day of the previous month
34 new Date(year, month, 0) // last day of this month
35 ),
36 formatCalendar,
37 console.log
38 );
39
40 renderCalendar(2024, 10);
41 // Output (October 2024):
42 // 29 30 1 2 3 4 5
43 // 6 7 8 9 10 11 12
44 // 13 14 15 16 17 18 19
45 // 20 21 22 23 24 25 26
46 // 27 28 29 30 31 1 2

renderCalendar Function

1. JavaScript’s new Date peculiarity

In JavaScript, new Date(year, month, day) sets month from 0 (January) to 11 (December).
By adding 1 to month and then using day=0, we get the last day of
that month.
For example, new Date(2024, 9, 0) returns Sep 30, the last day
of September.

2. Generating and Transforming the Calendar


generateCalendar: Creates date data for last month, this month,
and next month, grouping them by week.
formatCalendar: Converts a 2D array of numbers into a readable
string.

3. Composition and Reusability

renderCalendar composes generateCalendar and formatCalendar to create and output the final calendar.
If you replace only the formatCalendar function, you could
extend it to produce HTML, CSV, or other formats.

generateCalendar Function

generateCalendar creates a monthly calendar for a specific year and month, returning it as a 2D array grouped by weeks.

1. getDate

Returns the day of the month from a Date object.

2. getDay

Returns the day of the week (0 for Sunday, 6 for Saturday).

3. Previous Month’s Dates (getMonthEndDates)

Use range to include the last few days of the previous month.
monthEnd.getDate() → 30 (last day of September)
monthEnd.getDay() → 1 (Sep 30 is a Monday)
Start: 30 - 1 = 29
End: 30 + 1 = 31
Result: range(29, 31) => [29, 30] (the last week of September)
If monthEnd.getDay() is Saturday (6), we return an empty array
so they don’t appear in the calendar.

4. This Month’s Dates


Use range to generate days from 1 to the last day of this month.
currentMonthEnd.getDate() → 31 (October’s last day)
Start: 1
End: 31 + 1 = 32
Result: range(1, 32) => [1, 2, …, 31] (October)

5. Next Month’s Dates

Use range to fill up leftover days after this month’s last weekday.
currentMonthEnd.getDay() → 4 (Oct 31 is a Thursday)
Needed days: 6 - 4 + 1 = 3
Result: range(1, 3) => [1, 2] (the first week of November)
If the last day of this month were Saturday, we’d do 6 - 6 + 1 =
1, resulting in range(1, 1) which is empty.

6. The three range results

Previous month’s dates: [29, 30]
This month’s dates: [1, 2, ..., 31]
Next month’s dates: [1, 2]

7. Grouping by Week

Since it’s not grouped by week yet, we use flat([r1, r2, r3])
to flatten them into a 1D iterator, and chunk(7) to regroup them
into weekly arrays.

formatCalendar Function

formatCalendar converts those weekly arrays into a more readable string layout:

1. Formatting days

map(map(day => (day < 10 ? ` ${day}` : `${day}`))) adds a space for single-digit days.

2. Transforming inner arrays


map(join(' ')) merges each week into a space-separated string.

3. Final output

join('\n') merges each weekly string with a newline to produce a final calendar format.

Key Patterns in Use

This example demonstrates how to handle 2D data by performing


transformations and accumulations at each level:

1. Flattening then regrouping

chunk(7, flat([range(), range(), range()])): merges date ranges into 1D, then re-splits them into 7-day chunks.

2. Nested data transformations

map(map(...)): performs transformations at each nested level.

3. Level-based data accumulation

map(join(' ')) merges the inner array (week data) into a space-separated string.
join('\n') merges weekly strings with newlines to form the
final calendar output.

This calendar example displays how list processing yields concise, readable code that’s well-structured in patterns.

If it were written imperatively, you’d have multiple nested for loops with complicated conditionals, plus repeated temporary variables and array
pushes. That would lengthen the code, reduce readability and
maintainability, and obscure the code’s intent.

In contrast, list processing expresses data flow declaratively, making it easy to see how the data is handled at each step and to utilize reusable
patterns and structures.

Iterator-Effect (Iterator-forEach)

The iterator-effect pattern creates an iterator first, then consumes it lazily to produce side effects (forEach). It is often used when data is
transformed or retrieved, and you consume each piece to do a particular
task (like logging, output, or network calls). The final data is not produced;
rather, the operation itself is the goal.

Logging Transformed Data to the Console

Below is an example of using the iterator-effect pattern to transform a list of numbers by doubling each number, then printing each value to the
console.

[Code 5-57] Logging Transformed Data

1 fx(range(5))
2 .map(x => x * 2)
3 .forEach(x => console.log(`Processed: ${x}`));

Payment Sync Scheduler Code

The code for controlling safe asynchronous repeats introduced in
Section 5.3 is also an example of the iterator-effect pattern.

[Code 5-58] Safe Asynchronous Repetition (From Code 5-40)

1 async function main() {
2 await fx(range(Infinity))
3 .toAsync()
4 .forEach(() => Promise.all([
5 syncPayments(),
6 delay(10000)
7 ]));
8 }
Similarly, the code for payment cancellation and repeated execution
uses the iterator-effect pattern. It consumes a list of payment items needing
cancellation, calling the cancellation API.

[Code 5-59] Cancel Payments (From Code 5-39)

1 await fx(payments)
2 .toAsync()
3 .reject(p => ordersMapById.has(p.store_order_id))
4 .forEach(async p => {
5 const { message } = await PgApi.cancelPayment(p.pg_uid);
6 console.log(message);
7 });

Using forEach to Isolate Side Effects

forEach is a method that returns no value, explicitly designed to
perform side effects in the callback (like logging, printing, or calling an
API). It calls a callback on each element of the array (or iterable) but
doesn’t return anything, clarifying that we separate data transformation
from the side effects.

Isolating side effects in this way helps maintainability. Predicting what
changes occur is easier, and debugging is simpler if a bug arises. For
instance, pure data transformations happen in map, filter, and reduce,
while side effects (DOM deletion, file writes, log creation, API calls, etc.)
occur in forEach.
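As a small illustration of this separation (plain arrays here, not the book's fx chain), the pure steps below build the data, and the only mutation of outside state happens in forEach:

```typescript
// Pure steps: select and transform data without touching outside state.
const doubledEvens = [1, 2, 3, 4, 5]
  .filter(n => n % 2 === 0)
  .map(n => n * 2);

// The side effect is isolated in forEach: writing to an external log.
const log: string[] = [];
doubledEvens.forEach(n => log.push(`Processed: ${n}`));

console.log(log); // ["Processed: 4", "Processed: 8"]
```

If a bug appears in the output, it is immediately clear whether to look at the pure transformation steps or at the effectful forEach callback.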

Sometimes you need to produce a return value as well as a side effect.
In that case, you could define something like mapEffect, which works
similarly to map but explicitly includes side effects, preserving conciseness
and clarity of intent.

[Code 5-59-2] Cancel Payments (From Code 5-39)

1 await fx(payments)
2 .toAsync()
3 .reject(p => ordersMapById.has(p.store_order_id))
4 .mapEffect(p => PgApi.cancelPayment(p.pg_uid))
5 .forEach(res => console.log(res.message));
This distinction improves modularity and reusability of code,
embodying the functional programming philosophy of separating pure
functions from side effects. Thus, functions like forEach or mapEffect
intentionally allow side effects while making it explicit where they happen.

Filter-Terminate (filter-take)

The filter-terminate pattern uses filter to screen data by condition,
then uses take to select some subset of the data. It’s handy for rapidly
extracting some data from a large dataset where you only need part of it.
Because it only processes as much as needed, it’s efficient, skipping data
after meeting the condition.
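The efficiency claim can be made concrete with a minimal lazy filter/take sketch. The names mirror the book's helpers, but the implementations below are illustrative:

```typescript
// Lazily yields only the elements satisfying f.
function* filter<T>(f: (a: T) => boolean, iterable: Iterable<T>): Generator<T> {
  for (const a of iterable) if (f(a)) yield a;
}

// Yields at most `limit` elements, then stops pulling from the source.
function* take<T>(limit: number, iterable: Iterable<T>): Generator<T> {
  if (limit <= 0) return;
  let taken = 0;
  for (const a of iterable) {
    yield a;
    if (++taken === limit) return; // stop consuming the source here
  }
}

// Count how many elements the predicate actually inspects.
let checked = 0;
const data = [1, 8, 3, 12, 5, 20, 7];
const firstTwoBig = [...take(2, filter(n => (checked++, n > 5), data))];

console.log(firstTwoBig); // [8, 12]
console.log(checked);     // 4 — elements after the second match were never read
```

Because take stops the pipeline as soon as two matches are found, the elements 5, 20, and 7 are never even tested.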

find, some, every

As covered in Section 3.4, functions like find, every, and some use
filter and take to achieve an imperative break logic. We slightly rewrite
3.4’s content here to clarify the filter-take approach.

[Code 5-60] find, some, every

1 const find = <A>(f: (a: A) => boolean, iterable: Iterable<A>) =>
2 pipe(
3 iterable,
4 filter(f),
5 take(1),
6 ([found]) => found as A | undefined
7 );
8
9 const some = <A>(f: (a: A) => boolean, iterable: Iterable<A>) =>
10 pipe(
11 iterable,
12 filter(f),
13 take(1),
14 ([...arr]) => arr.length === 1,
15 );
16
17 const every = <A>(f: (a: A) => boolean, iterable: Iterable<A>) =>
18 pipe(
19 iterable,
20 reject(f),
21 take(1),
22 ([...arr]) => arr.length === 0,
23 );
find returns the first element satisfying the condition, or undefined if
none. We filter by condition, then use take(1), returning the first.
some returns true if at least one element satisfies the condition,
otherwise false. We filter by condition, take the first match, and see if
the result array length is 1.
every returns true if all elements satisfy the condition, otherwise
false. We reject the condition to find items that don’t satisfy it, take
the first mismatch, and see if the array length is zero.

Infinite-Terminate (range-take)

The infinite-terminate pattern is for extracting only the needed amount
from an infinite data stream: use range to produce infinitely increasing
numbers (or data following a rule), then apply take to retrieve only as
many as you want. This approach is especially efficient when the data can
exceed a certain limit. In imperative terms, it resembles while-break.
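That resemblance can be seen directly: an infinite generator is harmless as long as something breaks out of the loop, and take is essentially that break packaged as a reusable function. A minimal sketch with an illustrative generator:

```typescript
// An endless stream of natural numbers; nothing runs until values are pulled.
function* naturals(): Generator<number> {
  for (let n = 1; ; n++) yield n;
}

// Consuming only what is needed — the role take(5) plays, spelled out as break.
const firstFiveSquares: number[] = [];
for (const n of naturals()) {
  firstFiveSquares.push(n * n);
  if (firstFiveSquares.length === 5) break;
}

console.log(firstFiveSquares); // [1, 4, 9, 16, 25]
```

A declarative pipeline would replace the explicit break with take(5), but the termination mechanics underneath are the same.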

Collatz Conjecture

Section 5.2’s Collatz Conjecture example is a classic case of the
infinite-terminate pattern.

[Code 5-61] collatzCount (From 5-26 ~ 5-27)

1 const nextCollatzValue = (num: number) =>
2   num % 2 === 0 // If even
3     ? num / 2
4     : num * 3 + 1;
5
6 const collatzCount = (num: number) => pipe(
7   repeatApply(nextCollatzValue, num), // Infinitely repeat the Collatz operation
8   zip(range(1, Infinity)),            // Combine with an infinite counter via zip
9   find(([, val]) => val === 1),       // Consume until we reach 1
10  collatz => collatz!,
11  head,                               // from [cnt, val], return cnt
12 );
13
14 console.log(collatzCount(4)); // 2
We create infinite iterators with repeatApply, range(1, Infinity),
etc. We then stop at some point using find. Under the hood, find uses
take.
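One plausible implementation of a repeatApply-style helper is sketched below. This is illustrative, not the book's actual code; in particular, the assumption that the seed itself is not yielded is inferred from collatzCount(4) returning 2:

```typescript
// Yields f(acc), f(f(acc)), ... indefinitely; the consumer decides when to stop.
// (Assumption: the seed itself is not yielded, matching collatzCount(4) === 2.)
function* repeatApply<T>(f: (a: T) => T, acc: T): Generator<T> {
  while (true) {
    acc = f(acc);
    yield acc;
  }
}

const nextCollatzValue = (num: number) =>
  num % 2 === 0 ? num / 2 : num * 3 + 1;

// Count steps until the sequence starting from 4 reaches 1: 4 → 2 → 1
let steps = 0;
for (const val of repeatApply(nextCollatzValue, 4)) {
  steps++;
  if (val === 1) break;
}
console.log(steps); // 2
```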

Retrieving Payment History Until Data Appears

Section 5.3’s example—repeatedly requesting pages until no more data
remains—also fits the infinite-terminate pattern. We use range to get
infinite page numbers and takeUntilInclusive to stop under a certain
condition.

[Code 5-62] Stopping with takeUntilInclusive in an Infinite Page Loop (From 5-36)

1 const payments = await
2 fx(range(1, Infinity))
3 .toAsync()
4 .map(page => PgApi.getPayments(page))
5 .takeUntilInclusive(({length}) => length < 3)
6 .flat()
7 .toArray();

Split-Flatten (chunk-flat)

The split-flatten pattern divides data into fixed-size chunks and then
flattens them (flat) into a desired shape. This pattern is useful when you
need to restructure data or process large data in consistent chunks and then
merge the results. For instance, it can be used for an API with request-size
limits or for paging.
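The shape of the pattern can be sketched with plain arrays and a mock request function. getOrders below is a hypothetical stand-in (not the book's StoreDB) that accepts at most five ids per call:

```typescript
// Twelve order ids to fetch through an API limited to 5 ids per request.
const orderIds = Array.from({ length: 12 }, (_, i) => i + 1);

// Mock API: records each request so we can see the batching.
const requested: number[][] = [];
const getOrders = (ids: number[]) => {
  requested.push(ids);
  return ids.map(id => ({ id, status: "paid" }));
};

// chunk(5): split the ids into groups of at most 5.
const groups: number[][] = [];
for (let i = 0; i < orderIds.length; i += 5) {
  groups.push(orderIds.slice(i, i + 5));
}

// map each group through the API, then flatten back into one list.
const orders = groups.map(getOrders).flat();

console.log(requested.length); // 3 — requests of size 5, 5, and 2
console.log(orders.length);    // 12
```

The mock is synchronous for clarity; with a real async API the same chunk-map-flat shape applies, with the map step awaiting each request.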

Applying to an API With Request Limits

Below we use chunk to split requests into groups of five, process each
group via StoreDB.getOrders, and then flatten (flat) to produce a single
array. This approach handles large data effectively while respecting the
limit.
[Code 5-63] Applying a Request-Size Limit (From 5-43)

1 const orders = await
2 fx(payments)
3 .map(p => p.store_order_id)
4 .chunk(5) // split requests into groups of 5
5 .toAsync()
6 .map(StoreDB.getOrders)
7 .flat() // flatten
8 .toArray();

map(StoreDB.getOrders).flat() in [Code 5-63] is also an example
of transform-flatten, explained next.

Transform-Flatten (map-flat)

The transform-flatten pattern transforms data via map and then flattens
it (flat) into a continuous data flow. It helps flatten nested data into a
single level or when each element is transformed into new data and merged
into a single structure.

Merging Comments and Replies Into One

[Code 5-64] merges comments with their replies and uses flat to
flatten them into a single array.

[Code 5-64] Flattening Comments and Replies

1 const comments = [
2 {
3 id: 1, text: "First comment", replies: [
4 { id: 11, text: "Reply 1-1" }
5 ]
6 },
7 {
8 id: 2, text: "Second comment", replies: []
9 },
10 {
11 id: 3, text: "Third comment", replies: [
12 { id: 31, text: "Reply 3-1" },
13 { id: 32, text: "Reply 3-2" }
14 ]
15 }
16 ];
17
18 // Merge each comment and its replies, then flatten into one array
19 fx(comments)
20 .map(({ id, text, replies }) => [{ id, text }, ...replies])
21 .flat()
22 .forEach(console.log);
23
24 // [
25 // { id: 1, text: "First comment" },
26 // { id: 11, text: "Reply 1-1" },
27 // { id: 2, text: "Second comment" },
28 // { id: 3, text: "Third comment" },
29 // { id: 31, text: "Reply 3-1" },
30 // { id: 32, text: "Reply 3-2" }
31 // ]

We can then handle comments and replies on the same level.

Summation After Flattening a Nested Data Structure

[Code 5-65] is from Section 5.1. The pattern of map(f).flat() is used
so frequently that a function like flatMap often exists for convenience.

[Code 5-65] (From 5.1)

1 const totalHighScorers = teams
2   .flatMap(team => team.players) // Flatten the team's 2D structure into players
3   .map(player => player.score)
4   .reduce((a, b) => a + b, 0);
5
6 const totalQuantity = products
7   .flatMap(prd => prd.options)   // Flatten each product's array of options
8   .map(opt => opt.quantity)
9   .reduce((a, b) => a + b, 0);

Combine-Accumulate (zip-reduce)

The combine-accumulate pattern involves combining multiple
iterables via zip, then iterating over them to accumulate (reduce) a final
result.
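Underneath, a zip like the one used in this pattern can be sketched as a generator that pairs elements by index and stops at the shorter input (an illustrative implementation, not the book's):

```typescript
// Pairs elements of two iterables positionally, stopping at the shorter one.
function* zip<A, B>(as: Iterable<A>, bs: Iterable<B>): Generator<[A, B]> {
  const itB = bs[Symbol.iterator]();
  for (const a of as) {
    const next = itB.next();
    if (next.done) return; // the shorter input ends the pairing
    yield [a, next.value];
  }
}

const pairs = [...zip(['a', 'b', 'c'], [1, 2, 3, 4])];
console.log(pairs); // [['a', 1], ['b', 2], ['c', 3]] — the extra 4 is dropped

// The accumulate half of the pattern: reduce the pairs to one value.
const summed = pairs.reduce((acc: number, [, n]) => acc + n, 0);
console.log(summed); // 6
```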
Creating an Object From keys and values

Using zip on two arrays (keys and values) to pair them, then turning
each pair into an object key-value via accumulation.

[Code 5-66] zip(keys, values) (From 5-23)

1 const keys = ['name', 'job', 'location'];
2 const values = ['Marty', 'Programmer', 'New York'];
3
4 const object =
5 fx(zip(keys, values))
6 .map(([key, val]) => ({ [key]: val }))
7 .reduce((a, b) => Object.assign(a, b), {});
8
9 // { name: 'Marty', job: 'Programmer', location: 'New York' }

Assigning Unique IDs to a List

[Code 5-67] uses range(Infinity) and zip to give unique IDs to each
item. toArray traverses the iterable, so effectively it’s a combine-
accumulate pattern as well.

[Code 5-67] Assigning Unique IDs to a List

1 const items = ["Apple", "Banana", "Cherry"];
2
3 const itemsWithIds = pipe(
4 zip(range(Infinity), items),
5 map(([id, item]) => ({ id, item })),
6 toArray
7 );
8
9 console.log(itemsWithIds);
10
11 // [
12 // { id: 0, item: "Apple" },
13 // { id: 1, item: "Banana" },
14 // { id: 2, item: "Cherry" },
15 // ]
Hash-Match (reduce-map)

The hash-match pattern is used in list processing to build or query data
efficiently using a hash (key-value map). It’s used for tasks such as creating
a map of data by keys, grouping, counting, or transforming. indexBy or
groupBy are common examples. Note that both essentially rely on reduce
under the hood.

This pattern is especially helpful when you need quick lookups or want
to restructure data. Building a hash-based structure provides O(n) or even
O(1) lookups for certain tasks, improving performance.
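The observation that indexBy and groupBy are both reduce underneath can be shown directly. These are minimal illustrative implementations, not the book's:

```typescript
// indexBy: reduce into an object keyed by f's return value.
// f must return a unique key; later entries overwrite earlier ones.
const indexBy = <T>(f: (a: T) => string | number, xs: T[]) =>
  xs.reduce((acc, x) => {
    acc[f(x)] = x;
    return acc;
  }, {} as Record<string | number, T>);

// groupBy: reduce into an object of arrays, one array per key.
const groupBy = <T>(f: (a: T) => string | number, xs: T[]) =>
  xs.reduce((acc, x) => {
    const key = f(x);
    if (!acc[key]) acc[key] = []; // start a group on first sight of the key
    acc[key].push(x);
    return acc;
  }, {} as Record<string | number, T[]>);

const users = [
  { id: 1, role: 'admin' },
  { id: 2, role: 'member' },
  { id: 3, role: 'member' },
];

const usersById = indexBy(u => u.id, users);
const usersByRole = groupBy(u => u.role, users);

console.log(usersById[2].role);            // 'member'
console.log(usersByRole['member'].length); // 2
```

Once built, each lookup against the hash is constant-time, which is what makes the matching examples below efficient.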

Matching posts and users

The following example uses indexBy to convert users into a hash
structure, then efficiently match them with posts so that each post is
augmented with its author’s information.

[Code 5-68] Matching posts and users

1 const users = [
2 { id: 1, name: "Alice" },
3 { id: 2, name: "Bob" },
4 ];
5
6 const posts = [
7 { id: 1, title: "FP", user_id: 1 },
8 { id: 2, title: "OOP", user_id: 2 },
9 { id: 3, title: "MPP", user_id: 2 },
10 ];
11
12 const usersById = indexBy(user => user.id, users);
13
14 const postsWithUsers = posts.map(post => ({
15 ...post,
16 user: usersById[post.user_id],
17 }));
18
19 console.log(postsWithUsers);
20 // [
21 // { id: 1, title: "FP", user_id: 1, user: { id: 1, name: "Alice" } },
22 // { id: 2, title: "OOP", user_id: 2, user: { id: 2, name: "Bob" } },
23 // { id: 3, title: "MPP", user_id: 2, user: { id: 2, name: "Bob" } },
24 // ]
indexBy transforms iterable data into a hash structure (object) keyed
by the function’s return. The callback must provide a unique key. In [Code
5-68], we key by user.id and store each user object at that key. This allows
efficient user lookup when merging users into posts.

Matching posts and comments

Here we match posts and comments by linking each post with the
comments that belong to it. We use groupBy to group comments by post_id,
then match them with posts.

[Code 5-69] Matching posts and comments

1 const comments = [
2 { id: 1, text: "Great post!", post_id: 1 },
3 { id: 2, text: "Very informative.", post_id: 1 },
4 { id: 3, text: "Thanks for sharing!", post_id: 2 },
5 ];
6
7 const commentsByPostId = groupBy(comment => comment.post_id, comments);
8
9 const postsWithComments = posts.map(post => ({
10 ...post,
11 comments: commentsByPostId[post.id] || [],
12 }));
13
14 console.log(postsWithComments);
15 // [
16 // {
17 // id: 1, title: "FP", user_id: 1, comments: [
18 // { id: 1, text: "Great post!", post_id: 1 },
19 // { id: 2, text: "Very informative.", post_id: 1 }
20 // ]
21 // },
22 // {
23 // id: 2, title: "OOP", user_id: 2, comments: [
24 // { id: 3, text: "Thanks for sharing!", post_id: 2 }
25 // ]
26 // },
27 // { id: 3, title: "MPP", user_id: 2, comments: [] },
28 // ]

groupBy transforms data into a hash structure keyed by the callback’s
return, grouping all elements that share the same key in an array. Here we
group by comment.post_id and store each comment in
commentsByPostId[post.id]. This is an efficient way to retrieve and attach
comments to each post.

Conceptual Summary of Each Type of List-Processing Function

Finally, let’s categorize list-processing functions along the following
lines. Keeping this in mind, you can design list-processing pipelines
systematically: deciding “when transformation happens,” “where data is
actually consumed,” and “how to compute the final result,” to write more
efficient and readable code:

1. Lazy Intermediate Operations

They delay operations until the results are actually needed, and by
themselves, they do not produce a final result.
Examples: map, filter, zip.

2. Short-Circuit Intermediate Operations

They skip reading further data once a certain condition is met,
thus avoiding unnecessary operations.
Examples: take, takeWhile, takeUntilInclusive.

3. Terminal Operations

They consume the iterable entirely (or conditionally) to produce a
final result (either a single value or a specific element).
Once called, lazy operations are resolved and an actual iteration
occurs.
Examples: find, every, some, reduce.

4. Fold/Reduce

A type of terminal operation that aggregates the entire sequence
into one value.
Examples: reduce, groupBy, indexBy, also Promise.all or
Array.fromAsync are conceptually in this category.
5. Side Effects

These operations modify external state (output, logging, file
writes, etc.). Typically they “do something” with the data and
then end, returning no new data structure.
Representative function: forEach.

The various list-processing patterns introduced in this chapter are
powerful and practical tools for solving many problems. Each is
independently useful, but they can also be combined for flexibility in
tackling complex scenarios. Familiarity with these patterns helps you
predict code behavior and communicate logic clearly while enabling unit
testing, reusability, and a productive, maintainable environment. Ultimately,
list processing offers a great way to handle problems more clearly and with
fewer mistakes.

5.5 Summary
Handling Real-World Data

Functional programming is a powerful tool for enhancing consistency
and predictability by applying the same patterns to diverse data structures.
Clearly separating tasks using functions like filter, map, and reduce
reveals the role of each step and greatly improves readability. By
implementing each data processing step independently, you increase code
reusability and make it easier to modify or add new logic, thus improving
maintainability. Additionally, being able to inspect intermediate data at each
step facilitates debugging and helps design complex logic in a more concise
and clear manner. This approach simultaneously raises code quality and
productivity, enabling intuitive and efficient problem-solving.

Applying to More Problems

Functions like pipe, zip, range, takeWhile, and takeUntilInclusive
play an important role in effectively extending functional programming
concepts to various imperative problems.
pipe is a powerful tool for declaratively expressing workflow by
chaining multiple functions, enhancing both readability and
expressiveness.
zip and range are useful for combining data or generating dynamic
numeric sequences, offering an alternative to imperative patterns such
as loops and indices.
Functions like take, takeWhile, and takeUntilInclusive are more
than simple utilities for extracting a portion of data. They are potent
tools for determining and controlling the execution range of iterative
tasks, efficiently managing iterator consumption according to
conditions, and minimizing unnecessary computations to optimize
performance.

When you effectively leverage and combine a wide range of list
processing functions, you gain a powerful means to solve even complex
problems in a simple and intuitive way.

Backend Asynchronous Programming

Functional programming and list processing can effectively address
complex issues encountered in backend asynchronous programming. They
allow you to structure real-world challenges such as scheduler design, data
inconsistency resolution, efficient data matching, maintaining intervals
between asynchronous tasks, load management, request size limitations,
and parallelism control.

This approach not only solves problems but also provides a powerful
means of writing declarative and clear code. As a result, it increases code
readability and maintainability, optimizes system resource usage, and
significantly boosts development productivity. In modern backend systems,
list processing techniques have become a crucial way to ensure reliability
and scalability.

Patternizing List Processing

List processing is a powerful method for representing complex
problems in a simple, declarative manner. By patternizing, recombining,
and extending frequently used techniques, you can increase code
predictability, facilitate testing, reinforce reusability, and ultimately
maximize productivity in your development environment. List processing
gives you the ability to structure problems and solve them more clearly and
succinctly.
6. Multi-Paradigm Programming
Most modern programming languages have become multi-paradigm
languages. Now, it is possible to take a multi-paradigm approach that
involves choosing and combining various paradigms—such as functional,
object-oriented, and imperative—depending on the situation. Because the
types of problems we encounter are extremely diverse, developers need not
be bound to any single paradigm. This freedom in approach not only
diversifies code style but also offers a practical strategy for solving complex
data structures and a wide range of real-world scenarios with code that
delivers higher readability, maintainability, and scalability.

Such a multi-paradigm perspective goes beyond simply “providing a
variety of features” and fundamentally changes the way we view problems.
The functional paradigm elegantly handles data transformations through
higher-order functions and list processing, the object-oriented paradigm
helps clearly express complex hierarchical structures or states, and the
imperative paradigm aids comprehension with flow control and intuitive
constructs. By appropriately leveraging the strengths of each paradigm in
the right context, the problem-solving process becomes more flexible, and
the code structure naturally aligns with its intended purpose.

In this chapter, we will explore how to strategically combine the
diverse features offered by the language to address challenging problems—
such as complex concurrency control, nested template processing, and
stable backend environment configuration—through elegant and intuitive
code. We will examine how functional, object-oriented, and imperative
paradigms can harmoniously blend to handle a wider range of problems in
an effective and sensible manner.
6.1 Building an HTML Template Engine
Tagged Templates

Tagged Templates are a powerful tool that allow you to use template
literals more flexibly. Unlike regular template literals, Tagged Templates
process both the template strings and the inserted values through a user-
defined function. This lets you manipulate strings or generate specialized
outputs in various ways.

[Code 6-1] Using Tagged Templates

1 function upper(strs: TemplateStringsArray, ...vals: string[]) {
2 console.log(strs); // ["a: ", ", b: ", "."]
3 console.log(vals); // ["a", "b"]
4 return strs[0]
5 + vals[0].toUpperCase()
6 + strs[1]
7 + vals[1].toUpperCase()
8 + strs[2];
9 }
10
11 const a = 'a';
12 const b = 'b';
13
14 const result = upper`a: ${a}, b: ${b}.`;
15 console.log(result); // a: A, b: B.

1. When the expression upper`a: ${a}, b: ${b}.` is evaluated,
the upper function is executed. The function receives the template
literal’s static strings and dynamically inserted values as a string array
(strs) and a values array (vals).

2. The template literal is split at the boundaries between the static
strings and the dynamic values. The values inserted inside ${} go into
vals, and the preceding and following segments of static text go into
strs.

strs: ["a: ", ", b: ", "."]


vals: ["a", "b"]

3. The size of strs is always one element larger than that of vals.
Even if you change the template literal to a: ${a}, b: ${b} and
remove the final ., strs becomes ["a: ", ", b: ", ""].
A template literal always includes the last segment of static text
(which might be empty), so strs.length is always vals.length
+ 1.
Even if you change the template literal to ${a}${b}, strs
becomes ["", "", ""] and vals becomes ["a", "b"], so strs
remains one element longer than vals.

4. You then combine the elements in strs and vals in order, taking
one element at a time from each based on the same index. During this
process, you intercept the value of vals and convert it to uppercase
before including it in the final output string.

The Tagged Templates technique separates the template literal,
enabling flexible string handling. You can use it for various purposes such
as string manipulation, internationalization, or security checks (for instance,
preventing SQL injection or escaping characters to prevent XSS).

Implementing with List Processing

Now let’s build an HTML template engine using Tagged Templates. In
[Code 6-2], we match the lengths of strs (the fixed string array) and vals
(the array of inserted values), then use the zip function to combine the two
arrays and return tuples via an iterator.

[Code 6-2] The html function

1 import { pipe, zip, toArray } from "@fxts/core";
2
3 function html(strs: TemplateStringsArray, ...vals: string[]) {
4   vals.push(''); // Add an empty string to match the length of strs
5   return pipe(
6     zip(strs, vals), // Create a tuple iterator by zipping strs and vals in order
7     toArray          // Evaluate the tuple iterator and return it as an array
8   );
9 }
10
11 const a = 'A',
12 b = 'B',
13 c = 'C';
14
15 const result = html`<b>${a}</b><i>${b}</i><em>${c}</em>`;
16
17 console.log(result);
18 // [["<b>", "A"], ["</b><i>", "B"], ["</i><em>", "C"], ["</em>", ""]]

Next, we add the flat operation to our zip-reduce pattern so that after
flattening the tuples, we can use reduce to accumulate them into a single
string.

[Code 6-3] Implementing with zip-flat-reduce

1 import { pipe, zip, flat, reduce } from "@fxts/core";
2
3 function html(strs: TemplateStringsArray, ...vals: string[]) {
4   vals.push(''); // Add an empty string to match the length of strs
5   return pipe(
6     vals,                    // Utilize currying so it runs as zip(strs)(vals)
7     zip(strs),               // Create a tuple iterator by zipping strs and vals in order
8     flat,                    // Flatten the tuple structure into a single iterator
9     reduce((a, b) => a + b), // Accumulate into a single string
10  );
11 }
12
13 const a = 'A',
14 b = 'B',
15 c = 'C';
16
17 const result = html`<b>${a}</b><i>${b}</i><em>${c}</em>`;
18
19 console.log(result);
20 // <b>A</b><i>B</i><em>C</em>

1. We use zip to combine strs and vals into a tuple iterator of the form
[str, val].
By using the curried version of zip, we do pipe(vals,
zip(strs)) instead of zip(strs, vals).
2. We then use flat to convert the tuple iterator into a single-value
iterator.
A structure like [[str1, val1], [str2, val2]] becomes
[str1, val1, str2, val2].
3. Using reduce, we accumulate all the flattened values into one string.
4. The final result is an HTML string (<b>A</b><i>B</i><em>C</em>).

Replacing push with concat

In [Code 6-3], we used push to match the length of vals to that of
strs. However, as discussed in section 3.4, we can solve this using concat
instead.

Whereas push modifies the existing array, concat returns a lazily
evaluated iterator without modifying the original array, so you can achieve
the same result without side effects. Moreover, because you only traverse
the single appended value when needed, there’s essentially no performance
penalty compared to push. There’s also no extra memory overhead from
creating an entirely new array or copying all primitive values.
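A minimal sketch of such a lazy concat (a plain generator, not the book's library version) shows why there is no copying cost:

```typescript
// Lazily yields elements from each source in turn; no combined array is built.
function* concat<T>(...iterables: Iterable<T>[]): Generator<T> {
  for (const it of iterables) yield* it;
}

const vals = ['A', 'B', 'C'];
const padded = [...concat(vals, [''])]; // elements are produced only here

console.log(padded);      // ['A', 'B', 'C', '']
console.log(vals.length); // 3 — the original array was not modified
```

Until the spread forces evaluation, concat holds only references to its sources, so appending the single padding element costs nothing extra.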

[Code 6-4] Using concat and Arrow Functions for Conciseness

1 import { pipe, zip, flat, reduce, concat } from "@fxts/core";
2
3 // A single-expression html function (arrow function)
4 const html = (strs: TemplateStringsArray, ...vals: string[]) =>
5   pipe(
6     concat(vals, ['']), // Match length with strs using concat instead of push
7     zip(strs),
8     flat,
9     reduce((a, b) => a + b)
10  );
11
12 const a = 'A',
13 b = 'B',
14 c = 'C';
15
16 const result = html`<b>${a}</b><i>${b}</i><em>${c}</em>`;
17 console.log(result);
18 // <b>A</b><i>B</i><em>C</em>

With [Code 6-4], we lazily connect vals and [''] via concat, so we
can match the length of strs without using push. In this particular scenario,
vals is re-created each time, so using push('') wouldn’t really count as a
side effect that alters external state. However, as your code becomes more
complex, or when you need more statements, concat becomes increasingly
useful.
The main advantage of this change isn’t so much about reducing side
effects; it’s that everything is now written as expressions, letting us define
the html function as a concise arrow function. As a result, the code
increasingly takes on a functional programming style, which helps with
reuse and extensibility.

When we express everything as expressions, there’s no chance for
value mutation or references across different statements. As a result,
predictability and stability are enhanced, and it becomes easier to isolate
and test specific expressions. These constraints help build more trustworthy
code.

[Code 6-4a] shows how you can replace concat with a similar
function, append.

[Code 6-4a] Replacing concat with the Similar append

1 import { pipe, zip, flat, reduce, append } from "@fxts/core";
2
3 const html = (strs: TemplateStringsArray, ...vals: string[]) =>
4 pipe(
5 vals,
6 append(''), // Create a new iterator by appending '' to vals
7 zip(strs), // Same as zip(strs, appendedVals)
8 flat,
9 reduce((a, b) => a + b)
10 );

The append function works similarly to concat and also supports lazy
evaluation, generating elements only when needed. Moreover, the intuitive
function name append and its support for currying make the code more
declarative and intuitive, further highlighting the strengths of functional
programming.

Preventing XSS (Cross Site Scripting)

XSS is a technique whereby malicious scripts are inserted into a
webpage to harm other users viewing that page. For example, if user input
is directly inserted into HTML, attackers can run arbitrary JavaScript code
(e.g., via <script> tags). To prevent this, you need to convert (escape)
characters that might be interpreted as HTML syntax into a safe format.
Escaped HTML is treated as plain text by the browser, preventing script
code from executing and thus thwarting XSS attacks.

Below is an example from a file called escapeHtml.ts. It provides a
function escapeHtml that replaces characters with special meaning in
HTML (&, <, >, ", ', `) with corresponding HTML entities (&amp;, &lt;,
&gt;, &quot;, &#x27;, &#x60;). You can safely include user input in HTML
by using this method.

[Code 6-5] escapeHtml.ts

1 const escapeMap = {
2 '&': '&amp;',
3 '<': '&lt;',
4 '>': '&gt;',
5 '"': '&quot;',
6 "'": '&#x27;',
7 '`': '&#x60;',
8 };
9
10 const source = '(?:' + Object.keys(escapeMap).join('|') + ')';
11 const testRegexp = RegExp(source);
12 const replaceRegexp = RegExp(source, 'g');
13
14 function escapeHtml(val: unknown): string {
15 const string = `${val}`;
16 return testRegexp.test(string)
17 ? string.replace(replaceRegexp, (match) => escapeMap[match])
18 : string;
19 }
20
21 export { escapeHtml };

In [Code 6-5], the function constructs a regular expression using
escapeMap to replace special characters with HTML entities. It checks the
input string and performs the conversion only if needed. The returned string
can be safely rendered in HTML without allowing malicious scripts to run,
thus preventing potential XSS attacks.

Below is an example test of escapeHtml.


[Code 6-6] Testing escapeHtml

1 import { escapeHtml } from './escapeHtml';
2
3 console.log(escapeHtml('<script>alert("XSS")</script>'));
4 // &lt;script&gt;alert(&quot;XSS&quot;)&lt;/script&gt;
5
6 console.log(escapeHtml('Hello & Welcome! "Have" a nice day <3'));
7 // Hello &amp; Welcome! &quot;Have&quot; a nice day &lt;3

From the output, you can see that characters like < and > become
entities and are not interpreted as scripts by the browser.

We can prevent XSS attacks in strings generated by the html function
by converting any vals to their escaped representations using escapeHtml.
We can achieve this by applying escapeHtml to each element in vals with a
map function.

[Code 6-7] Applying escapeHtml in html

1 import { pipe, zip, flat, reduce, append, map } from "@fxts/core";
2 import { escapeHtml } from "./escapeHtml";
3
4 const html = (strs: TemplateStringsArray, ...vals: unknown[]) =>
5 pipe(
6 vals,
7 map(escapeHtml), // Apply escapeHtml to each value in vals (XSS
prevention)
8 append(''),
9 zip(strs),
10 flat,
11 reduce((a, b) => a + b)
12 );
13
14 const a = '<script>alert("XSS")</script>';
15 const b = 'Hello & Welcome!';
16
17 console.log(html`
18 <ul>
19 <li>${a}</li>
20 <li>${b}</li>
21 </ul>
22 `);
23 // <ul>
24 // <li>&lt;script&gt;alert(&quot;XSS&quot;)&lt;/script&gt;</li>
25 // <li>Hello &amp; Welcome!</li>
26 // </ul>
From the output, you’ll notice that <script> tags and quotes are
converted to HTML entities. Since the browser treats them as plain text,
malicious scripts do not run and the page is safely rendered.

Note that we changed the type of vals from string[] to unknown[]
because escapeHtml takes an unknown parameter and then converts it to a
string. This means the html function can handle all kinds of values that can
be converted to strings, making it more generic overall.

Enabling Component-Style Development with Nested Data

[Code 6-8] shows an example of calling another component
(menuHtml) inside a single template (html). Splitting code into components
can greatly improve maintainability and reusability. However, in this code,
even the HTML snippet returned by menuHtml is being escaped by the html
function.

[Code 6-8] Problem: All component-separated HTML is escaped

1 type Menu = {
2 name: string;
3 price: number;
4 };
5
6 const menuHtml = ({ name, price }: Menu) => html`<li>${name} (${price})</li>`;
7
8 const menu: Menu = { name: 'Choco Latte & Cookie', price: 8000 };
9
10 const a = '<script>alert("XSS")</script>';
11 const b = 'Hello & Welcome!';
12
13 const result = html`
14 <ul>
15 <li>${a}</li>
16 <li>${b}</li>
17 ${menuHtml(menu)}
18 ${html`<li>${html`<b>3-Level Nesting</b>`}</li>`}
19 </ul>
20 `;
21
22 console.log(result);
23 // Current output:
24 // <ul>
25 // <li>&lt;script&gt;alert(&quot;XSS&quot;)&lt;/script&gt;</li>
26 // <li>Hello &amp; Welcome!</li>
27 // &lt;li&gt;Choco Latte &amp; Cookie (8000)&lt;/li&gt;
28 // &lt;li&gt;&lt;b&gt;3-Level Nesting&lt;/b&gt;&lt;/li&gt;
29 // </ul>
30 //
31 // Desired output (expecting the component-separated HTML to not be escaped):
32 // <ul>
33 // <li>&lt;script&gt;alert(&quot;XSS&quot;)&lt;/script&gt;</li>
34 // <li>Hello &amp; Welcome!</li>
35 // <li>Choco Latte &amp; Cookie (8000)</li>
36 // <li><b>3-Level Nesting</b></li>
37 // </ul>

In this example, menuHtml might have been expected to return
<li>Choco Latte &amp; Cookie (8000)</li>. However, the html
function treats the result returned by menuHtml as a simple string and
applies escaping. Consequently, the intended <li>...</li> tag is
interpreted as text characters, making component-based development
impossible.

This happens because the html function considers every incoming value a potential XSS attack vector and escapes it by default. Under the current structure, there’s no way to specify that “This value has already been safely handled, so there’s no need to escape it.”

How can we keep HTML returned by a component function from being escaped, including cases of multiple nesting like <b>3-Level Nesting</b>? In the next section, we’ll look at an idea to solve this more fundamentally, evolving the html function to better support component-based development.

Solve Structural Issues with OOP, Solve Logic Issues with FP

Up to now, the html function has been a straightforward implementation that takes a string template and returns a string. However, in supporting nested HTML structures and component-based development, we run into two main problems:
1. Hierarchical Structure: Attempting to express nested components in
HTML with simple string concatenation makes it difficult to manage
the resulting output.
2. Selective Escaping: Not all values need escaping; sometimes certain
values should be used as-is without being escaped.

This combination of nested structure and conditional transformation logic is not trivial. Especially when the nesting can run two, three, or more levels deep, we need a recursive approach that traverses data down to its deepest levels. If we try to handle this by hastily adding if statements or while loops, the code quickly becomes unwieldy and difficult to maintain.

So how should we approach this?

First, let’s change our existing html function so that it returns an instance of a class Html.

[Code 6-9] Returning an instance of Html

1 const html = (strs: TemplateStringsArray, ...vals: unknown[]) => new Html(strs, vals);

Now, the html function creates a new Html instance using strs and
vals. That means html no longer returns a simple string; it returns an Html
object.

Next, let’s define the Html class. We’ll keep the functional pipeline
logic—map, append, zip, flat, reduce—from before, but replace the
map(escapeHtml) part with map(val => this.escape(val)). The escape
method checks whether val is an Html instance or just a normal value, and
if needed, processes it recursively so that deeply nested structures also work
correctly.

[Code 6-10] The Html class

1 class Html {
2 constructor(
3 private strs: TemplateStringsArray,
4 private vals: unknown[]
5 ) {}
6
7 private escape(val: unknown) {
8 return val instanceof Html
9 ? val.toHtml() // If val is an Html instance, recursively call toHtml()
10 : escapeHtml(val); // If val is just a normal value, use escapeHtml
11 }
12
13 toHtml() {
14 return pipe(
15 this.vals,
16 map(val => this.escape(val)),
17 append(''),
18 zip(this.strs),
19 flat,
20 reduce((a, b) => a + b)
21 );
22 }
23 }

Here, toHtml() shows the integration of structure (class) and logic (functional). The class (structure) holds data (strs, vals), and the functional pipeline (logic) determines how to escape and combine those strings. Also note how the escape() method calls toHtml() recursively if the value is an Html instance, cleanly handling multiple levels of nesting.

Let’s see how it works in practice.

[Code 6-11] Resolving the nesting issue with html

1 const a = '<script>alert("XSS")</script>';
2 const b = 'Hello & Welcome!';
3 const menu: Menu = { name: 'Choco Latte & Cookie', price: 8000 };
4
5 const result = html`
6 <ul>
7 <li>${a}</li>
8 <li>${b}</li>
9 ${menuHtml(menu)}
10 ${html`<li>${html`<b>3-Level Nesting</b>`}</li>`}
11 </ul>
12 `;
13
14 console.log(result.toHtml());
15 // <ul>
16 // <li>&lt;script&gt;alert(&quot;XSS&quot;)&lt;/script&gt;</li>
17 // <li>Hello &amp; Welcome!</li>
18 // <li>Choco Latte &amp; Cookie (8000)</li>
19 // <li><b>3-Level Nesting</b></li>
20 // </ul>

The output shows that not only is a single nesting level handled, but
multiple nesting—like <b>3-Level Nesting</b>—is also dealt with
correctly through recursion. Structurally, we used an object-oriented
approach via the Html class to clarify the nesting. Logically, the functional
pipeline ensures stable and predictable escaping and string concatenation.

In real-world data structures, objects and arrays are often nested many
layers deep, and you frequently need partial transformations or
combinations. The pattern we applied here can be used for more than just
HTML template engines—think CSV or JSON data processing, or complex
table rendering logic with multiple hierarchy levels.
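As a loose illustration of how the same “structure in a class, logic in pure functions” pattern carries over to another format, here is a hypothetical Csv renderer (the Csv class below is not from the book’s library; the quoting rules follow common CSV conventions):

```typescript
// Hypothetical Csv class applying the same "OOP for structure, FP for logic" idea:
// the class holds the rows (structure), while cell quoting and joining
// are expressed as small pure functions (logic).
class Csv {
  constructor(private rows: unknown[][]) {}

  // Quote a cell only when it contains a comma, quote, or newline,
  // doubling any embedded quotes.
  private static escapeCell(val: unknown): string {
    const s = String(val);
    return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
  }

  toCsv(): string {
    return this.rows
      .map(row => row.map(Csv.escapeCell).join(','))
      .join('\n');
  }
}

const csv = new Csv([
  ['name', 'price'],
  ['Latte & cookie set', 8000],
  ['Say "hello"', 0],
]);
console.log(csv.toCsv());
// name,price
// Latte & cookie set,8000
// "Say ""hello""",0
```

Just as with Html, the class decides what the data is, and the pipeline of map and join decides how it becomes a string.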

Throughout this process, we’ve used the principle “Use OOP for
structural issues, and FP for logic issues.” Complex hierarchical problems
become more manageable when clearly expressed with classes, while
escaping and string combination logic is simpler and more predictable with
functional paradigms. Combining OOP (structure) and FP (logic) in this
way helps solve complicated problems in a straightforward, predictable
manner, and contributes to writing more extensible, maintainable code.

Creating HTML Strings from Arrays

So far, we’ve focused on key-value nested data or single values in the html template engine. However, in practice, arrays often appear as well—such as when you need to render multiple menus at once.

[Code 6-12] attempts to render a list of menus inside a <ul>.

[Code 6-12] Rendering a menu list

1 type Menu = {
2 name: string;
3 price: number;
4 };
5
6 const menuHtml = ({ name, price }: Menu) => html`<li>${name} (${price})</li>`;
7
8 const menus: Menu[] = [
9 { name: 'Americano', price: 4500 },
10 { name: 'Cappuccino', price: 5000 },
11 { name: 'Latte & cookie set', price: 8000 },
12 ];
13
14 const menuBoardHtml = (menus: Menu[]) => html`
15 <div>
16 <h1>Menu list</h1>
17 <ul>
18 ${menus.map(menuHtml).reduce((acc, a) => acc + a.toHtml(), '')}
19 </ul>
20 </div>
21 `;
22
23 console.log(menuBoardHtml(menus).toHtml());
24 // <div>
25 // <h1>Menu list</h1>
26 // <ul>
27 // &lt;li&gt;Americano (4500)&lt;/li&gt;
28 // &lt;li&gt;Cappuccino (5000)&lt;/li&gt;
29 // &lt;li&gt;Latte &amp; cookie set (8000)&lt;/li&gt;
30 // </ul>
31 // </div>

In [Code 6-12], we use .map(menuHtml).reduce((acc, a) => acc + a.toHtml(), '') to generate pure HTML strings like <li>...</li> and then inject them back into the outer html template. The problem is that the html template engine classifies all incoming data as “untrusted,” automatically escaping it.

1. toHtml() results in a simple string

a.toHtml() returns a plain string containing HTML tags.
When you feed that string into the outer html function, the function regards it as ordinary text and escapes < and > again. As a result, <li> becomes &lt;li&gt;, repeatedly.

2. Lack of distinction between “safe data” and raw strings

In the current design, there’s no way to communicate “This is already safe HTML” to the upper-level html function.
The html function simply escapes every input, so any data that’s been converted to a string at some point is treated as a candidate for escaping.

Merely concatenating strings via .reduce() doesn’t help separate “already safe HTML” from “values that still need escaping.” In this approach, there’s no way to preserve or signal an “already safe” status when you bubble up the string.
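The double-escaping described in point 1 is easy to reproduce with a plain escape function (a minimal stand-in for the book’s escapeHtml, not the real implementation):

```typescript
// A minimal stand-in escape function to demonstrate the double-escaping problem.
const escape = (s: string) =>
  s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');

// A component returns already-escaped, safe HTML as a plain string...
const item = `<li>${escape('Latte & cookie')}</li>`; // <li>Latte &amp; cookie</li>

// ...but the outer template cannot tell it apart from raw user input,
// so it escapes the whole thing a second time.
const page = `<ul>${escape(item)}</ul>`;

console.log(page);
// <ul>&lt;li&gt;Latte &amp;amp; cookie&lt;/li&gt;</ul>
```

Once the safe HTML has collapsed into a bare string, no amount of care downstream can recover the “already escaped” status.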

Adding Objects Functionally via +

However, with a little functional insight, we can tackle this problem from a new perspective: rather than concatenating strings, we can accumulate multiple Html instances into a single Html instance. In fact, we caught a glimpse of this idea in [Code 6-11].

[Code 6-12a] shows an example of using reduce to combine Html instances directly, instead of merging them into a single string.

[Code 6-12a] Reducing Html Instances

1 const menuBoardHtml2 = (menus: Menu[]) => html`
2 <div>
3 <h1>Menu list</h1>
4 <ul>
5 ${menus.map(menuHtml).reduce((a, b) => html`${a}${b}`)}
6 </ul>
7 </div>
8 `;
9
10 console.log(menuBoardHtml2(menus).toHtml());
11 // <div>
12 // <h1>Menu list</h1>
13 // <ul>
14 // <li>Americano (4500)</li>
15 // <li>Cappuccino (5000)</li>
16 // <li>Latte &amp; cookie set (8000)</li>
17 // </ul>
18 // </div>

We finally see the <li> tags rendering correctly. By combining Html instances instead of raw strings, we preserve the “already safe HTML” status and cleanly handle nested data and escaping logic.

html`${a}${b}` works like a + b, merging two Html values to produce a new Html. Crucially, we’re not concatenating strings, but combining Html instances in a way that preserves the safety status of already-sanitized data. This is a reminder that any value can be accumulated with reduce if we provide the method or expression that combines those values.
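As a loose illustration of that reduce principle (the Money class here is hypothetical, not from the book), any type can be folded as long as you supply a combining expression:

```typescript
// Any value type can be accumulated with reduce, given a combining expression.
// Here a hypothetical immutable Money type is folded the same way strings or
// Html instances were: reduce only needs to know how two values merge into one.
class Money {
  constructor(public readonly amount: number) {}
  add(other: Money): Money {
    return new Money(this.amount + other.amount);
  }
}

const prices = [new Money(4500), new Money(5000), new Money(8000)];

// The combining expression (a, b) => a.add(b) plays the role of `+`.
const total = prices.reduce((a, b) => a.add(b), new Money(0));

console.log(total.amount); // 17500
```

The html`${a}${b}` expression in [Code 6-12a] is exactly this pattern, with Html playing the role of Money.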

Moving Array Handling into the Class for Convenience

[Code 6-10a] enhances developer convenience by adding a combine method inside the Html class. When an array is encountered, combine internally performs something like reduce((a, b) => html`${a}${b}`, html``), automatically merging the array into a single, safe Html string. Developers can simply pass an array without needing extra logic.

[Code 6-10a] Modifying Html

1 class Html {
2 constructor(
3 private strs: TemplateStringsArray,
4 private vals: unknown[]
5 ) {}
6
7 private combine(vals: unknown) {
8 return Array.isArray(vals)
9 ? vals.reduce((a, b) => html`${a}${b}`, html``)
10 : vals;
11 }
12
13 private escape(val: unknown) {
14 return val instanceof Html
15 ? val.toHtml()
16 : escapeHtml(val);
17 }
18
19 toHtml() {
20 return pipe(
21 this.vals,
22 map(val => this.escape(this.combine(val))),
23 append(''),
24 zip(this.strs),
25 flat,
26 reduce((a, b) => a + b)
27 );
28 }
29 }

With this change, [Code 6-12b] shows that simply passing an array
merges it into an appropriate, safe HTML string. Developers no longer need
to write a separate reduce step. Just pass something like
menus.map(menuHtml), and all elements are automatically combined into
one Html instance.

[Code 6-12b] Passing Arrays to menuBoardHtml

1 const menuBoardHtml = (menus: Menu[]) => html`
2 <div>
3 <h1>Menu list</h1>
4 <ul>
5 ${menus.map(menuHtml)}
6 </ul>
7 </div>
8 `;
9
10 console.log(menuBoardHtml(menus).toHtml());
11 // Result:
12 // <div>
13 // <h1>Menu list</h1>
14 // <ul>
15 // <li>Americano (4500)</li>
16 // <li>Cappuccino (5000)</li>
17 // <li>Latte &amp; cookie set (8000)</li>
18 // </ul>
19 // </div>

Now you can pass array data, and <li>...</li> tags are rendered
correctly in a secure manner. This kind of improvement simplifies the code
while keeping a functional style that naturally handles various data
structures.

Abstracting into a Higher-Order Function

If you refactor the internal logic of toHtml into a higher-order function, you’ll achieve much greater reusability and flexibility. This is a concise and effective solution when you can modify behavior with a single function.

[Code 6-13] illustrates this strategy by extracting the pipeline logic from toHtml into a new higher-order function called fillTemplate, which takes a transformation function from the outside.

Now both the toHtml method and the earlier upper function can rely
on fillTemplate.
[Code 6-13] fillTemplate

1 function fillTemplate<T>(
2 strs: TemplateStringsArray,
3 vals: T[],
4 transform: (val: T) => string
5 ) {
6 return pipe(
7 vals,
8 map(transform),
9 append(''),
10 zip(strs),
11 flat,
12 reduce((a, b) => a + b)
13 );
14 }
15
16 class Html {
17 constructor(
18 private strs: TemplateStringsArray,
19 private vals: unknown[]
20 ) {}
21
22 private combine(vals: unknown) {
23 return Array.isArray(vals)
24 ? vals.reduce((a, b) => html`${a}${b}`, html``)
25 : vals;
26 }
27
28 private escape(val: unknown) {
29 return val instanceof Html
30 ? val.toHtml()
31 : escapeHtml(val);
32 }
33
34 toHtml() {
35 return fillTemplate(
36 this.strs,
37 this.vals,
38 val => this.escape(this.combine(val)),
39 );
40 }
41 }
42
43 const html = (strs: TemplateStringsArray, ...vals: unknown[]) =>
44 new Html(strs, vals);
45
46 function upper(strs: TemplateStringsArray, ...vals: string[]) {
47 return fillTemplate(
48 strs,
49 vals,
50 val => val.toUpperCase()
51 );
52 }

Previously, toHtml explicitly handled the pipe, map, zip, and reduce calls. Now that logic is tucked away in fillTemplate, so in the Html class you only need to define the transform function. For instance, you combine nested Html objects in combine, then escape them in escape, and pass that to fillTemplate as val => this.escape(this.combine(val)).

By leveraging higher-order functions in functional programming, you can recompose and reuse logic with minimal effort. You can easily create a variety of template functions—whether for HTML rendering or uppercase conversion—just by supplying different transform functions to fillTemplate.

Also, the Html class example demonstrates how recursion-based structure and functional pipelines can work together, showing that object orientation and functional programming can be complementary. Through this multi-paradigm approach, you can elegantly solve challenges like nested data structures or complex string transformations, providing flexible and scalable solutions for various scenarios.

[Code 6-14] shows an example of using html and upper implemented with fillTemplate.

[Code 6-14] Using html and upper with fillTemplate

1 const menuBoardHtml = (menus: Menu[]) => html`
2 <div>
3 <h1>Menu list</h1>
4 <ul>
5 ${menus.map(menuHtml)}
6 </ul>
7 </div>
8 `;
9
10 console.log(menuBoardHtml(menus).toHtml());
11 // Result:
12 // <div>
13 // <h1>Menu list</h1>
14 // <ul>
15 // <li>Americano (4500)</li>
16 // <li>Cappuccino (5000)</li>
17 // <li>Latte &amp; cookie set (8000)</li>
18 // </ul>
19 // </div>
20
21 const a = 'a';
22 const b = 'b';
23
24 console.log(
25 upper`a: ${a}, b: ${b}.`
26 );
27 // a: A, b: B.

From the outputs of both menuBoardHtml and upper, you can see that
our higher-order function-based abstraction works smoothly.

Building a Small Front-End Development Library

Having built an HTML template engine, let’s look at a simple front-end development library that uses it, briefly touching on a Web API (vanilla JS) approach. [Code 6-15] shows a View class that uses our Html engine. It demonstrates object-oriented patterns such as generics for data type control, an abstract method for defining templates, a clear render() call to create DOM elements, and more.

[Code 6-15] The View class

1 abstract class View<T> {
2 private _element: HTMLElement | null = null;
3
4 constructor(public data: T) {}
5
6 element() {
7 if (this._element === null) {
8 throw new Error("You must call render() before accessing the element.");
9 } else {
10 return this._element;
11 }
12 }
13
14 abstract template(): Html;
15
16 render(): HTMLElement {
17 const wrapEl = document.createElement('div');
18 wrapEl.innerHTML = this.template().toHtml();
19 this._element = wrapEl.children[0] as HTMLElement;
20 this._element.classList.add(this.constructor.name);
21 this.onRender();
22 return this._element;
23 }
24
25 protected onRender() {}
26 }

Here’s what’s going on in the View class:

1. Generics for Data Type Specification

The class is declared abstract class View<T>, using a type parameter T. This lets you specify the data type the view handles, ensuring type safety whenever you use this.data.

2. Template Method Pattern

abstract template(): Html; is an abstract method. Concrete subclasses of View must implement this method to define the HTML structure for rendering. Our Html engine converts that structure into a string, which is then turned into actual DOM elements in the browser.

3. Rendering Logic

render() calls this.template() to get an Html instance, converts it to a string via toHtml(), and injects that string into a temporary div (wrapEl).
It then assigns the first child element of wrapEl to _element, making it accessible via element().
classList.add(this.constructor.name) adds the constructor name to the class attribute.
onRender() is a hook method for any post-render actions, which subclasses can override.

4. Error Handling

If you call element() before render(), it throws an error. This enforces a clear contract about the rendering order and handles unexpected states gracefully.

Together, this View class forms a basic foundation that unifies generics
for data, template definition, an explicit rendering process, optional post-
render logic, and simple error handling.

We’ll discuss the rationale behind making onRender() protected, along with details about public, private, and protected access modifiers, in section 7.1. Refer to that section for a clearer understanding of design intentions and practical usage of access modifiers.

Below is a minimal app that uses the Html template engine and the
View class.

[Code 6-16] Example: UserView

1 type User = {
2 name: string;
3 age: number;
4 };
5
6 class UserView extends View<User> {
7 template(): Html {
8 return html`
9 <div>
10 ${this.data.name} (${this.data.age})
11 <button>x</button>
12 </div>
13 `;
14 }
15
16 protected override onRender() {
17 this.element()
18 .querySelector('button')!
19 .addEventListener('click', () => this.remove());
20 }
21
22 private remove() {
23 this.element().remove();
24 alert(`Removed ${this.data.name}`);
25 }
26 }
27
28 const users: User[] = [
29 { name: 'Marty', age: 40 },
30 { name: 'Jenna', age: 34 },
31 { name: 'Ethan', age: 31 },
32 ];
33
34 console.log(
35 new UserView(users[0]).render().outerHTML
36 );
37 // <div class="UserView">
38 // Marty (40)
39 // <button>x</button>
40 // </div>
41
42 users
43 .map(user => new UserView(user))
44 .map(view => view.render())
45 .forEach(element => document.body.append(element));
46
47 console.log(document.body.innerHTML);
48 // <div class="UserView">
49 // Marty (40)
50 // <button>x</button>
51 // </div>
52 // <div class="UserView">
53 // Jenna (34)
54 // <button>x</button>
55 // </div>
56 // <div class="UserView">
57 // Ethan (31)
58 // <button>x</button>
59 // </div>

In [Code 6-16], we have three users—Marty, Jenna, Ethan. We create UserView instances with their data, log the HTML string to the console, and then display the elements on screen with map, forEach, and render(). Each element has an x button that, when clicked, removes the DOM element and shows an alert indicating which user was removed.

This code demonstrates an object-oriented design. The View class uses Html to generate the HTML string, then manages the creation of DOM elements and triggers developer-implemented post-render logic (onRender()). Through element(), each view can access the underlying HTMLElement from the Web API to run methods like remove(), and it can display alert messages based on its own data.

Throughout section 6.1, we’ve seen how small functions and classes
can be combined to solve problems step by step, ultimately forming a
cohesive solution. This incremental approach—building from simple
building blocks—leads to efficient, extensible results.

The Opportunities Offered by Multi-Paradigm Languages

The approaches we covered in this chapter exemplify the opportunities provided by multi-paradigm languages. Modern mainstream languages support functional, object-oriented, and imperative paradigms together, enabling us to flexibly choose or mix paradigms for specific problems. This is more than mere syntactic convenience; it enriches your entire problem-solving strategy.

If we had tried to implement everything in this chapter under a single paradigm, it would have been much harder to handle complex requirements like nested structures or transformation logic. In contrast, by leveraging the advantages of a multi-paradigm language, we can tackle structural complexity with OOP and handle transformation logic with FP, resulting in cleaner, more predictable code. Such an approach empowers you to handle more problems effectively and lays a foundation for flexible strategies that aren’t bound to a single paradigm.

The features offered by modern languages—such as classes, first-class functions, iterables, and sets of list-processing functions—are more than just conveniences. They carry patterns, concepts, features, and mindsets that have evolved over decades in object-oriented, functional, and imperative programming traditions. By combining these legacies appropriately, we can do more than implement fleeting library or framework-specific hacks. We can apply proven, time-tested solutions at a fundamental level. In turn, our code moves beyond “it works” and stands on solid, well-established programming wisdom—a valid argument for writing “good code.”

By richly using these linguistic and paradigmatic foundations, multi-paradigm approaches deliver stable, scalable solutions for diverse challenges and serve as a robust basis for future development.

6.2 Handling Concurrency with a Multi-Paradigm Approach

This section explores how to effectively handle concurrency using a
multi-paradigm approach. In section 4.2, we discussed the implementation
of a function named executeWithLimit, which takes an array of functions
that return Promises and a limit value, then groups and executes them
sequentially in units of that limit. We also looked at how that logic was
restructured using a functional programming paradigm.

Here, we will create a new function, runTasksWithPool, which has slightly different requirements. Instead of simply chunking tasks by a certain size (limit) and running them in sequence, we want to maintain a specified number of concurrent tasks (the “pool size”) and continue running tasks as any individual task completes. We will first let ChatGPT implement runTasksWithPool in an imperative style, then reconstruct it using a multi-paradigm approach to ensure safe and clear concurrency handling.

Revisiting executeWithLimit

Before implementing runTasksWithPool, let’s briefly review the executeWithLimit function from section 4.2. This function controls concurrency load by batching functions into groups of size limit and running each group with Promise.all.

[Code 6-17] Imperative executeWithLimit

1 async function executeWithLimit<T>(
2 fs: (() => Promise<T>)[],
3 limit: number
4 ): Promise<T[]> {
5 const results: T[] = [];
6
7 for (let i = 0; i < fs.length; i += limit) {
8 const batchPromises = [];
9 for (let j = 0; j < limit && (i + j) < fs.length; j++) {
10 batchPromises.push(fs[i + j]());
11 }
12
13 const batchResults = await Promise.all(batchPromises);
14 results.push(...batchResults);
15 }
16
17 return results;
18 }

[Code 6-17] shows a traditional imperative implementation of executeWithLimit. The functions are broken into chunks of size limit and run sequentially, so you can control the number of Promises processed at a time.

In section 4.2, we reconstructed this logic using functional programming. Below is that functional implementation.

[Code 6-18] Functional executeWithLimit

1 const executeWithLimit = <T>(fs: (() => Promise<T>)[], limit: number): Promise<T[]> =>
2 fx(fs)
3 .map(f => f())
4 .chunk(limit)
5 .map(ps => Promise.all(ps))
6 .to(fromAsync)
7 .then(arr => arr.flat());

Here, fx returns an FxIterable defined for the purposes of the book.

Next, [Code 6-19] shows how we can achieve the same logic even
more concisely using the FxTS library’s fx function.

[Code 6-19] Functional executeWithLimit Using FxTS

1 import { fx } from "@fxts/core";
2
3 const executeWithLimit = <T>(fs: (() => Promise<T>)[], limit: number): Promise<T[]> =>
4 fx(fs)
5 .toAsync()
6 .map(f => f())
7 .concurrent(limit)
8 .toArray();

All these examples work as intended and showcase different ways to control concurrency load. Now, let’s look at runTasksWithPool, which can maintain a specified number of tasks running at once and keep adding new tasks as others finish. Just like before, we’ll first see ChatGPT’s imperative-style implementation, then transition to a multi-paradigm approach for a clearer, more extensible solution.

runTasksWithPool Implemented Imperatively by ChatGPT

While executeWithLimit simply runs asynchronous tasks in fixed chunks of size limit, the new function, runTasksWithPool, involves more advanced state management. We want tasks to run concurrently in a “pool” of size poolSize, adding a new task whenever one completes. We requested ChatGPT to implement this function.

The Question to ChatGPT

Please make a function that accepts fs and maintains exactly poolSize concurrent tasks. If one task finishes, another task should start in its place.

async function runTasksWithPool<T>(fs: (() => Promise<T>)[], poolSize: number): Promise<T[]> {}

ChatGPT’s Answer

Below is an implementation of runTasksWithPool following the given function signature. It takes an array of asynchronous functions fs and ensures that only poolSize tasks run at the same time. When a task finishes, a waiting task is started.

[Code 6-20] ChatGPT’s runTasksWithPool

1 async function runTasksWithPool<T>(
2 fs: (() => Promise<T>)[],
3 poolSize: number
4 ): Promise<T[]> {
5 const results: T[] = [];
6 const activePromises: Promise<void>[] = [];
7
8 for (let i = 0; i < fs.length; i++) {
9 const taskFactory = fs[i];
10 // Execute each task, store its result by index,
11 // and remove it from activePromises when done
12 const p = taskFactory()
13 .then((fetchedValue) => {
14 // Assign the result to the results array
15 results[i] = fetchedValue;
16 })
17 .then(() => {
18 // Remove the completed promise from activePromises
19 const removeIndex = activePromises.indexOf(p);
20 if (removeIndex > -1) {
21 activePromises.splice(removeIndex, 1);
22 }
23 });
24
25 // Add the current task to the list of active promises
26 activePromises.push(p);
27
28 // If we’ve hit the pool size limit, wait for one task to finish
29 if (activePromises.length >= poolSize) {
30 // Wait for whichever task finishes first to reduce the number of active tasks
31 await Promise.race(activePromises);
32 }
33 }
34
35 // After the loop, wait for any remaining tasks to finish, then return results
36 await Promise.all(activePromises);
37
38 return results;
39 }

Here’s how this code works:

1. results array
Stores the Promise results from each element in fs, indexed so that
results are managed in the original order regardless of execution
timing.

2. activePromises array
Tracks currently running Promises. When a task finishes, that promise
is removed from the array.

3. Loop-based task execution

Inside the for loop, each task (taskFactory) is called, and the result is stored in results[i] via a then chain.
A subsequent then removes the associated promise from activePromises.
All asynchronous logic is handled inline, without a separate runTask function.

4. Concurrent execution control

Every time a new task is added to activePromises, if the array length reaches poolSize, we call Promise.race(activePromises) and wait for one task to finish, thus ensuring that no more than poolSize tasks are running simultaneously.
5. Waiting for completion
After exiting the for loop, some tasks may still be running.
Promise.all(activePromises) ensures these remaining tasks finish
before returning the final results.
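Since both steps 4 and 5 hinge on how Promise.race and Promise.all settle, here is a small self-contained sketch of that behavior (the delay helper and timings are illustrative only):

```typescript
// Promise.race settles with whichever input promise settles first,
// which is exactly what lets the pool free a slot as soon as any task finishes.
// Promise.all waits for every input, which drains the remaining tasks at the end.
const delay = (ms: number, value: string): Promise<string> =>
  new Promise(resolve => setTimeout(() => resolve(value), ms));

async function demo() {
  const fast = delay(10, 'fast');
  const slow = delay(100, 'slow');

  const first = await Promise.race([fast, slow]); // resolves after ~10ms
  console.log(first); // fast

  const all = await Promise.all([fast, slow]); // waits for both tasks
  console.log(all); // [ 'fast', 'slow' ]
}

demo();
```

Note that Promise.all preserves the order of its inputs regardless of completion timing, which is why the results array above stays in the original order.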

[Code 6-21] tests the runTasksWithPool function. The console logs let
you check the intended timing of events.

[Code 6-21] Checking Its Operation

1 function createAsyncTask(name: string, ms: number): () => Promise<string> {
2 return () =>
3 new Promise(resolve => {
4 console.log(`Started: ${name}`);
5 setTimeout(() => {
6 console.log(`Finished: ${name}`);
7 resolve(name);
8 }, ms);
9 });
10 }
11
12 const tasks = [
13 createAsyncTask("A", 1000),
14 createAsyncTask("B", 500),
15 createAsyncTask("C", 800),
16 createAsyncTask("D", 300),
17 createAsyncTask("E", 1200),
18 ];
19
20 const poolSize = 2;
21 const results = await runTasksWithPool(tasks, poolSize);
22 console.log("Results:", results);

Thoughts on ChatGPT’s Answer

While ChatGPT’s runTasksWithPool function does fulfill our requirements, it can be quite challenging to read and track. As tasks are repeatedly added and removed in order to maintain up to poolSize concurrent tasks, the interplay of states (finished tasks vs. queued tasks) becomes complicated. Understanding and maintaining such code can be difficult.

runTasksWithPool Implemented via a Multi-Paradigm Approach

Now let’s see how to solve this problem in a multi-paradigm way for
more clarity. We want to express a complex concurrency control mechanism
in a way that is easier to understand and maintain.

[Code 6-22] shows a runTasksWithPool function implemented with only the language’s built-in features—no additional libraries or complex helper functions. Using array methods (map, findIndex, splice) and a simple custom class (TaskRunner), we can write clear concurrency control logic in a fairly imperative style without losing readability.

[Code 6-22] runTasksWithPool in a Multi-Paradigm Style

1 class TaskRunner<T> {
2 private _promise: Promise<T> | null = null;
3 private _isDone = false;
4 get promise() { return this._promise ?? this.run(); }
5 get isDone() { return this._isDone; }
6
7 constructor(private f: () => Promise<T>) {}
8
9 async run() {
10 if (this._promise) {
11 return this._promise;
12 } else {
13 return this._promise = this.f().then(res => {
14 this._isDone = true;
15 return res;
16 });
17 }
18 }
19 }
20
21 async function runTasksWithPool<T>(
22 fs: (() => Promise<T>)[],
23 poolSize: number
24 ): Promise<T[]> {
25 const tasks = fs.map(f => new TaskRunner(f));
26
27 let pool: TaskRunner<T>[] = [];
28 for (const nextTask of tasks) {
29 // Add tasks to the pool until we reach poolSize
30 pool.push(nextTask);
31 if (pool.length < poolSize) continue;
32 // Start the tasks in the current pool and wait for one to finish
33 await Promise.race(pool.map(task => task.run()));
34 // Remove the completed task
35 pool.splice(pool.findIndex(task => task.isDone), 1);
36 }
37
38 // Collect any results that have already completed
39 return Promise.all(tasks.map(task => task.promise));
40 }

This example demonstrates how a multi-paradigm approach can implement concurrency limits in a clean, intuitive way. It blends OOP structure (the TaskRunner class), array methods (map, findIndex, splice), and imperative constructs (for...of loop, await) naturally.

1. Introducing a TaskRunner class

Each asynchronous task is wrapped in a TaskRunner that neatly manages its Promise state and completion status (isDone).
TaskRunner stores promise and isDone, making it easy for external code to check each task’s status.
The run() method clearly separates task startup logic, ensuring consistent creation of Promises and updates to the completion state.
This encapsulation makes state management more straightforward and improves readability.

2. Using array methods (map, findIndex, splice)

map(task => task.run()) allows object-oriented TaskRunner instances to be orchestrated in a functional style.
findIndex and splice provide a clean imperative method to remove completed tasks from the pool.
By combining array methods properly, the logic to manage the pool (adding tasks or removing completed ones) remains simple and clear.

3. Imperative control flow (for...of, await)

A for...of loop is used to add tasks to the pool one by one, waiting for a task to finish with await Promise.race(...) once the pool is full.
This flow makes the timing of “when to start the next task” and “when to remove the finished task” quite obvious.
While we could consider a functional approach using operators like take or reduce, we chose not to here. Mixing an object-oriented pattern with a functional approach, while simplifying the control flow imperatively, can often increase readability and maintainability.
Sometimes it’s simpler and more appropriate to mix in imperative code. Adopting a flexible stance—picking or omitting paradigms as needed—helps keep code both maintainable and extensible.

This example blends object-oriented (classes), functional (higher-order functions and iteration), and imperative (loops and control statements) paradigms smoothly. By using the TaskRunner class to clearly manage Promise state, array-based functional methods to declaratively express transformations, and imperative loops for a straightforward flow, we’ve elegantly solved a complex asynchronous concurrency issue. This shows how a multi-paradigm approach can maximize code quality and development efficiency by choosing the thinking and syntax best suited to each part of the problem.
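For reference, a TaskRunner along the lines described above can be sketched as follows. This is a minimal illustration of the role the text describes, not the book's exact implementation (which appears earlier): it memoizes the started Promise and flips isDone when the task settles.

```typescript
// Minimal TaskRunner sketch (an assumption; the book's version appears earlier).
class TaskRunner<T> {
  private _promise: Promise<T> | null = null;
  private _isDone = false;

  constructor(private readonly f: () => Promise<T>) {}

  get isDone(): boolean {
    return this._isDone;
  }

  get promise(): Promise<T> {
    if (this._promise === null) throw new Error("Task has not been started.");
    return this._promise;
  }

  // Starts the task once; repeated calls return the same Promise.
  run(): Promise<T> {
    if (this._promise === null) {
      this._promise = this.f().then(result => {
        this._isDone = true; // mark completion so the pool can evict this task
        return result;
      });
    }
    return this._promise;
  }
}
```

Because run() memoizes its Promise, an expression like Promise.race(pool.map(task => task.run())) can be evaluated repeatedly without restarting tasks that are already in flight.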

Just for fun, here’s a more “over-the-top” one-liner:

[Code 6-23] An Overly Concise Expression

1 // await Promise.race(pool.map(task => task.run()));
2 // pool.splice(pool.findIndex(task => task.isDone), 1);
3
4 pool.splice(await Promise.race(pool.map((task, i) => task.run().then(() => i))), 1);
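The one-liner works because each racing Promise resolves with its own pool index, so Promise.race yields the index of the first task to finish, which feeds straight into splice. A standalone illustration of the race-with-index idea (the function name and timings here are invented for illustration):

```typescript
// Each promise resolves with its own index; the fastest one wins the race.
async function firstFinishedIndex(delaysMs: number[]): Promise<number> {
  return Promise.race(
    delaysMs.map(
      (ms, i) => new Promise<number>(resolve => setTimeout(() => resolve(i), ms))
    )
  );
}

firstFinishedIndex([30, 10, 20]).then(i => console.log(i)); // 1: the 10 ms task finishes first
```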

Making the Concurrency (Load) Size Dynamically Adjustable

How would we adapt this to dynamically change the concurrency load size? In that scenario, a class-based architecture can be more advantageous than a single function, because classes centralize both state and logic, making dynamic concurrency adjustments or resource reallocation clearer and more intuitive.

Below is a TaskPool class that adapts the core logic of runTasksWithPool.

[Code 6-24] TaskPool in a Multi-Paradigm Style

1 class TaskPool<T> {
2 private readonly tasks: TaskRunner<T>[];
3 private readonly pool: TaskRunner<T>[] = [];
4 public poolSize: number;
5
6 constructor(fs: (() => Promise<T>)[], poolSize: number) {
7 this.tasks = fs.map(f => new TaskRunner(f));
8 this.poolSize = poolSize;
9 }
10
11 setPoolSize(poolSize: number) {
12 this.poolSize = poolSize;
13 }
14
15 private canExpandPool() {
16 return this.pool.length < this.poolSize;
17 }
18
19 async runAll() {
20 const { pool, tasks } = this;
21
22 let i = 0;
23 const { length } = tasks;
24 while (i < length) {
25 const nextTask = tasks[i];
26 pool.push(nextTask);
27 const isNotLast = ++i < length;
28 if (isNotLast && this.canExpandPool()) continue;
29 await Promise.race(pool.map(task => task.run()));
30 pool.splice(pool.findIndex(task => task.isDone), 1);
31 }
32
33 return Promise.all(tasks.map(task => task.promise));
34 }
35 }

[Code 6-24] reveals a well-structured multi-paradigm concurrency solution. The TaskPool class has the following features:

1. Class-based state management

TaskPool encapsulates tasks, pool, and poolSize in one place.
This makes it straightforward to handle new requirements like dynamically changing concurrency or adding additional logic later on.

2. Flexible logic based on state changes

We use isNotLast to continue only if it’s not the last item.
If we’ve reached the last item, we don’t call continue, so even if poolSize changes, no further pool expansion happens once we finish adding tasks.
canExpandPool() checks whether the pool can accept new tasks based on the pool’s current state, making the intent clear. Within the class, state is managed consistently, so you can implement dynamic concurrency or resource reallocation logic all in one spot.
The setPoolSize() method allows external code to update the concurrency limit on the fly.

3. Clear, combined approach using imperative, OOP, and functional paradigms

Imperative flow (while loops, await, Promise.race), OOP state management (TaskPool class and TaskRunner instances), and some functional array methods (map, findIndex, splice) work together seamlessly to keep the code both concise and easy to read.

With this setup, you can freely adjust poolSize by calling setPoolSize() according to external resources or real-time decisions.

Note that [Code 6-24] didn’t start out as a class from day one—it
began as a simple runTasksWithPool function. Only when new
requirements like setPoolSize() came along did we introduce a class to
handle this complexity more gracefully. Starting small with a simple
function and incrementally moving to classes or higher-level abstractions as
needed can help avoid overly complex design upfront, improving both team
productivity and code quality.
Handling Infinite, Ongoing Tasks

If you need to throttle the load of a potentially infinite loop of tasks using TaskPool, you can rework the internal logic to accept an iterator of tasks, leveraging the lazy nature of iterators to support infinite tasks.

[Code 6-25] TaskPool Supporting Iterable Iterators

1 function* map<A, B>(
2 f: (value: A) => B, iterable: Iterable<A>
3 ): IterableIterator<B> {
4 for (const value of iterable) {
5 yield f(value);
6 }
7 }
8
9 class TaskPool<T> {
10 private readonly taskIterator: IterableIterator<TaskRunner<T>>;
11 private readonly pool: TaskRunner<T>[] = [];
12 public poolSize: number;
13
14 // (1) Change (() => Promise<T>)[] to Iterable<() => Promise<T>>
15 constructor(fs: Iterable<() => Promise<T>>, poolSize: number) {
16 this.taskIterator = map(f => new TaskRunner(f), fs); // (2) Use the iterable map
17 this.poolSize = poolSize;
18 }
19
20 setPoolSize(poolSize: number) {
21 this.poolSize = poolSize;
22 }
23
24 private canExpandPool() {
25 return this.pool.length < this.poolSize;
26 }
27
28 async runAll() {
29 const { pool, taskIterator } = this;
30 const tasks: TaskRunner<T>[] = [];
31
32 while (true) { // (3) Modify loop approach
33 const { done, value: nextTask } = taskIterator.next();
34 if (!done) {
35 pool.push(nextTask);
36 tasks.push(nextTask);
37 if (this.canExpandPool()) continue;
38 }
39 if (done && pool.length === 0) break;
40 await Promise.race(pool.map(task => task.run()));
41 pool.splice(pool.findIndex(task => task.isDone), 1);
42 }
43
44 return Promise.all(tasks.map(task => task.promise));
45 }
46 }

Let’s summarize the main changes from [Code 6-24] to [Code 6-25]:

1. Changing fs type

Before: constructor(fs: (() => Promise<T>)[], poolSize: number)
After: constructor(fs: Iterable<() => Promise<T>>, poolSize: number)
To support infinite tasks, fs is no longer a fixed array but an Iterable or IterableIterator.

2. Modifying how this.tasks is initialized

Before: this.tasks = fs.map(f => new TaskRunner(f))
After: this.taskIterator = map(f => new TaskRunner(f), fs)
The map generator function takes the iterable fs and yields TaskRunner instances, creating an iterable iterator.

3. Changing the runAll method

We now call taskIterator.next() to fetch tasks one by one.
Each fetched nextTask is added to both pool and tasks.
When next() returns done === true (no more tasks to fetch) and the pool is empty, the loop ends.
With an infinite iterator, the loop theoretically never ends unless you break it based on other conditions.

In [Code 6-24], we processed a fixed array of tasks (type (() => Promise<T>)[]) until all were completed. In [Code 6-25], we use an iterable (Iterable<() => Promise<T>>) in a while(true) loop, fetching new tasks via taskIterator.next(). When there are no more tasks (done === true), we wait until any still-running tasks in the pool are finished, then exit. Thus, TaskPool can flexibly handle an unbounded stream of tasks while controlling concurrency load.

Below is a conceptual example that shows how you might throttle the load of a continuous web-crawling scenario. This uses an infinite iterator of pages.

[Code 6-26] Conceptual Example with an Infinite Iterator

1 import { map, range, delay } from "@fxts/core";
2
3 async function crawling(page: number) {
4 console.log(`Starting analysis of page ${page}`);
5 await delay(5_000);
6 console.log(`Finished saving page ${page}`);
7 return page;
8 }
9
10 void new TaskPool(
11 map(page => () => crawling(page), range(Infinity)),
12 5
13 ).runAll();

This example can iterate over infinitely many pages but caps the
crawling concurrency at 5 tasks. That way, it can continuously gather data
without overloading resources. By combining classes, imperative style,
functional style, and iterators, we gain both flexibility and maintainability
in the code.
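The range(Infinity) call above is safe only because iterators are lazy. A minimal range generator (sketched here for illustration; the example imports the real one from @fxts/core) makes this concrete: values are produced one at a time, only when next() is called.

```typescript
// A lazy range: yields 0, 1, 2, ... up to (but not including) end.
function* range(end: number): IterableIterator<number> {
  for (let i = 0; i < end; i++) yield i;
}

// range(Infinity) does not loop forever; it only computes values on demand.
const pages = range(Infinity);
console.log(pages.next().value); // 0
console.log(pages.next().value); // 1
```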

runAllSettled

Finally, let’s wrap up by adding a runAllSettled() method similar to Promise.allSettled(). It will wait for all tasks to complete, returning an array describing success/failure for each one. Meanwhile, TaskPool continues controlling the load using poolSize, ensuring tasks run concurrently without overwhelming resources.

[Code 6-27] Adding runAllSettled

1 class TaskPool<T> {
2 private readonly taskIterator: IterableIterator<TaskRunner<T>>;
3 private readonly pool: TaskRunner<T>[] = [];
4 public poolSize: number;
5
6 constructor(fs: Iterable<() => Promise<T>>, poolSize: number) {
7 this.taskIterator = map(f => new TaskRunner(f), fs);
8 this.poolSize = poolSize;
9 }
10
11 setPoolSize(poolSize: number) {
12 this.poolSize = poolSize;
13 }
14
15 private canExpandPool() {
16 return this.pool.length < this.poolSize;
17 }
18
19 private async run(errorHandle: (err: unknown) => unknown) {
20 const { pool, taskIterator } = this;
21 const tasks: TaskRunner<T>[] = [];
22
23 while (true) {
24 const { done, value: nextTask } = taskIterator.next();
25 if (!done) {
26 pool.push(nextTask);
27 tasks.push(nextTask);
28 if (this.canExpandPool()) continue;
29 }
30 if (done && pool.length === 0) break;
31 await Promise.race(pool.map(task => task.run())).catch(errorHandle);
32 pool.splice(pool.findIndex(task => task.isDone), 1);
33 }
34
35 return tasks.map(task => task.promise);
36 }
37
38 async runAll() {
39 return Promise.all(await this.run(err => Promise.reject(err)));
40 }
41
42 async runAllSettled() {
43 return Promise.allSettled(await this.run(() => undefined));
44 }
45 }

In [Code 6-25], we handled all logic directly in runAll(). Now, we’ve factored the shared logic into a run() method, and let runAll() and runAllSettled() call it.

1. Extracting shared logic

run() manages the core logic: adding tasks to the pool, limiting
concurrency to poolSize, etc.

2. runAll() behavior
runAll() runs tasks to completion with Promise.all, failing
immediately if any individual task fails—mimicking the default
Promise.all behavior.

3. runAllSettled() behavior

runAllSettled() uses Promise.allSettled, collecting results from all tasks (both successful and failed). Any failures do not stop ongoing tasks.

We also updated error handling:

1. Helper function and catch in run()

run() now takes an errorHandle: (err: unknown) => unknown callback.
This lets us switch up the error handling strategy from the outside.
We use .catch(errorHandle) on Promise.race(pool.map(task => task.run())).

2. Error handling in runAll()

runAll() passes err => Promise.reject(err) to errorHandle.
If a task throws an error within Promise.race(...), we reject the entire promise chain.
This ensures runAll() matches Promise.all in failing immediately when any one task fails.

3. Error handling in runAllSettled()

runAllSettled() passes () => undefined, effectively ignoring or hiding errors so that Promise.race(...) doesn’t fail when any task’s run() throws.
Tasks keep running to the end, and we only gather results (success/failure) through Promise.allSettled() at the finish.

Thus, the TaskPool class can easily toggle between “fail fast” (runAll()) and “run everything even if some fail” (runAllSettled()), simply by choosing which method to call.
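The two modes mirror the built-in combinators directly; a tiny self-contained comparison of the behaviors being replicated:

```typescript
// Promise.all fails fast on the first rejection; Promise.allSettled reports
// every outcome. TaskPool's runAll()/runAllSettled() mirror this split.
const ok = Promise.resolve("ok");
const bad = Promise.reject(new Error("no!"));

Promise.all([ok, bad]).catch(e => console.log(e.message)); // "no!" — fails fast

Promise.allSettled([ok, bad]).then(results =>
  console.log(results.map(r => r.status)) // [ 'fulfilled', 'rejected' ]
);
```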

Below is some example code:

[Code 6-28] runAllTest, runAllSettledTest

1 const tasks = [
2 createAsyncTask("A", 1000),
3 () => createAsyncTask("B", 500)().then(() => Promise.reject('no!')),
4 createAsyncTask("C", 800),
5 createAsyncTask("D", 300),
6 createAsyncTask("E", 1200),
7 ];
8
9 async function runAllTest() {
10 try {
11 const result = await new TaskPool(tasks, 2).runAll();
12 console.log(result); // This line will not execute
13 } catch (e) {
14 // If any task fails, control flows here
15 console.log(e); // "no!"
16 }
17 }
18
19 await runAllTest();
20
21 async function runAllSettledTest() {
22 const result = await new TaskPool(tasks, 2).runAllSettled();
23 console.log(result);
24 // [
25 // {status: "fulfilled", value: "A"},
26 // {status: "rejected", reason: "no!"},
27 // {status: "fulfilled", value: "C"},
28 // {status: "fulfilled", value: "D"},
29 // {status: "fulfilled", value: "E"}
30 // ]
31 }
32
33 await runAllSettledTest();
34
35 async function runAllTest2() {
36 try {
37 const task = (page: number) => () =>
38 page === 7
39 ? Promise.reject(page)
40 : crawling(page);
41
42 await new TaskPool(map(task, range(Infinity)), 5).runAll();
43 } catch (e) {
44 // If any task fails, we stop infinite processing and end up here
45 console.log(`Crawling failed at some point! (Page ${e})`);
46 // e.g. "Crawling failed at some point! (Page 7)"
47 }
48 }
49
50 await runAllTest2();
51
52 await delay(10_000);
53 console.log('------------');
54
55 async function runAllSettledTest2() {
56 const task = (page: number) => () =>
57 page === 7
58 ? Promise.reject(page)
59 : crawling(page);
60
61 const taskPool = new TaskPool(map(task, range(Infinity)), 5);
62
63 // Even if a task fails midway, keep going until the infinite queue ends
64 void taskPool.runAllSettled();
65
66 // After 10 seconds, change poolSize from 5 to 10
67 setTimeout(() => {
68 taskPool.setPoolSize(10);
69 }, 10_000);
70 }
71
72 void runAllSettledTest2();
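The tests above rely on a createAsyncTask helper defined earlier in the book. As used here, it appears to return a thunk that resolves with the given name after a delay; a minimal sketch consistent with that usage (an assumption, not the book's definition) might be:

```typescript
// Hypothetical helper matching the usage above: createAsyncTask("A", 1000)
// returns a () => Promise<string> that resolves with "A" after 1000 ms.
function createAsyncTask(name: string, ms: number): () => Promise<string> {
  return () =>
    new Promise<string>(resolve => setTimeout(() => resolve(name), ms));
}

createAsyncTask("A", 10)().then(v => console.log(v)); // "A"
```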

TaskPool is a fine example of combining object-oriented, functional, and imperative paradigms:

1. Managing state like poolSize and receiving external commands is best done in an object-oriented style.
2. Using for...of, while, if, await, and continue can sometimes be simpler and clearer than forcing a purely functional flow.
3. Functional, declarative methods and values—like map, findIndex, Promise, all, race—are incorporated to keep the code concise and readable.
4. Iterators provide structural extensibility to handle infinite iterations safely.

Notably, we used a function-passing strategy (errorHandle) to implement runAllSettled(). This approach enabled us to faithfully replicate Promise.all and Promise.allSettled behaviors and manage errors flexibly. Fully leveraging language capabilities grants powerful, adaptable solutions.
In summary, the examples in this chapter demonstrate how mixing
paradigms can simplify problems that would otherwise become overly
complicated if restricted to a purely functional or stateless design. A purely
functional approach might demand complex abstractions to track state,
while a purely imperative approach might struggle with handling state
changes and external interactions cleanly. In complex scenarios, adopting a
multi-paradigm strategy proves highly practical—allowing us to combine
paradigms even within a single function, applying one paradigm to certain
parts and another to others, to achieve productivity, maintainability,
readability, and extensibility all at once.

6.3 Summary
Solve Structural Issues with OOP, Solve Logic Issues with FP

When dealing with complex or nested data and hierarchical structures, you can leverage the object-oriented paradigm to establish a clear structure. Conversely, for pure logic tasks like data transformations or list processing, the functional paradigm lets you implement them predictably and reliably. This division of labor greatly improves code readability and maintenance.

Boldly Choose the Right Paradigm for Each Problem

It’s far more efficient to flexibly combine object-oriented, functional, and imperative paradigms as needed rather than forcing a single paradigm onto complex problems. By choosing whichever paradigm naturally fits your problem and making the most of that paradigm’s strengths, you can neatly handle intricate requirements.

Record State in Objects and Treat Them as Values

The object-oriented paradigm clarifies concerns and models domain concepts in an intuitive manner. Consequently, your code structure becomes more logical and predictable. Classes and objects bundle data (state) and the behaviors that alter that data (methods) into a single abstraction, systematically managing changes and hiding details that other parts of the system don’t need to worry about.

Such an object-oriented design succinctly expresses complex state changes, improves maintainability, and strengthens code clarity and reusability. For example, classes like TaskRunner and TaskPool that handle concurrency control demonstrate how retaining state in objects and managing it in a straightforward way secures a clear code flow. Furthermore, combining this object-oriented approach with function-based transformation logic naturally merges pure, predictable data processing with structured, comprehensible state management—ultimately leading to stable and readable code.

Notify Others of Changes and Communicate

Separated objects can be designed to notify the outside world of any changes in their state via events. This allows other objects or logic to detect and properly react to these changes, making it easier to build a well-structured system that adapts as situations evolve. Taking advantage of first-class functions, iterators, generators, and other features found in a multi-paradigm language lets you implement such interactions even more elegantly. As a result, you can express complex component interactions in a clear and easily understandable way.
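As a sketch of this idea (an illustration, not code from this book): an object can keep a list of listener functions and invoke them whenever its state changes, letting other parts of the system react without being coupled to the object's internals.

```typescript
// A minimal observable toggle: listeners are plain first-class functions.
class ObservableToggle {
  private listeners: ((on: boolean) => void)[] = [];
  private on = false;

  get isOn(): boolean {
    return this.on;
  }

  // External code registers interest in state changes.
  subscribe(listener: (on: boolean) => void): void {
    this.listeners.push(listener);
  }

  // Changing the state notifies every registered listener.
  setOn(on: boolean): void {
    this.on = on;
    this.listeners.forEach(listener => listener(on));
  }
}

const toggle = new ObservableToggle();
toggle.subscribe(on => console.log(`switch is now ${on ? "ON" : "OFF"}`));
toggle.setOn(true); // switch is now ON
```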

7. Object-Oriented Front-End Development and Multi-Paradigm Approaches in Practice

Modern client-side development often makes extensive use of reactive
libraries that handle UI rendering and updates on the developer’s behalf.
These libraries allow you to write UI code in a declarative manner,
minimize direct manipulation of the DOM or browser technologies, and
enable large portions of UI programming using only JavaScript’s built-in
objects. Clearly, these features help improve development convenience and
productivity.
Nevertheless, creating real-time editing tools that provide rich
functionality and a high-quality user experience (UX)—such as Google
Calendar, Figma, or spreadsheet applications—still requires object-oriented
programming skills. In building such applications, developers often need to
work directly with UI elements via an OOP-based SDK, manage system
resources or platform capabilities in a cohesive structure, and code with a
firm understanding of how these components interrelate. Moreover, object-
oriented approaches are highly beneficial when it comes to modularizing
and managing extensive feature sets. OOP offers a robust conceptual
framework—encapsulation, abstraction, inheritance, polymorphism—plus a
body of proven design patterns and a rich history of SDKs that have been
well tested in industry for a long time.

These object-oriented programming patterns and designs are not limited to web browser environments. They are applied consistently in client-side programming for iOS, Android, macOS, Windows, and other operating systems and platforms. Native features on most of these platforms are largely designed around the object-oriented paradigm, aiming for high usability in software development. Once you master these foundational technologies, you can write code in a more fundamental way without relying on a particular library or framework, and you can broaden your programming scope far more easily.

In recent years, front-end development has expanded beyond simply making web pages work, stretching into areas such as Node.js–based server programming, file systems, cloud platforms like AWS, and even DevOps. Many of these SDKs and development platforms are also built upon object-oriented foundations. Consequently, gaining experience with the object-oriented Web API provided by browsers makes it much easier to expand to other platforms and technology stacks, providing a springboard for creating more sophisticated software.

In this chapter, we will implement a simple application using TypeScript and the Web API, building on these ideas of platform SDK–based programming and object-oriented programming. We will also incorporate multi-paradigm approaches and applications into our OOP design and implementation to deepen our understanding in a hands-on way.
What we mean by platform SDK–based programming is a
development methodology in which the developer directly calls system-
level APIs and uses the natively provided libraries—essentially focusing on
more fundamental technologies. This approach is similar, for instance, to
the iOS or Android native development model, which operates on top of the
official SDK provided by the operating system. The philosophy behind
these SDKs and patterns emphasizes explicitly controlling how and when a
UI should change, rather than automatically managing UI rendering. Such
an approach is advantageous for performance-focused applications,
allowing you to leverage performance gains to deliver a higher level of user
experience.

In the end, solid foundational skills lead to outstanding versatility. By employing this method, you can absorb a broader technical spectrum and lay a strong foundation for expanding your work into other areas.

7.1 Building a Settings App


The Web API is a fundamental tool for programming software running
in web browser environments. For instance, the Web API includes DOM
manipulation, the Fetch API, and more, which you can study in detail via
the WHATWG Standards documents (https://spec.whatwg.org/) or the
MDN Web Docs (https://developer.mozilla.org/docs/Web/API).

In this chapter, we’ll create a simple application (a settings app featuring ON/OFF toggle functionality for Wi-Fi, Bluetooth, etc.) based on the Web API. Many of the Web APIs we’ll utilize are already designed in an object-oriented manner. Modern web applications have relied heavily on a variety of libraries for mapping data to views, managing state, and providing reactive features. These libraries promote code modularity and facilitate a component-based approach to program design.

Recently, however, ES6 modules, advanced bundling tools, and the widespread adoption of static typing via TypeScript have made it possible to implement a sufficiently object-oriented and extensible design on top of well-designed Web API interfaces without a separate library. By leveraging long-established and validated object-oriented patterns and ideas from the broader field of software development, you can write more robust and maintainable code.

Let’s build a simple settings application, step by step, using the Web
API, and explore object-oriented design and assembly methods based on
this approach.

Information on Examples and Video Lectures

For more details on setting up the environment and using the example
code, refer to the “Installing the Environment and Example Code”
section in the front of the book.
On the GitHub page (https://github.com/marpple/multi-paradigm-
programming), we continually update various materials—including
video lecture information and example code corresponding to the
content of this book.
The rune-ts html function and View used in these examples are an
extension of the html function and View class we implemented in
Chapter 6. They maintain the same role while supporting the
development of simple frontend applications based on the Web API in
an object-oriented design style. This allows convenient learning,
application, and practice of object-oriented programming design and
implementation.

SwitchView

Using the extended html function and View class from rune-ts—based
on the HTML template engine and View class we implemented in Chapter 6
—we’ll take a look at a simple switch UI component called SwitchView.
Because SwitchView follows the View conventions, calling its render()
method returns the actual DOM element. This makes it easy to insert the
rendered <button> element into the screen using
document.querySelector('#body')!.append().

[Code 7-1] defines SwitchView and renders it. Depending on the on property of the data object, it conditionally applies class="on" to the <button> element to visually represent whether the switch is in the ON state.

[Code 7-1] Rendering SwitchView

1 import { html, View } from "rune-ts";
2
3 class SwitchView extends View<{ on: boolean }> {
4 override template() {
5 return html`
6 <button class="${this.data.on ? 'on' : ''}">
7 <span class="toggle"></span>
8 </button>
9 `;
10 }
11 }
12
13 export function main() {
14 console.log(
15 new SwitchView({ on: true }).toHtml()
16 );
17 // <button class="SwitchView on">
18 // <span class="toggle"></span>
19 // </button>
20
21 document.querySelector('#body')!.append(
22 new SwitchView({ on: false }).render()
23 );
24 }

When you run the code, the console prints an HTML string that
includes on in the class attribute, while the screen displays a button
representing the switch in the OFF state (no on class).

Next, let’s make the switch toggle ON/OFF when clicking the button.

[Code 7-2] Click on SwitchView

1 class SwitchView extends View<{ on: boolean }> {
2 override template() {
3 return html`
4 <button class="${this.data.on ? 'on' : ''}">
5 <span class="toggle"></span>
6 </button>
7 `;
8 }
9
10 protected override onRender() {
11 this.element().addEventListener('click', () =>
12 this.setOn(!this.data.on) // the opposite of the current state
13 );
14 }
15
16 setOn(bool: boolean) {
17 this.data.on = bool; // update data state
18 this.element().classList.toggle('on', bool); // toggle the class to reflect UI changes
19 }
20 }

In this example, within the onRender() method, we access the <button> element via this.element() and register a click event listener. When it’s clicked, we call this.setOn(!this.data.on) to flip the current switch state. The setOn(bool: boolean) method updates the internal this.data.on state and toggles the 'on' class on the DOM element to instantly reflect the UI change. Now, each button click toggles the visual appearance of the switch between ON and OFF.

public and protected

Notice that the setOn(bool: boolean) method is declared as public. The public access modifier means that code external to the class can freely call this method. This allows you to change the state of a SwitchView whenever needed in your program logic. Making the method public allows it to easily interact with other modules, parent containers, or external state management logic.

By contrast, the onRender() method is declared protected. This access modifier allows access from within the class or its subclasses that extend it, but not from outside the class instance. This achieves benefits like:

1. A method designed for extending class internals

onRender() is intended to be used by the View class (and any subclasses) for tasks like UI initialization and event binding. By allowing it to be overridden or extended through inheritance while preventing direct external access, we clarify that this method is part of the internal view rendering process.
2. Preventing external calls

onRender() is a lifecycle hook method automatically called during the view rendering process, controlled by internal logic in the View class. If it were public, external code could arbitrarily call it, potentially causing unexpected UI changes or state issues. Declaring it as protected means external code cannot directly call it, preserving the consistency and stability of the rendering process.

In summary, the difference between public and protected is as follows: a public method can be freely called from outside, making it suitable for interaction with external logic. Meanwhile, protected methods can only be accessed from within the class or subclasses, making them suitable for lifecycle hooks that should remain internal. This clarifies that onRender() is part of the view rendering process, and prevents external code from inappropriately interfering with that process.
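As a compile-time illustration of this distinction (a generic sketch, not the rune-ts View class itself):

```typescript
class BaseView {
  // public (the default): callable from anywhere.
  setOn(on: boolean): void {
    console.log(`state set to ${on}`);
  }

  // protected: visible only inside BaseView and its subclasses.
  protected onRender(): void {
    console.log("rendering");
  }
}

class ChildView extends BaseView {
  refresh(): void {
    this.onRender(); // OK: subclasses may call protected members
  }
}

const view = new ChildView();
view.setOn(true);   // OK: public
view.refresh();     // OK: indirectly triggers the protected hook
// view.onRender(); // Compile error: 'onRender' is protected
```

Note that protected is enforced by the TypeScript compiler, not at runtime; it documents and guards intent rather than physically hiding the method.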

SettingItemView

SettingItemView is a view for displaying a single settings item (such as Wi-Fi), based on the Setting interface, which has a title and an on state. This component shows text for the item name and a SwitchView that visually indicates the item’s state.

[Code 7-3] Rendering SettingItemView

1 class SwitchView extends View<{ on: boolean }> {
2 // ... omitted ...
3 }
4
5 interface Setting {
6 title: string;
7 on: boolean;
8 }
9
10 class SettingItemView extends View<Setting> { // this.data is of type Setting
11 override template() {
12 return html`
13 <div>
14 <span class="title">${this.data.title}</span>
15 ${new SwitchView(this.data)}
16 </div>
17 `;
18 }
19 }
20
21 export function main() {
22 const setting = { title: 'Wi-Fi', on: false };
23
24 document.querySelector('#body')!.append(
25 new SettingItemView(setting).render()
26 );
27 }

In the template() method of SettingItemView, we display the setting item’s name via this.data.title and directly pass this.data to SwitchView to render the switch state based on the on value. This results in a coherent UI component that displays both the item’s title and switch state.

By creating a setting object and passing it to a new SettingItemView instance in the main() function, we can render that settings item onto the actual DOM. This provides the user with a Wi-Fi setting entry and a switch they can easily inspect and toggle.

SettingListView

SettingListView is a view that renders multiple SettingItemView components in list form, based on an array of Setting objects (Setting[]).

[Code 7-4] Rendering SettingListView

1 class SwitchView extends View<{ on: boolean }> {
2 // ... omitted ...
3 }
4
5 interface Setting {
6 title: string;
7 on: boolean;
8 }
9
10 class SettingItemView extends View<Setting> {
11 override template() {
12 return html`
13 <div>
14 <span class="title">${this.data.title}</span>
15 ${new SwitchView(this.data)}
16 </div>
17 `;
18 }
19 }
20
21 class SettingListView extends View<Setting[]> {
22 override template() {
23 return html`
24 <div>
25 ${this.data.map(setting => new SettingItemView(setting))}
26 </div>
27 `;
28 }
29 }
30
31 export function main() {
32 const settings: Setting[] = [
33 { title: 'Wi-Fi', on: false },
34 { title: 'Bluetooth', on: true },
35 { title: 'Sound', on: false },
36 ];
37
38 document.querySelector('#body')!.append(
39 new SettingListView(settings).render()
40 );
41 }

In the template() method of SettingListView, we call map() on this.data to create a new SettingItemView instance for each Setting object, which we then place directly into the template.

This way, SettingListView can display multiple settings items (like Wi-Fi, Bluetooth, Sound) at once, and each item uses SettingItemView and SwitchView to provide an independently structured UI. By instantiating and rendering a SettingListView, we can easily implement an intuitive and modular set of settings items.

Each SwitchView within SettingItemView toggles on click, allowing the user to easily switch each setting on or off.

SettingPage

[Code 7-5] defines a view component called SettingPage. It includes a header at the top (containing a title and a global switch view) and a main body area (which holds a list of multiple setting items), thus structuring the overall layout of the settings page.
[Code 7-5] Rendering SettingPage

1 class SettingPage extends View<Setting[]> {
2 override template() {
3 return html`
4 <div>
5 <div class="header">
6 <h2>Setting</h2>
7 ${new SwitchView({ on: false })}
8 </div>
9 <div class="body">
10 ${new SettingListView(this.data)}
11 </div>
12 </div>
13 `;
14 }
15 }
16
17 export function main() {
18 const settings: Setting[] = [
19 { title: 'Wi-Fi', on: false },
20 { title: 'Bluetooth', on: true },
21 { title: 'Sound', on: false },
22 ];
23
24 document.querySelector('#body')!.append(
25 new SettingPage(settings).render()
26 );
27 }

In the template() method of SettingPage, we create a SettingListView based on this.data and show it in the body area. We also add a separate SwitchView in the header.

When we instantiate SettingPage with settings in main(), it


displays all the setting items (Wi-Fi, Bluetooth, Sound) at once, and
clicking on each item’s switch toggles its state.

Adding a Global Toggle Feature

We want the header SwitchView to toggle all switches in the body area
ON or OFF at once when clicked.
Implementing a toggleAll Method

First, we define a toggleAll method so that it updates all the SwitchViews displayed in the body at once.

[Code 7-6] toggleAll

class SettingItemView extends View<Setting> {
  switchView = new SwitchView(this.data); // (3)

  override template() {
    return html`
      <div>
        <span class="title">${this.data.title}</span>
        ${this.switchView} <!-- (3) -->
      </div>
    `;
  }
}

class SettingListView extends View<Setting[]> {
  itemViews = this.data.map(setting => new SettingItemView(setting)); // (2)

  override template() {
    return html`
      <div>
        ${this.itemViews} <!-- (2) -->
      </div>
    `;
  }
}

class SettingPage extends View<Setting[]> {
  listView = new SettingListView(this.data); // (1)

  override template() {
    return html`
      <div>
        <div class="header">
          <h2>Setting</h2>
          ${new SwitchView({ on: false })}
        </div>
        <div class="body">
          ${this.listView} <!-- (1) -->
        </div>
      </div>
    `;
  }

  toggleAll(bool: boolean) {
    this.listView.itemViews.forEach(itemView => itemView.switchView.setOn(bool));
  }
}

[Code 7-6] reworks the structure so that when SettingPage calls toggleAll(bool: boolean), it can collectively update the state of all SwitchViews in the body (SettingListView).

1. SettingPage holds the listView instance

When SettingPage is created, it uses this.data to create a SettingListView instance (new SettingListView(this.data)) and stores it in this.listView. In the template, that instance is rendered with ${this.listView}. This allows easy access to listView from toggleAll.

2. SettingListView holds an itemViews array

SettingListView creates a SettingItemView instance for each Setting in this.data and stores these in an itemViews array. The template uses ${this.itemViews} for rendering. This allows SettingPage to directly access each SettingItemView via listView.itemViews.

3. SettingItemView maintains a switchView property

SettingItemView creates a new SwitchView(this.data) from its this.data and stores it in this.switchView. The template references ${this.switchView} for rendering. Thus, each itemView’s switchView can be directly controlled from outside via itemView.switchView.

With these changes, in SettingPage’s toggleAll(bool: boolean) method, iterating over the this.listView.itemViews array and calling each itemView’s switchView.setOn(bool) toggles all switches in the body.

You can make this logic more efficient by using filter within
toggleAll.
[Code 7-6a] Using filter

toggleAll(bool: boolean) {
  this.listView.itemViews
    .filter(itemView => itemView.data.on !== bool)
    .forEach(itemView => itemView.switchView.setOn(bool));
}

In [Code 7-6a], toggleAll(bool: boolean) uses filter to skip items that already have the desired state (bool), applying setOn(bool) only to those requiring a change. This reduces unnecessary updates and makes overall processing more efficient.
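To see this filtering in isolation, here is a small standalone sketch of the same pattern applied to plain data. The Item type and the standalone toggleAll function below are illustrative stand-ins, not part of the app’s code:

```typescript
// Apply a target state only where it differs, mirroring the
// filter-then-forEach pattern used in toggleAll.
type Item = { title: string; on: boolean };

function toggleAll(items: Item[], bool: boolean): Item[] {
  // Items already in the desired state are skipped entirely.
  const changed = items.filter(item => item.on !== bool);
  changed.forEach(item => { item.on = bool; });
  return changed; // the items that actually needed an update
}

const items: Item[] = [
  { title: 'Wi-Fi', on: false },
  { title: 'Bluetooth', on: true },
  { title: 'Sound', on: false },
];

const updated = toggleAll(items, true);
console.log(updated.map(i => i.title)); // [ 'Wi-Fi', 'Sound' ]
console.log(items.every(i => i.on));    // true
```

Only Wi-Fi and Sound are touched; Bluetooth was already on, so it receives no update at all.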

[Code 7-7] Trying It Out

export function main() {
  const settings: Setting[] = [
    { title: 'Wi-Fi', on: false },
    { title: 'Bluetooth', on: true },
    { title: 'Sound', on: false },
  ];

  const settingPage = new SettingPage(settings);
  window.settingPage = settingPage; // temporarily store it on window for console access

  document.querySelector('#body')!.append(
    settingPage.render()
  );
}

By assigning the SettingPage instance to window.settingPage in [Code 7-7], you can run commands like settingPage.toggleAll(true) in the browser console to test toggling all switches ON or OFF. This will turn all SwitchViews ON if you type settingPage.toggleAll(true), and OFF if you type settingPage.toggleAll(false).

Inter-Component Communication via Custom Events

Instead of manually adding a click event in the header’s SwitchView to call toggleAll, we can design the SwitchView itself to dispatch a custom toggled event whenever its state changes.

Thus, the SwitchView will handle its own state change notifications in a structure that’s more fitting and clarifies the event communication design.

Communicating with SwitchView via Events

The following example uses the Web API’s custom events so that
SwitchView dispatches a toggled event each time its state changes.

[Code 7-8] Emitting a toggled Custom Event in SwitchView

class SwitchView extends View<{ on: boolean }> {
  override template() {
    return html`
      <button class="${this.data.on ? 'on' : ''}">
        <span class="toggle"></span>
      </button>
    `;
  }

  protected override onRender() {
    this.element().addEventListener('click', () => this.setOn(!this.data.on));
  }

  setOn(bool: boolean) {
    this.data.on = bool;
    this.element().classList.toggle('on', bool);
    // Create and dispatch a custom event
    const event = new CustomEvent('toggled', { bubbles: true, detail: this.data });
    this.element().dispatchEvent(event);
  }
}

In [Code 7-8], SwitchView updates its on state internally when clicked, and after doing so, dispatches a toggled custom event externally. This lets any external component or parent view that uses SwitchView detect this event and trigger logic such as the toggleAll method.

The setOn(bool: boolean) method updates this.data.on and toggles the 'on' class on the DOM element to show the new state visually.
Immediately after the state change, we create an event and call this.element().dispatchEvent(event).
The bubbles: true option allows the event to propagate up to parent DOM elements, making it easy for higher-level views or pages to catch the toggled event. The detail field contains the current this.data state so that the event handler can conveniently access the latest switch state.

This makes SwitchView a more robust UI component that not only displays a switch, but also clearly notifies external code of its state changes via events.

[Code 7-9] shows how SettingPage listens for the toggled event, prints the header’s switch state to the console using detail, and then calls this.toggleAll(bool).

[Code 7-9] Listening for toggled Events in SettingPage

class SettingPage extends View<Setting[]> {
  listView = new SettingListView(this.data);
  toggleAllView = new SwitchView({ on: false });

  override template() {
    return html`
      <div>
        <div class="header">
          <h2>Setting</h2>
          ${this.toggleAllView}
        </div>
        <div class="body">
          ${this.listView}
        </div>
      </div>
    `;
  }

  protected override onRender() {
    this.toggleAllView.element().addEventListener('toggled', (e) => {
      const bool = e.detail.on;
      console.log('header:', bool);
      // If you turn the header SwitchView on => header: true, off => false
      this.toggleAll(bool);
      // The state of all SwitchViews in the list is changed
    });
  }

  toggleAll(bool: boolean) {
    this.listView.itemViews
      .filter(itemView => itemView.data.on !== bool)
      .forEach(itemView => itemView.switchView.setOn(bool));
  }
}

Here, when the header’s toggleAllView changes (dispatching the toggled event), we call toggleAll(bool) to update the state of all SwitchViews in the body. Thus, toggling the header’s SwitchView ON or OFF controls the state of all list items at once, achieving the intended, intuitive “global” settings control.

Since TypeScript does not automatically infer types for custom events,
you need an explicit type assertion, as follows:

[Code 7-9a] CustomEvent Assertion

protected override onRender() {
  this.toggleAllView.element().addEventListener('toggled', (e) => {
    const customEvent = e as CustomEvent<{ on: boolean }>;
    const bool = customEvent.detail.on;
    console.log('header:', bool);
    this.toggleAll(bool);
  });
}

In [Code 7-9a], const customEvent = e as CustomEvent<{ on: boolean }>; asserts that the passed event (e) is a CustomEvent with detail of type { on: boolean }. The IDE and compiler will then treat detail.on as a boolean, allowing you to handle the value reliably. We’ll later introduce helper methods to handle custom events more conveniently and type-safely.

Synchronizing the Global Toggle State with the Body

Our current implementation allows toggling all items via the header’s
SwitchView. But if all items are currently ON and we manually turn OFF
just one item in the body, we’d like the header’s SwitchView to reflect that
OFF state automatically.

[Code 7-10] attempts to handle this requirement.
[Code 7-10] syncToggleAllView

class SettingPage extends View<Setting[]> {
  listView = new SettingListView(this.data);
  toggleAllView = new SwitchView({ on: false });

  override template() {
    return html`
      <div>
        <div class="header">
          <h2>Setting</h2>
          ${this.toggleAllView}
        </div>
        <div class="body">
          ${this.listView}
        </div>
      </div>
    `;
  }

  protected override onRender() {
    this.toggleAllView.element().addEventListener('toggled', (e) => {
      const customEvent = e as CustomEvent<{ on: boolean }>;
      const bool = customEvent.detail.on;
      console.log('header:', bool); // (1)
      this.toggleAll(bool);
    });

    this.listView.element().addEventListener('toggled', () => {
      this.syncToggleAllView();
    });
  }

  toggleAll(bool: boolean) {
    this.listView.itemViews
      .filter(itemView => itemView.data.on !== bool)
      .forEach(itemView => itemView.switchView.setOn(bool));
  }

  syncToggleAllView() {
    const bool = this.listView.itemViews.every(itemView => itemView.data.on);
    console.log('body:', bool); // true if every Setting in listView is on
    this.toggleAllView.setOn(bool);
  }
}

This code tries to keep both sides in sync: when the header SwitchView changes, all items in the body are updated, and conversely, when an item in the body changes, the header SwitchView is updated. But in practice, it doesn’t behave as intended and can exhibit very complex or unpredictable patterns.

For instance, (1) console.log('header:', bool); might run multiple times, the behavior might differ depending on which SwitchView you click, and overall it’s hard to grasp where or how the code loops back.

The core problem is that both the body’s and the header’s SwitchView
fire a toggled event whenever their state changes, influencing each other in
something close to a loop. When you click a body SwitchView,
syncToggleAllView runs, which changes the header SwitchView
(toggleAllView). This triggers a toggled event on the header SwitchView
again, calling toggleAll, which changes the body’s SwitchViews—leading
to more toggled events, and so on.

Strictly speaking, this forms a near-infinite loop. In practice, .filter(itemView => itemView.data.on !== bool) eventually stops the cycle, resulting in weird or fragile behavior rather than a true infinite loop. The entire logic ends up not matching your intent, and the code is too unpredictable to understand or maintain.
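The cycle is easier to see outside the DOM. The following deliberately simplified sketch (the makeSwitch objects here are illustrative stand-ins, not the real views) has each side notify the other on every state change, just as every setOn in [Code 7-10] indirectly triggers a toggled event:

```typescript
// Two mutually subscribed "switches": every setOn notifies the other side.
let setOnCalls = 0;

function makeSwitch() {
  return {
    on: false,
    listener: undefined as ((on: boolean) => void) | undefined,
    setOn(on: boolean) {
      setOnCalls++;
      if (this.on !== on) {   // like .filter(... !== bool), this same-state
        this.on = on;         // guard is the only thing stopping the cycle
        this.listener?.(on);
      }
    },
  };
}

const header = makeSwitch();
const item = makeSwitch();
header.listener = on => item.setOn(on);   // header change updates the body
item.listener = on => header.setOn(on);   // body change updates the header

header.setOn(true); // one simulated "user action"...
console.log(setOnCalls); // 3
```

A single action produced three setOn calls: the change bounced back to the header before the same-state guard finally stopped the recursion.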

How can we fix this? Would a simple if statement be enough? Or do we add a parameter to SwitchView’s setOn method so that it changes the state without firing an event in certain cases?

[Code 7-8a] Attempting a “Silent” Event Option

class SwitchView extends View<{ on: boolean }> {
  override template() {
    return html`
      <button class="${this.data.on ? 'on' : ''}">
        <span class="toggle"></span>
      </button>
    `;
  }

  protected override onRender() {
    this.element().addEventListener('click', () => this.setOn(!this.data.on));
  }

  setOn(bool: boolean, isSilent?: boolean) {
    this.data.on = bool;
    this.element().classList.toggle('on', bool);
    if (isSilent) {
      return; // If it must be silent, don't fire the event
    }
    const event = new CustomEvent('toggled', { bubbles: true, detail: this.data });
    this.element().dispatchEvent(event);
  }
}

[Code 7-8a] adds an isSilent option to the setOn method, so external code can command it to “just change the state without firing an event.” But this approach isn’t a complete structural fix. It may temporarily avoid the problem, but it makes the code more complex, and it’s hard to keep consistent if the component is used in many places or more requirements emerge later. This kind of workaround can damage maintainability and extensibility in the long run.

Why Do Our Events Keep Looping, Causing Side Effects?

Why do these repeated loops and side effects keep happening? Could
restricting the event flow to one direction, or introducing a store or central
event bus, or adopting a particular library solve the issue? Not necessarily.
A deeper structural solution exists.

Changing How SwitchView Emits Events

Without modifying any other code externally, we can solve this by redesigning the event structure in SwitchView alone. Fire the event only when the user directly interacts (clicks) to change the state, and do not fire an event when the state is changed programmatically. This prevents unintended event loops and complicated interactions.

[Code 7-11] Revised Event Logic in SwitchView

class SwitchView extends View<{ on: boolean }> {
  override template() {
    return html`
      <button class="${this.data.on ? 'on' : ''}">
        <span class="toggle"></span>
      </button>
    `;
  }

  protected override onRender() {
    // previous
    // this.element().addEventListener('click', () => {
    //   this.setOn(!this.data.on);
    // });
    this.element().addEventListener('click', () => this.toggle());
  }

  // new
  private toggle() {
    this.setOn(!this.data.on);
    const event = new CustomEvent('toggled', { bubbles: true, detail: this.data });
    this.element().dispatchEvent(event);
  }

  setOn(bool: boolean) {
    this.data.on = bool;
    this.element().classList.toggle('on', bool);
    // The lines that dispatched the event have been moved into toggle()
    // const event = new CustomEvent('toggled', { bubbles: true, detail: this.data });
    // this.element().dispatchEvent(event);
  }
}

Originally, SwitchView fired a toggled event each time its state changed (setOn(bool)), causing an event to fire whenever the program logic changed its state. This triggered accidental event loops or complex interactions.

The revised code:

1. An event fires only when the user interacts

Only when the user clicks the button does the toggle() method run, within which the toggled event is fired. In other words, the event occurs solely in response to user interaction. We also declared toggle() as private, so it’s only invoked by the internal logic (the click event handler).

2. No event for programmatic state changes

setOn(bool) still changes the SwitchView state, but doesn’t fire an event. If the program logic modifies the SwitchView state, it silently changes state without notifying external code.

We’ve cleanly separated “state changes” from “event firing.” An event is fired only for direct user interaction on SwitchView, clarifying interactions with the header and body, while preventing pointless event loops or excessive cross-references.
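Applying this separation to the earlier DOM-free sketch (again with illustrative stand-in objects, not the real views) shows why the cycle disappears: setOn is now silent, and only a simulated user click notifies listeners:

```typescript
// setOn changes state silently; only toggle (a simulated user click) notifies.
let notifications = 0;

function makeSwitch() {
  return {
    on: false,
    listener: undefined as ((on: boolean) => void) | undefined,
    setOn(on: boolean) {  // programmatic change: no event
      this.on = on;
    },
    toggle() {            // user interaction: change state, then notify
      this.setOn(!this.on);
      notifications++;
      this.listener?.(this.on);
    },
  };
}

const header = makeSwitch();
const item = makeSwitch();
header.listener = on => item.setOn(on); // updates the body silently
item.listener = on => header.setOn(on); // updates the header silently

header.toggle(); // simulated click on the header switch
console.log(header.on, item.on, notifications); // true true 1
```

Exactly one notification occurs per user action; the silent setOn calls it triggers can never re-enter the cycle.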

Now, whichever SwitchView you click, everything works as intended. If you toggle the header’s SwitchView, all items in the body toggle. If you turn off one item in the body, the header toggles to reflect it.

We solved the problem fundamentally by adjusting only one component, SwitchView. Ultimately, events should be used to notify external components of “state changes not known to them,” which is exactly what we’ve done here.

private

Let’s briefly examine the private access modifier. The keyword private disallows direct access to the member (method or property) from outside the class. Only the class itself can use private members; subclasses or external code cannot.

We declared the toggle() method as private to clarify SwitchView’s internal logic. We only want it executed upon user clicks; it shouldn’t be called directly from external code.

1. Internal Logic Protection

Declaring toggle() as private prevents external code from calling it arbitrarily and messing with the state or event flow in unusual ways. This improves component stability and predictability.

2. Clear Intent
Using private clarifies that the method is part of the class’s
internal implementation and is not meant to be externally
accessed. This improves code readability and maintainability.

3. Role Separation

The setOn(bool) method remains public, allowing external code to change the SwitchView state if needed. The private toggle() method is responsible for user clicks that change state and fire events. This distinction in accessibility clarifies the difference between “user-triggered event” vs. “programmatic state change.”

By marking toggle() as private, we keep SwitchView’s internal flow cleaner, and the class design more consistent.
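As a minimal illustration, consider this hypothetical Counter class (unrelated to the app): the compiler rejects outside calls to a private method, while methods of the class itself may use it freely.

```typescript
class Counter {
  n = 0;

  private bump() {  // internal implementation detail
    this.n++;
  }

  click() {         // the public entry point calls bump() internally
    this.bump();
  }
}

const counter = new Counter();
counter.click();
console.log(counter.n); // 1
// counter.bump(); // compile error: 'bump' is private and only accessible within class 'Counter'
```

Note that TypeScript’s private is a compile-time check only; at runtime the member still exists on the object (unlike JavaScript’s # private fields).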

With that brief detour, let’s continue discussing how we finalized the
event design.

Finding the Right Design Principles

We saw how modifying the event design structure alone can solve the
problem fundamentally. But where do these design guidelines originate, and
what references can we consult? Rather than checking only libraries or
related articles, it’s good to read standard documents, well-known design
patterns, and stable event-handling approaches used by existing platforms
or frameworks.

For example, the Web API specs or iOS SDK docs—such as those
from WHATWG or W3C—can teach you the essential concepts of event
bubbling, capturing, and propagation control in the DOM event model. This
helps you handle fundamental infinite loop or structural issues, and build a
stable, maintainable event design.

[Code 7-12] demonstrates how <input type="checkbox"> handles a change event. When the user genuinely clicks the checkbox to change its state, a change event occurs, but if JavaScript programmatically changes the checked state, the event does not fire.
[Code 7-12] An Example of <input type="checkbox"> Event Behavior

<input type="checkbox">

<script>
  const checkbox = document.querySelector('input[type="checkbox"]');

  // A change event is fired when the user physically clicks to
  // change the checkbox’s state
  checkbox.addEventListener('change', () => {
    console.log('User changed:', checkbox.checked);
  });

  // Programmatically changing the checked state does not fire a change event
  setTimeout(() => {
    checkbox.checked = true; // no change event
    console.log('Programmatically changed:', checkbox.checked);
  }, 2000);
</script>

Here, clicking the checkbox prints “User changed: true/false,” whereas in the setTimeout block, forcing checkbox.checked = true changes the checkbox visually but no change event is fired. You’ll see only “Programmatically changed: true” in the console.

This is the browser’s default behavior, showing that an event is only emitted when the user physically alters the element. It aligns with the approach we took in SwitchView—an event should be emitted for actual user interactions, giving the event more meaning than a mere state change. It’s an excellent example of well-established design that has avoided infinite loops for years.

It’s always advisable to learn from these proven, time-tested techniques. A standard checkbox is already well designed to avoid infinite loops, and it demonstrates the principle for all to see.

Perhaps our repeated loops and side effects came about because, when
we first encountered a small issue, we didn’t pinpoint its fundamental
cause. Instead, we took a patchwork approach, such as simply blocking the
problem with an if statement or indiscriminately introducing libraries for
event or state/flow management—all while the system’s structure was off-
kilter. Alternatively, we might have tried to unify our entire logic with a
pub/sub architecture, reactive programming, or Observables—even in
situations unsuited to event-driven handling—only to expose the program to
scattered event handling and make it harder to manage. Without
understanding the fundamental cause of these infinite loops, repeatedly
adding, removing, or swapping out libraries can’t truly resolve the core
issues.

By contrast, referencing Web technology standard documents, iOS/AOS SDK examples, and established design principles not only solves single issues but also builds a stronger technological foundation. This approach will help produce robust, flexible designs and implementations for whatever situations arise.

A Type-Safe Pattern for Custom Event Communication

rune-ts provides a design for handling custom events in a more type-safe way under TypeScript. Changing the code as in [Code 7-13] removes the reliance on string event names, automatically infers the detail type for custom events, and makes the code more concise and stable.

[Code 7-13] Defining and Dispatching a Type-Safe Custom Event

import { CustomEventWithDetail, html, View } from "rune-ts";

type Toggle = { on: boolean; };

class Toggled extends CustomEventWithDetail<Toggle> {}

class SwitchView extends View<Toggle> {
  // ... omitted ...

  private toggle() {
    this.setOn(!this.data.on);
    // before
    // const event = new CustomEvent('toggled', { bubbles: true, detail: this.data });
    // this.element().dispatchEvent(event);
    this.dispatchEvent(Toggled, { bubbles: true, detail: this.data });
  }

  // ... omitted ...
}
rune-ts uses the CustomEventWithDetail<T> class to clearly define
the type of custom events. Also, the View class’s dispatchEvent method
can be used instead of this.element() and new CustomEvent, resulting in
more concise code. Meanwhile, detail is strictly of type Toggle, providing
safe and clear event handling.

[Code 7-14] Registering a Type-Safe Custom Event

class SettingPage extends View<Setting[]> {
  // ... omitted ...

  protected override onRender() {
    // before
    // this.toggleAllView.element().addEventListener('toggled', (e) => {
    //   const customEvent = e as CustomEvent<{ on: boolean }>;
    //   this.toggleAll(customEvent.detail.on);
    // });
    this.toggleAllView.addEventListener(Toggled, e => this.toggleAll(e.detail.on));
    // [e: Toggled], [e.detail: Toggle] is inferred

    // ... omitted ...
  }

  // ... omitted ...
}

In [Code 7-14], a type-safe event listener is registered for toggleAllView using the Toggled custom event class. this.toggleAllView.addEventListener(Toggled, e => this.toggleAll(e.detail.on)) indicates to the compiler that e is of type Toggled, so e.detail is of type Toggle. This means you can safely reference detail.on without any cast, and you also avoid relying on a string-based event name. The code is more concise and less prone to runtime errors caused by typos or changed event names.
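The core idea, keying listeners by an event class that carries its detail type, can be sketched without the DOM. The following is not rune-ts’s actual implementation, only a minimal illustration of why no string name or manual cast is needed at the call site:

```typescript
// Minimal sketch of class-keyed, typed events (illustrative names,
// not rune-ts's real internals).
class TypedEvent<T> {
  constructor(public readonly detail: T) {}
}

class Emitter {
  private handlers = new Map<Function, ((e: any) => void)[]>();

  // The event class itself is the key, and it fixes the detail type T.
  addEventListener<T>(
    EventClass: new (detail: T) => TypedEvent<T>,
    handler: (e: TypedEvent<T>) => void,
  ) {
    const list = this.handlers.get(EventClass) ?? [];
    list.push(handler);
    this.handlers.set(EventClass, list);
  }

  dispatchEvent<T>(event: TypedEvent<T>) {
    (this.handlers.get(event.constructor) ?? []).forEach(h => h(event));
  }
}

class Toggled extends TypedEvent<{ on: boolean }> {}

const emitter = new Emitter();
const seen: boolean[] = [];
// e's detail is inferred as { on: boolean }: no cast, no string event name
emitter.addEventListener(Toggled, e => seen.push(e.detail.on));
emitter.dispatchEvent(new Toggled({ on: true }));
console.log(seen); // [ true ]
```

Because the class reference is both the runtime key and the compile-time type carrier, renaming the event is a compiler-checked refactor rather than a string search.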

A Reusable SwitchView Component

[Code 7-15] shows the final SwitchView code. It can be used anywhere
as a generic UI component, much like <input type="checkbox">. You
could store it in a separate directory for use in diverse scenarios. The user
can flip it ON or OFF, and it notifies external code of state changes via a
custom event. Also, external code can programmatically change the state by
calling setOn, allowing flexible interaction.

[Code 7-15] SwitchView

class SwitchView extends View<Toggle> {
  override template() {
    return html`
      <button class="${this.data.on ? 'on' : ''}">
        <span class="toggle"></span>
      </button>
    `;
  }

  protected override onRender() {
    this.addEventListener('click', () => this.toggle());
  }

  private toggle() {
    this.setOn(!this.data.on);
    this.dispatchEvent(Toggled, { bubbles: true, detail: this.data });
  }

  setOn(bool: boolean) {
    this.data.on = bool;
    this.element().classList.toggle('on', bool);
  }
}

Before we created SettingPage, we didn’t have SwitchView. Now we do, and it’s a reusable component. If SwitchView had been prepared beforehand, the team developing the settings app could have completed it much faster and easier.

How Paradigms Create Reactive Code

[Code 7-16] shows the finished app built with the SettingItemView,
SettingListView, and SettingPage classes. We haven’t used any separate
reactive library or framework—just the language’s built-in classes, template
literals, Array methods, and Web API.

But if you look closely, you’ll notice something interesting: in SettingItemView, SettingListView, and SettingPage, there’s hardly any code that directly manipulates the DOM or deals with screen updates.
[Code 7-16] SettingItemView, SettingListView, SettingPage

type Setting = {
  title: string;
  on: boolean;
};

class SettingItemView extends View<Setting> {
  switchView = new SwitchView(this.data);

  override template() {
    return html`
      <div>
        <span class="title">${this.data.title}</span>
        ${this.switchView}
      </div>
    `;
  }
}

class SettingListView extends View<Setting[]> {
  itemViews = this.data.map(setting => new SettingItemView(setting));

  override template() {
    return html`
      <div>
        ${this.itemViews}
      </div>
    `;
  }
}

class SettingPage extends View<Setting[]> {
  private listView = new SettingListView(this.data);
  private toggleAllView = new SwitchView({ on: this.isAllOn() });

  override template() {
    return html`
      <div>
        <div class="header">
          <h2>Setting</h2>
          ${this.toggleAllView}
        </div>
        <div class="body">
          ${this.listView}
        </div>
      </div>
    `;
  }

  protected override onRender() {
    this.toggleAllView.addEventListener(Toggled, e => this.toggleAll(e.detail.on));
    this.listView.addEventListener(Toggled, () => this.syncToggleAllView());
  }

  toggleAll(on: boolean) {
    this.listView.itemViews
      .filter(itemView => itemView.data.on !== on)
      .forEach(itemView => itemView.switchView.setOn(on));
  }

  syncToggleAllView() {
    this.toggleAllView.setOn(this.isAllOn());
  }

  isAllOn() {
    return this.listView.itemViews.every(itemView => itemView.data.on);
  }
}

These classes only manipulate data, call methods on the data model,
and change state. Screen updates follow naturally. SwitchView does contain
a single line that changes DOM classes, but if SwitchView were already
built, the team making this settings app would effectively be developing in
a reactive programming environment, without any external libraries. From
the settings app perspective, one SwitchView is delivering the effect of a
large reactive framework.

Of course, you could use a library for reactive programming, but this
example shows that it’s possible to incrementally form reactive code simply
by leveraging the language paradigm and structural design. Appropriate
paradigms and language features can yield strong componentization and
modularity, eventually leading to a declarative, reactive property emerging
naturally. The result is code with minimal DOM manipulation that’s more
abstract and productive. Each component has an optimal rendering logic,
acting as a reusable, reactive software element—maintaining performance
and rendering optimization while establishing a reactive programming
environment.

In modern TypeScript/JavaScript, which has evolved into a multiparadigm language integrating object-oriented and functional paradigms, TypeScript’s static type system, improved Web APIs (VanillaJS), browser standardization, and the ES module system all create a richer and more reliable foundation for frontend app development.

Building upon this, a multiparadigm approach that iteratively
composes components and abstraction can gradually yield more declarative,
reactive code. DOM manipulation code becomes more abstract, and each
component emerges as a reusable, reactive software piece with its own
optimized rendering logic. This is a good example of how you can build a
reactive programming environment while retaining performance and
rendering optimizations.

7.2 Building a Todo App


In this chapter, we’ll look at how to build a Todo app. A Todo app may
seem simple, but when you actually try implementing one, it includes more
features and complexity than you might expect, making it a surprisingly
tricky example. The Todo app we’ll build supports the following features:

Registering a new Todo item
Deletion
Marking an item as completed
Marking all items as completed at once
Filtering to show only incomplete or only complete items

As we implement these features step by step, we’ll create a Todo app with features suitable for a real-world application rather than a simple example. By doing so, we’ll go beyond the stage of merely combining simple UI components, and gain experience in systematically handling more complex logic like state management, event handling, data filtering, and display.

Additionally, in Chapter 7.2 and the following 7.3, we will delve deeply into advanced object-oriented programming concepts and techniques, which can be somewhat lengthy and complex. However, these concepts are widely used design patterns in various GUI development kits—such as iOS SDK or Android SDK—and can help broaden your thinking and construct more robust software designs. By learning them, you can acquire various insights and know-how that you can effectively apply in real development scenarios.
CheckView

Let’s first create the CheckView component that will act as a checkbox
in our Todo app. This component operates based on Toggle data and
displays a check mark inside a circular shape depending on the toggle state.
The user can click the checkbox to change its state, and when the state
changes, it fires a Toggled event so that a parent component or container
can detect the change and respond appropriately.

Because the logic is mostly similar to that of SwitchView implemented in the previous chapter, we’ll start by taking the SwitchView code and only tweak the template part a bit.

[Code 7-17] Implementing CheckView

class CheckView extends View<Toggle> {
  override template() {
    return html`
      <span class="${this.data.on ? 'on' : ''}"></span>
    `;
  }

  protected override onRender() {
    this.element().addEventListener('click', () => this.toggle());
  }

  private toggle() {
    this.setOn(!this.data.on);
    this.dispatchEvent(Toggled, { bubbles: true, detail: this.data });
  }

  setOn(bool: boolean) {
    this.data.on = bool;
    this.element().classList.toggle('on', bool);
  }
}

export function main() {
  document.querySelector('#body')!.append(
    new CheckView({ on: true }).render()
  );
}

In [Code 7-17], the CheckView class takes Toggle-type data, displays the state, and changes its state in response to a click event. When you call render(), a checked-on checkbox shape appears on the screen; each click inverts the toggle state.

Abstract Class and Inheritance

We implemented both SwitchView and CheckView and noticed that both components share very similar logic for having an on/off toggle state and flipping it. Such duplication impairs maintainability and extensibility, so it’s best to gather common logic in one place.

In [Code 7-18], we define an abstract class called ToggleView that manages the toggle logic in a single place. ToggleView does not provide a template; it only handles the toggle state and fires the Toggled event. This way, subclasses such as SwitchView or CheckView can inherit from ToggleView and focus on each one’s template and style, while reusing the common functionality. We’ve also added convenience code in the constructor to set a default state.

[Code 7-18] ToggleView

abstract class ToggleView extends View<Toggle> {
  constructor(data?: Toggle) {
    super(data ?? { on: false });
  }

  protected override onRender() {
    this.addEventListener('click', () => this.toggle());
  }

  private toggle() {
    this.setOn(!this.data.on);
    this.dispatchEvent(Toggled, { bubbles: true, detail: this.data });
  }

  setOn(bool: boolean) {
    this.data.on = bool;
    this.element().classList.toggle('on', bool);
  }
}

Here’s the layout of [Code 7-18]:


Abstract Class: ToggleView is marked abstract, so you cannot
instantiate it directly. Instead, it’s intended for use via inheritance in
subclasses.
Common Logic Extraction: The toggle() or setOn(bool) methods,
as well as the click event handler registration, are parts that any
toggleable view can share.
Default Value: In the constructor, if data is not provided, we use {
on: false } as the default, improving convenience.
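The default-value pattern from ToggleView's constructor can be sketched in isolation. The `Base` and `ToggleModel` names below are hypothetical stand-ins for View and ToggleView; the point is the `data ?? { on: false }` fallback and the fact that an abstract class cannot be instantiated directly.

```typescript
// Minimal sketch (hypothetical names) of ToggleView's constructor default.
abstract class Base<T> {
  constructor(public data: T) {}
}

class ToggleModel extends Base<{ on: boolean }> {
  constructor(data?: { on: boolean }) {
    super(data ?? { on: false }); // fall back to a default when data is omitted
  }
}

// new Base({ on: true }); // compile error: cannot create an instance of an abstract class
console.log(new ToggleModel().data.on);             // false
console.log(new ToggleModel({ on: true }).data.on); // true
```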

Headless UI

Now subclasses like SwitchView or CheckView can inherit from


ToggleView and define only their own template. This reduces code
duplication and improves both maintainability and readability.

[Code 7-19] SwitchView, CheckView

1 class SwitchView extends ToggleView {


2 override template() {
3 return html`
4 <button class="${this.data.on ? 'on' : ''}">
5 <span class="toggle"></span>
6 </button>
7 `;
8 }
9 }
10
11 class CheckView extends ToggleView {
12 override template() {
13 return html`
14 <span class="${this.data.on ? 'on' : ''}"></span>
15 `;
16 }
17 }
18
19 console.log(new SwitchView().toHtml());
20 // <button class="SwitchView">
21 // <span class="toggle"></span>
22 // </button>
23
24 console.log(new CheckView({ on: true }).toHtml());
25 // <span class="CheckView on"></span>

In [Code 7-19], implementing SwitchView and CheckView is now


extremely simple. Each subclass reuses the logic that ToggleView provides
while applying its own UI layout and style.

By defining ToggleView as an abstract class that contains only the


common logic without providing a template, we can use it as a kind of
Headless UI. Headless UI refers to a pattern that handles only core logic,
such as state management and event handling, without directly providing a UI
representation (template, style).

This approach lets you easily extend various types of toggleable


components, cuts down on code duplication, boosts maintainability, and
clearly separates UI rendering from logic.

TodoItemView

Next, let’s use CheckView to implement TodoItemView, which shows a


single Todo item. This component takes Todo-type data, showing a title
(title) and whether the item is completed (completed).

[Code 7-20] TodoItemView

1 type Todo = {
2 title: string;
3 completed: boolean;
4 };
5
6 class TodoItemView extends View<Todo> {
7 private checkView = new CheckView({ on: this.data.completed });
8
9 override template() {
10 return html`
11 <div>
12 ${this.checkView}
13 <span class="title">${this.data.title}</span>
14 </div>
15 `;
16 }
17
18 protected override onRender() {
19 this.addEventListener(Toggled, e => this.syncCompleted());
20 }
21
22 private syncCompleted() {
23 this.data.completed = this.checkView.data.on;
24 }
25
26 setCompleted(bool: boolean) {
27 this.checkView.setOn(bool);
28 this.syncCompleted();
29 }
30 }
31
32 export function main() {
33 const todo: Todo = { title: 'Coding', completed: false };
34
35 document.querySelector('#body')!.append(
36 new TodoItemView(todo).render()
37 );
38 }

Now the screen shows a Todo item, initially with completed: false,
meaning the checkbox is unchecked. When the user checks that box, the
completed state changes to true, and the display changes accordingly.

Domain–UI Naming Mismatch Management Through


Hierarchical Encapsulation

In [Code 7-20], TodoItemView manages data: Todo with a completed


state, which is synchronized to the on state of CheckView. When the user
clicks the checkbox, the completed value is kept in sync.

In the SettingItemView example, we deliberately used a Setting type


with an on: boolean; property to match the SwitchView data structure.
However, in a real project, the property names that UI components use are
often different from those used in the business logic (domain).

1. Domain-Focused Naming

completed is a domain-friendly name in the Todo model to


represent whether a task is done. This property name can be
tightly associated with the business logic or database fields.
This way, in the model layer, you can handle and manage tasks
easily with the intuitive name completed.

2. UI-Focused Naming

Meanwhile, a UI component like CheckView simply denotes its


visible check status as on/off from a screen perspective. This
name is relevant only at the UI level and needn’t be forced to
match the domain logic.
This is akin to how you can’t rename the property checked of a
built-in <input type="checkbox"> to something else. Similarly,
CheckView keeps its own name (on) and doesn’t rename it to
completed for domain reasons.
Thus, the UI preserves its own terminology, and the mapping to
domain terms is handled in a separate layer such as
TodoItemView. This avoids unnecessary coupling and clarifies
responsibilities.

Such naming differences between domain model and UI logic are


natural. [Code 7-20] handles them in the following ways:

1. Bidirectional Synchronization

TodoItemView has methods (syncCompleted, setCompleted) to


synchronize completed and on. Internally, it maps domain
attributes to UI properties.
Thus, external code using TodoItemView never needs to consider
the UI-level on property; it just calls a domain-friendly method
like setCompleted(true).

2. Encapsulation for Simplicity

Inside TodoItemView, completed and the on state of CheckView


map to each other, so external code only deals with the completed
property. The UI update logic (on) or the fact that a CheckView is
used remains hidden.
If you later replaced CheckView with another component that
provides a checkbox interface, no external changes are needed, as
long as the outward interface remains the same.

3. Maintaining Independence

The domain model (Todo) and UI component (CheckView) are


designed from different viewpoints, but TodoItemView acts as a
mediator bridging them. If you need to modify domain logic or
replace UI components, you can do so with minimal updates,
because the system is built around these separate concerns.
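The mediator idea above can be sketched without any DOM at all. The `Check` and `TodoItem` classes below are hypothetical stand-ins for CheckView and TodoItemView: the wrapper exposes only the domain term (completed) and keeps the UI term (on) internal.

```typescript
// Stand-in for CheckView's on/off state (hypothetical, DOM-free).
class Check {
  on = false;
  setOn(b: boolean) { this.on = b; }
}

// Stand-in for TodoItemView: maps the domain name to the UI name internally.
class TodoItem {
  private check = new Check();

  constructor(public data: { completed: boolean }) {
    this.check.setOn(data.completed);
  }

  setCompleted(b: boolean) {
    this.check.setOn(b);     // UI term stays an internal detail
    this.data.completed = b; // domain term is the public contract
  }
}

const item = new TodoItem({ completed: false });
item.setCompleted(true);
console.log(item.data.completed); // true
```

External code only ever touches `setCompleted`; swapping `Check` for another checkbox-like component would require no changes outside `TodoItem`.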

TodoListView

TodoListView displays multiple TodoItemViews. It accepts data:


Todo[] and creates a TodoItemView instance for each Todo, rendering them
all in a list. It thus easily displays the entire Todo list.

[Code 7-21] TodoListView

1 class TodoListView extends View<Todo[]> {


2 itemViews = this.data.map(todo => new TodoItemView(todo));
3
4 override template() {
5 return html`
6 <div>
7 ${this.itemViews}
8 </div>
9 `;
10 }
11 }
12
13 export function main() {
14 const todos: Todo[] = [
15 { title: 'Coding', completed: false },
16 { title: 'Dinner', completed: true },
17 { title: 'Test', completed: false },
18 ];
19
20 document.querySelector('#body')!.append(
21 new TodoListView(todos).render()
22 );
23 }

In [Code 7-21], we generate a TodoItemView for each entry in todos


and list them in the template. This displays multiple Todo items on the
screen, each managing its own state via the associated TodoItemView.

Generalizing with a Generic Class

TodoListView looks quite similar to SettingListView; both share a


lot of code. Just as we used ToggleView to remove repetition in SwitchView
and CheckView, we might similarly want to generalize XxxListView to
reduce duplication.

Previously, SwitchView and CheckView had identical logic except for


the template(), and both used the same data model (Toggle). Therefore,
we could easily factor out the common code into ToggleView. But
TodoListView and SettingListView are different. Both handle lists but use
different data models (Todo vs. Setting) and different item-view classes
(TodoItemView vs. SettingItemView). We need a generic class to abstract
them.

Both TodoListView and SettingListView have a data model and a


corresponding item view. For example, TodoListView uses Todo and
TodoItemView, while SettingListView uses Setting and
SettingItemView.

What if we had a ListView from which you could inherit, specifying


which item view to use so that you can easily create a new XxxListView?
This would greatly reduce code duplication and yield a simpler structure.

[Code 7-22] shows what it would look like if such a ListView were
already implemented, and how we might use it.

[Code 7-22] TodoListView

1 class TodoListView extends ListView<TodoItemView> {


2 ItemView = TodoItemView;
3 }

If we can define TodoListView so succinctly, the code for ListView


might look like this:

[Code 7-23] ListView<IV>

1 abstract class ListView<IV extends View<object>> extends View<IV['data'][]> {
2 abstract ItemView: new (item: IV['data']) => IV;
3
4 private _itemViews: IV[] | null = null;
5
6 get itemViews() {
7 if (this._itemViews === null) {
8 this._itemViews = this.data.map(item => new this.ItemView(item));
9 }
10 return this._itemViews;
11 }
12
13 override template() {
14 return html`
15 <div>
16 ${this.itemViews}
17 </div>
18 `;
19 }
20 }

The Generic Class – ListView<IV>

ListView is an abstract class that extends View<IV['data'][]>, and it
manages multiple item views. The generic parameter IV represents an item
view. Internally, ListView infers the actual data type IV['data'] that each
item view manages and then manages the entire data set (i.e., this.data) as
an array of that type.

[Code 7-23a] Defining the Generic Type

1 abstract class ListView<IV extends View<object>> extends View<IV['data'][]> {}

Here’s an explanation of the type definition:

1. IV extends View<object>

IV must be a class extending View<object>.


For instance, TodoItemView extends View<Todo> or
SettingItemView extends View<Setting> qualify.

2. extends View<IV['data'][]>

ListView extends View<IV['data'][]>, so ListView’s internal


data (this.data) is automatically recognized as an array of
IV['data'].
For example, if IV is TodoItemView, which extends View<Todo>,
then ListView<TodoItemView> has this.data of type Todo[].
IV = TodoItemView
IV['data'] = TodoItemView['data'] = Todo
IV['data'][] = TodoItemView['data'][] = Todo[]

3. class ListView<IV extends View<object>> extends


View<IV['data'][]>

Compare this with [Code 7-21], where TodoListView was class


TodoListView extends View<Todo[]>. Our new ListView
definition lines up with that approach.

Below is an example verifying that type inference works well inside


the methods of TodoListView when it’s defined as
ListView<TodoItemView>.

[Code 7-23b] Inferring Todo[] Type

1 class TodoListView extends ListView<TodoItemView> {


2 ItemView = TodoItemView;
3
4 logAll() {
5 this.data.forEach( // [this.data: Todo[]]
6 todo => console.log(todo.title, todo.completed) // [todo: Todo]
7 );
8 }
9 }

Abstract Property: ItemView

[Code 7-23c] The XxxItemView Constructor

1 abstract ItemView: new (data: IV['data']) => IV;

Declaring this property as abstract enforces that any concrete


subclass must specify ItemView.
ItemView is of type new (data: T) => IV, i.e. a constructor that takes
data of type T and returns an instance of IV.
For example, ItemView = SettingItemView; means that the code can
dynamically create instances by doing new this.ItemView(data)
internally.
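The `new (data: T) => IV` constructor type can be demonstrated standalone. The `Ctor`, `Item`, and `Factory` names below are hypothetical; the pattern is the same one ListView uses when it calls `new this.ItemView(item)`.

```typescript
// A constructor type: anything newable with one `data` argument.
type Ctor<D, V> = new (data: D) => V;

class Item {
  constructor(public data: { label: string }) {}
}

// Holds a constructor and instantiates it later, like ListView's ItemView field.
class Factory<D, V> {
  constructor(private ItemView: Ctor<D, V>) {}
  create(data: D): V {
    return new this.ItemView(data);
  }
}

const factory = new Factory(Item); // D and V are inferred from Item's constructor
const item = factory.create({ label: 'hello' });
console.log(item.data.label); // 'hello'
```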

_itemViews for Caching Item Views

[Code 7-23d] private _itemViews

1 private _itemViews: IV[] | null = null;

Initially set to null to indicate we haven’t created any item views yet.
The getter will instantiate them the first time it’s accessed, then store
the result for subsequent reuse.

The itemViews Getter

[Code 7-23e] get itemViews()

1 get itemViews() {
2 if (this._itemViews === null) {
3 this._itemViews = this.data.map(item => new this.ItemView(item));
4 }
5 return this._itemViews;
6 }

Because ListView extends View<IV['data'][]>, the inherited this.data is
an array of IV['data'].
The first time we access itemViews (if _itemViews is null), we create
item view instances for each data item via map, store them in
_itemViews, and return them.
The second time, we just return _itemViews.

Why Implement itemViews as a Getter?

By implementing itemViews as a getter, ListView lets a child class define
a field like ItemView = TodoItemView; in a concise and explicit way.
Detailed points:
1. Class Inheritance and Field Initialization

In JavaScript/TypeScript, when a subclass is instantiated, the


parent class constructor runs first. Then the child class constructor
runs, initializing its fields. So at the time the parent constructor
runs, the child class’s fields aren’t yet initialized.

2. Child-Class Field Initialization Timing

A concrete class like TodoListView that extends ListView sets


ItemView = TodoItemView;. This assignment occurs after the
parent constructor finishes.
Hence, if we try to directly instantiate item views in the parent
constructor, ItemView might not exist yet.

3. Lazy Initialization

By postponing actual instantiation until the getter is accessed, we


ensure the child class’s ItemView field is already initialized. So
we can safely reference this.ItemView to create item-view
instances.

4. Improved Expressiveness

This “one-step delayed” technique is an implementation that


respects the class initialization sequence. The parent class
(ListView) does not directly reference ItemView during
construction, and defers resource preparation until the child
class’s fields are initialized. This often improves structural clarity.

5. Avoiding Unnecessary Initialization

Furthermore, we can delay instance creation until needed,


avoiding the overhead if it’s not actually required. In some
situations, this can enhance efficiency.
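These initialization-order points can be verified with a small DOM-free sketch. The `Parent`/`Child` names are hypothetical stand-ins for ListView and TodoListView: the parent never touches the child's field in its constructor, and the lazy getter only reads it once the subclass's field initializers have run.

```typescript
// Minimal sketch (hypothetical names) of the lazy-getter technique.
class Parent {
  private _items: string[] | null = null;

  // Subclasses replace this field; their initializers run after super() returns,
  // so reading this.source() inside Parent's constructor would see the default.
  protected source: () => string[] = () => [];

  get items(): string[] {
    if (this._items === null) {
      // First access: by now the child's field is initialized.
      this._items = this.source();
    }
    return this._items; // subsequent accesses reuse the cached array
  }
}

class Child extends Parent {
  protected source = () => ['a', 'b'];
}

const child = new Child();
console.log(child.items);                 // ['a', 'b']
console.log(child.items === child.items); // true — created once, then cached
```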
Overriding the template Method

[Code 7-23f] override template()

1 override template() {
2 return html`
3 <div>
4 ${this.itemViews}
5 </div>
6 `;
7 }

We override View’s template() method to define how ListView is


rendered.
As a result, each item view is rendered in a <div>, with ListView
wrapping them as a container.

Inheriting from the ListView Class

By encapsulating the common logic of repeatedly rendering item


views in this abstract ListView, we can easily implement various list-type
views like TodoListView or SettingListView. We can also extend
ListView with additional methods or properties for handling item views,
letting classes that share a purpose evolve together and increasing productivity.

[Code 7-23g] Using ListView to implement SettingListView, TodoListView

1 class SettingListView extends ListView<SettingItemView> {


2 ItemView = SettingItemView;
3 }
4
5 class TodoListView extends ListView<TodoItemView> {
6 ItemView = TodoItemView;
7 }

When you inherit, all you need to do is specify the ItemView


constructor and type. The common iterative rendering logic is handled by
ListView. This reduces duplication and creates a more flexible, extensible
structure for adding new list-type views.
TodoPage

Let’s now implement TodoPage, which uses TodoListView to provide


features like marking all Todo items completed at once or adding new
Todos. Below is all the code from SettingPage’s “toggle all items” logic,
slightly modified (1), (2), (3), (4) to suit the Todo app.

[Code 7-24] TodoPage

1 class TodoPage extends View<Todo[]> {


2 listView = new TodoListView(this.data);
3 toggleAllView = new CheckView({on: this.isAllOn()}); // (1) changed to CheckView
4
5 override template() {
6 return html`
7 <div>
8 <div class="header">
9 ${this.toggleAllView}
10 <input type="text">
11 </div>
12 <div class="body">
13 ${this.listView}
14 </div>
15 </div>
16 `;
17 }
18
19 protected override onRender() {
20 this.toggleAllView.addEventListener(Toggled, e => this.toggleAll(e.detail.on));
21 this.listView.addEventListener(Toggled, () => this.syncToggleAllView());
22 }
23
24 toggleAll(on: boolean) {
25 this.listView.itemViews
26 .filter(itemView => itemView.data.completed !== on) // (2) changed to completed
27 .forEach(itemView => itemView.setCompleted(on)); // (3) changed to setCompleted(on)
28 }
29
30 syncToggleAllView() {
31 this.toggleAllView.setOn(this.isAllOn());
32 }
33
34 isAllOn() {
35 return this.listView.itemViews.every(
36 itemView => itemView.data.completed // (4) changed to completed
37 );
38 }
39 }
40
41 export function main() {
42 const todos: Todo[] = [
43 { title: 'Coding', completed: false },
44 { title: 'Dinner', completed: true },
45 { title: 'Test', completed: false },
46 ];
47
48 document.querySelector('#body')!.append(
49 new TodoPage(todos).render()
50 );
51 }

In the header, there’s a CheckView for marking all Todo items


completed simultaneously and an input element for entering new Todos.
The body area uses TodoListView to display each Todo item. Any item
that’s still incomplete can be toggled to complete by clicking its checkbox.
Likewise, if the user checks the header checkbox, all items are marked
completed.

Removing Duplication via Inter-Object Communication


Based on Interfaces

The operation of TodoPage is nearly identical to the SettingPage, and


we see a similar pattern of duplication throughout [Code 7-24]. This time,
we’ll try to solve the duplication by using an interface rather than
inheritance, to enhance structural flexibility.

Reusing Existing Code

The following example extracts the "toggle all items" logic and the code
that manages the overall toggle state from SettingPage into a class called
TogglePageController.

[Code 7-25] TogglePageController

1 class TogglePageController {
2 // Needs abstraction: SettingListView, SwitchView
3 listView = new SettingListView(this.data);
4 toggleAllView = new SwitchView({on: this.isAllOn()});
5
6 // error: no parent class to override
7 override template() {
8 return html`
9 <div>
10 <div class="header">
11 <h2>Setting</h2>
12 ${this.toggleAllView}
13 </div>
14 <div class="body">
15 ${this.listView}
16 </div>
17 </div>
18 `;
19 }
20
21 // error: no parent class to override
22 protected override onRender() {
23 this.toggleAllView.addEventListener(Toggled, e => this.toggleAll(e.detail.on));
24 this.listView.addEventListener(Toggled, () => this.syncToggleAllView());
25 }
26
27 toggleAll(on: boolean) {
28 this.listView.itemViews
29 .filter(itemView => itemView.data.on !== on) // needs abstraction
30 .forEach(itemView => itemView.switchView.setOn(on)); // needs abstraction
31 }
32
33 syncToggleAllView() {
34 this.toggleAllView.setOn(this.isAllOn());
35 }
36
37 isAllOn() {
38 return this.listView.itemViews.every(itemView => itemView.data.on);
39 }
40 }

In [Code 7-25], TogglePageController is a class that was extracted


from the relevant logic of “toggle all items” in SettingPage. But:

1. Error Using override Keyword

override is used when overriding a method from a parent class,


but TogglePageController does not extend any class.
So override template() or override onRender() cause
TypeScript compile errors.

2. Need for Abstraction around SettingListView and


SwitchView
Currently, TogglePageController references SettingListView
and SwitchView directly.
This means TogglePageController depends on a SettingPage-
specific implementation.
For reusability in TodoPage or other “toggleable pages,” we need
a more abstract interface or superclass, not a direct reference to
particular classes.
For instance, we might rely on something like ListView that
SettingListView inherits, and SwitchView might be replaced
with ToggleView or an interface.

Creating a togglePage Member Field

We’ll address the first problem (the override error) and go a step
further:

We remove template() entirely, since template creation is not the


TogglePageController’s responsibility.
We move the logic from onRender() to the constructor(), so we can
perform any event setup when the controller is initialized.
Rather than having listView and toggleAllView as fields directly in
TogglePageController, we let the object togglePage hold those
fields. This design gives TogglePageController more flexibility in
controlling a togglePage passed to it.

Below is a revised code example.

[Code 7-25a] Creating a togglePage Member Field

1 class TogglePageController {
2 constructor(private togglePage: View<object>) {
3 const { toggleAllView, listView } = this.togglePage; // TS2339: ...
4 toggleAllView.data.on = this.isAllOn();
5 toggleAllView.addEventListener(Toggled, e => this.toggleAll(e.detail.on));
6 listView.addEventListener(Toggled, () => this.syncToggleAllView());
7 }
8
9 toggleAll(on: boolean) {
10 // TS2339: Property 'listView' does not exist on type View<object>
11 this.togglePage.listView.itemViews
12 .filter(itemView => itemView.data.on !== on)
13 .forEach(itemView => itemView.switchView.setOn(on));
14 }
15
16 syncToggleAllView() {
17 // TS2339: Property 'toggleAllView' does not exist on type View<object>
18 this.togglePage.toggleAllView.setOn(this.isAllOn());
19 }
20
21 isAllOn() {
22 // TS2339: Property 'listView' does not exist on type View<object>
23 return this.togglePage.listView.itemViews.every(itemView => itemView.data.on);
24 }
25 }

Here, togglePage is now a member field in TogglePageController,


and we’ve removed template() from TogglePageController. We also
removed the listView and toggleAllView fields from the controller,
relocating the initialization logic to the constructor. This cleanly separates
roles between TogglePageController and togglePage.

But now we get TS2339: Property ... does not exist on type
View<object> because TypeScript interprets View<object> as having no
toggleAllView or listView property.

Defining a TogglePage Interface

We’d like togglePage to be either SettingPage or TodoPage or any


page that has a toggleAllView and a listView. We can do so as follows:

1. Define an Interface

Let’s define an interface so that TypeScript knows togglePage


must indeed have toggleAllView and listView. We can extend
View<object>.

2. Implementation in Concrete Classes

SettingPage or TodoPage can implement that interface (or extend


a base class) so that TogglePageController can be reused
without errors.
Below is a more advanced, though still incomplete, code snippet. In
practice your IDE could infer the types in [Code 7-25b]; here we write them
out explicitly for readers.

[Code 7-25b] Defining the TogglePage Interface

1 type ExtractItemView<T> = T extends ListView<infer IV> ? IV : never;


2
3 interface TogglePage<LV extends ListView<ExtractItemView<LV>>> {
4 toggleAllView: ToggleView; // parent class of SwitchView and CheckView
5 listView: LV; // parent class of SettingListView and TodoListView
6 }
7
8 class TogglePageController<LV extends ListView<ExtractItemView<LV>>> {
9 constructor(private togglePage: TogglePage<LV>) {
10 const toggleAllView: ToggleView = this.togglePage.toggleAllView;
11 const listView: LV = this.togglePage.listView;
12 toggleAllView.data.on = this.isAllOn();
13 toggleAllView.addEventListener(Toggled, e => this.toggleAll(e.detail.on));
14 listView.addEventListener(Toggled, () => this.syncToggleAllView());
15 }
16
17 toggleAll(on: boolean) {
18 this.togglePage.listView.itemViews
19 .filter((itemView: ExtractItemView<LV>) => itemView.data.on !== on)
20 // TS2339: Property 'data' does not exist on type ExtractItemView<LV>
21 .forEach((itemView: ExtractItemView<LV>) => itemView.switchView.setOn(on));
22 // TS2339: Property 'switchView' does not exist on type ExtractItemView<LV>
23 }
24
25 syncToggleAllView() {
26 this.togglePage.toggleAllView.setOn(this.isAllOn());
27 }
28
29 isAllOn() {
30 return this.togglePage.listView.itemViews.every(
31 (itemView: ExtractItemView<LV>) => itemView.data.on
32 );
33 // TS2339: Property 'data' does not exist on type ExtractItemView<LV>
34 }
35 }

About ExtractItemView<T>

Before discussing TogglePage<LV> and TogglePageController, let’s


first see what ExtractItemView<T> is.
[Inside Code 7-25b] ExtractItemView<T>

1 type ExtractItemView<T> = T extends ListView<infer IV> ? IV : never;

This type alias uses conditional types and the infer keyword to extract
the IV from a ListView<IV>.
For example, if LV is SettingListView, which is effectively
ListView<SettingItemView>, ExtractItemView<LV> infers to
SettingItemView.

Hence, ExtractItemView<T> checks if T is ListView<IV>, returning


the IV if so. This helps TogglePageController<LV> safely use
ExtractItemView<LV> for the item view’s concrete type, so SettingPage or
TodoPage each infers SettingItemView or TodoItemView, respectively.
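The same conditional-type mechanism can be exercised in isolation. The `Box`/`Unbox` names below are hypothetical; `Unbox` plays the role ExtractItemView plays, using `infer` to pull the type parameter back out of a subclass.

```typescript
// A generic wrapper, standing in for View<T>/ListView<IV> (hypothetical names).
class Box<T> {
  constructor(public value: T) {}
}

// Conditional type with infer: if B is a Box of something, name that something V.
type Unbox<B> = B extends Box<infer V> ? V : never;

class NumberBox extends Box<number> {}

// Type-level check: Unbox<NumberBox> resolves to number,
// just as ExtractItemView<SettingListView> resolves to SettingItemView.
const n: Unbox<NumberBox> = 42;
console.log(n); // 42
console.log(new NumberBox(7).value); // 7
```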

TogglePage<LV> and TogglePageController

Returning to TogglePage<LV> and TogglePageController, we see


that:

TogglePage<LV> is an interface specifying the contract for a toggleable


page. By making toggleAllView be ToggleView, the page is free to
use SwitchView or CheckView. And listView is typed as LV, restricting
it to a matching ListView type.
TogglePageController’s constructor param is TogglePage<LV>. This
solves the errors in [Code 7-25a] because referencing toggleAllView
and listView is valid now.

However, references to itemView.data.on or


itemView.switchView.setOn(on) still cause type errors. That's because
ExtractItemView<LV> alone can't tell us exactly which properties and
methods exist, and the "check logic" differs between SettingItemView
(itemView.data.on, itemView.switchView.setOn(...)) and
TodoItemView (itemView.data.completed,
itemView.setCompleted(...)).
One possible fix is to define an additional interface in the ListView
item specifying “what method sets its checked state,” so that both
TodoItemView and SettingItemView implement it. But that would force
TodoItemView to know about TogglePageController's constraints, increasing
complexity.

getItemViewOn, setItemViewOn in TogglePage

We’ll choose a more flexible approach: we delegate the logic of


“which field to access for the check state” and “how to toggle it” to
TodoPage or SettingPage. Then TogglePageController doesn’t need to
know the details.

[Code 7-25c] Adding getItemViewOn, setItemViewOn to TogglePage

1 type ExtractItemView<T> = T extends ListView<infer IV> ? IV : never;


2
3 export interface TogglePage<LV extends ListView<ExtractItemView<LV>>> {
4 toggleAllView: ToggleView;
5 listView: LV;
6 getItemViewOn(itemView: ExtractItemView<LV>): boolean;
7 setItemViewOn(itemView: ExtractItemView<LV>, bool: boolean): void;
8 }

We define two methods in the interface:

1. getItemViewOn(itemView: ExtractItemView<LV>): boolean;

Returns whether the itemView is “checked.”


For example, if itemView is SettingItemView, we can implement
return itemView.data.on.

2. setItemViewOn(itemView: ExtractItemView<LV>, bool:


boolean): void;

Sets the “checked” state of itemView.


If itemView is a TodoItemView, we might do
itemView.setCompleted(bool).
Finalizing TogglePageController with TogglePage

Even though TogglePage is merely an interface describing “must


provide these methods,” we can use them in [Code 7-25d] as if already
implemented.

[Code 7-25d] Using getItemViewOn, setItemViewOn in TogglePageController

1 export class TogglePageController<LV extends ListView<ExtractItemView<LV>>> {
2 constructor(private togglePage: TogglePage<LV>) {
3 const toggleAllView: ToggleView = this.togglePage.toggleAllView;
4 const listView: LV = this.togglePage.listView;
5 toggleAllView.data.on = this.isAllOn();
6 toggleAllView.addEventListener(Toggled, e => this.toggleAll(e.detail.on));
7 listView.addEventListener(Toggled, () => this.syncToggleAllView());
8 }
9
10 toggleAll(on: boolean) {
11 const { listView, getItemViewOn, setItemViewOn } = this.togglePage;
12 listView.itemViews
13 .filter(itemView => getItemViewOn(itemView) !== on)
14 .forEach(itemView => setItemViewOn(itemView, on));
15 }
16
17 syncToggleAllView() {
18 this.togglePage.toggleAllView.setOn(this.isAllOn());
19 }
20
21 isAllOn() {
22 return this.togglePage.listView.itemViews.every(
23 this.togglePage.getItemViewOn
24 );
25 }
26 }

Here, TogglePageController calls togglePage.getItemViewOn and


togglePage.setItemViewOn.

1. getItemViewOn(itemView)

In toggleAll or isAllOn,
togglePage.getItemViewOn(itemView) checks whether the item
view is currently checked.

2. setItemViewOn(itemView, on)
toggleAll uses togglePage.setItemViewOn(itemView, on) to
modify the item’s checked state.

Hence, TogglePageController doesn’t need to know any detail about


how the item view or data model works. All it needs is the interface
methods from TogglePage. This design has the following benefits:

1. Flexible Extensibility

Each page (TodoPage, SettingPage, etc.) only needs to specify


“which field to treat as checked.” The logic changes or page
additions are easy.

2. Reduced Code Duplication

TogglePageController does not specifically rely on any item


view or data model, so it can be reused for multiple pages.

3. Hidden Implementation Details

The specifics of “which property or method to manipulate in


itemView” are hidden behind the interface.
TogglePageController doesn’t care about that, making its
structure simpler.

In sum, by just defining getItemViewOn and setItemViewOn in


TogglePage, each page (TodoPage, SettingPage, etc.) implements the
check logic in its own method, while TogglePageController delegates all
the detail to that page. This combination yields code reusability and
flexibility.

Making TodoPage Implement TogglePage

Now let’s have TodoPage and SettingPage each implement


TogglePage so they become toggleable pages.
[Code 7-26] Making TodoPage a TogglePage<TodoListView>

1 class TodoPage extends View<Todo[]> implements TogglePage<TodoListView> {


2 toggleAllView = new CheckView();
3 listView = new TodoListView(this.data);
4
5 getItemViewOn(itemView: TodoItemView): boolean {
6 return itemView.data.completed;
7 }
8
9 setItemViewOn(itemView: TodoItemView, bool: boolean): void {
10 return itemView.setCompleted(bool);
11 }
12
13 private togglePageController = new TogglePageController(this);
14
15 override template() {
16 return html`
17 <div>
18 <div class="header">
19 ${this.toggleAllView}
20 <input type="text">
21 </div>
22 <div class="body">
23 ${this.listView}
24 </div>
25 </div>
26 `;
27 }
28 }

1. implements TogglePage<TodoListView>

TodoPage must meet the contract for a toggleable page.

2. Properties and Methods

toggleAllView: uses CheckView to handle mass on/off.


listView: uses TodoListView to show a list of Todo items.
getItemViewOn(...): returns itemView.data.completed to
check if it’s currently completed.
setItemViewOn(...): calls itemView.setCompleted(bool) to
change that state.

3. private togglePageController = new


TogglePageController(this)
We pass this into the controller, so it can set up event listeners
and run logic for toggling.
So TodoPage focuses on UI layout, while the controller handles
the toggle logic, free of duplication.

[Code 7-27] Making SettingPage a TogglePage<SettingListView>

1 class SettingPage extends View<Setting[]> implements


TogglePage<SettingListView> {
2 toggleAllView = new SwitchView();
3 listView = new SettingListView(this.data);
4
5 getItemViewOn(itemView: SettingItemView): boolean {
6 return itemView.data.on;
7 }
8
9 setItemViewOn(itemView: SettingItemView, bool: boolean): void {
10 return itemView.switchView.setOn(bool);
11 }
12
13 private togglePageController = new TogglePageController(this);
14
15 override template() {
16 return html`
17 <div>
18 <div class="header">
19 <h2>Setting</h2>
20 ${this.toggleAllView}
21 </div>
22 <div class="body">
23 ${this.listView}
24 </div>
25 </div>
26 `;
27 }
28 }

Here SettingPage also uses TogglePageController, providing:

toggleAllView: a SwitchView
listView: a SettingListView
getItemViewOn(...): returns itemView.data.on
setItemViewOn(...): calls itemView.switchView.setOn(bool)
togglePageController: calls new TogglePageController(this),
delegating the check logic to the controller
We’ve removed duplication from TodoPage and SettingPage by
moving it into TogglePageController. Running the changed code shows
both pages working as before.

From a GoF Design Pattern Perspective – The Strategy


Pattern

The structure of TodoPage with TogglePageController is quite


similar to the Strategy pattern in the GoF. TogglePageController serves
as the “Context” that carries methods to perform the check logic, while
various pages that implement TogglePage provide the different “Strategies”
for how the check state is read or written:

Context (TogglePageController): has the methods for the check logic but delegates “how to read/write the check state” externally.
Strategy (TogglePage): each page implements “how to read/write the check state” differently.

Knowing these standard patterns can clarify your code’s separation of roles, facilitate team communication, and improve code consistency.
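To make the role mapping concrete, here is a minimal, self-contained sketch of the Strategy pattern using hypothetical names (ToggleStrategy and ToggleContext are illustrations, not part of the app’s codebase); it mirrors the Context/Strategy split described above:

```typescript
// Hypothetical names: ToggleStrategy plays the role of TogglePage (Strategy),
// ToggleContext plays the role of TogglePageController (Context).
interface ToggleStrategy {
  isOn(item: { done: boolean }): boolean;            // how to read the check state
  setOn(item: { done: boolean }, on: boolean): void; // how to write the check state
}

// Context: owns the "toggle everything" algorithm, delegates state access.
class ToggleContext {
  constructor(private strategy: ToggleStrategy) {}

  toggleAll(items: { done: boolean }[], on: boolean) {
    items
      .filter(item => this.strategy.isOn(item) !== on)
      .forEach(item => this.strategy.setOn(item, on));
  }
}

// One concrete Strategy: read/write the `done` field directly.
const doneFieldStrategy: ToggleStrategy = {
  isOn: item => item.done,
  setOn: (item, on) => { item.done = on; },
};

const items = [{ done: false }, { done: true }];
new ToggleContext(doneFieldStrategy).toggleAll(items, true);
console.log(items.every(item => item.done)); // true
```

Swapping in a different ToggleStrategy changes how state is read and written without touching the Context’s algorithm, which is exactly the relationship between TogglePageController and its pages.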

Inter-Object Communication with First-Class Functions – A “Callback Injection” Pattern

Traditional GoF design patterns typically revolve around “class inheritance” or “interface implementation” for extensibility or replacement. But in today’s languages, which incorporate functional programming features, passing a lambda (callback) as a parameter can be a simpler way to swap or extend logic.

Using function arguments can drastically reduce coupling and keep the code simpler. The community sometimes calls this structure “function (callback) injection.”

A First-Class Function Approach in ToggleListController

[Code 7-28] shows ToggleListController using first-class function callbacks to handle “is an item view checked?” and “how to set that check,” provided from outside.

[Code 7-28] ToggleListController

export class ToggleListController<
  TV extends ToggleView,
  LV extends ListView<ExtractItemView<LV>>
> {
  constructor(
    public toggleAllView: TV,
    public listView: LV,
    private getItemViewOn: (itemView: ExtractItemView<LV>) => boolean,
    private setItemViewOn: (itemView: ExtractItemView<LV>, bool: boolean) => void
  ) {
    this.toggleAllView.data.on = this.isAllOn();
    this.toggleAllView.addEventListener(Toggled, (e) => this.toggleAll(e.detail.on));
    this.listView.addEventListener(Toggled, () => this.syncToggleAllView());
  }

  toggleAll(bool: boolean) {
    this.listView.itemViews
      .filter((itemView) => this.getItemViewOn(itemView) !== bool)
      .forEach(itemView => this.setItemViewOn(itemView, bool));
  }

  syncToggleAllView() {
    this.toggleAllView.setOn(this.isAllOn());
  }

  isAllOn() {
    return this.listView.itemViews.every(this.getItemViewOn);
  }
}

Here, we solve the problem with no classic OOP interface or method overriding. The class’s internal logic does not know which property in itemView to treat as “checked,” or how to modify it. It only calls the callback functions.

ToggleListController is decoupled from any UI or data model, and it can be reused for various “checkable” items (TodoItemView, SettingItemView, etc.) by changing the callbacks.

Linking TodoPage with ToggleListController via Lambdas

As in [Code 7-29], we can define small lambdas in TodoPage that specify “which field to read/write for the check state” and simply pass them to ToggleListController.

[Code 7-29] TodoPage & ToggleListController

class TodoPage extends View<Todo[]> {
  private toggleListController = new ToggleListController(
    new CheckView(),
    new TodoListView(this.data),
    (itemView) => itemView.data.completed,          // getItemViewOn
    (itemView, bool) => itemView.setCompleted(bool) // setItemViewOn
  );

  override template() {
    return html`
      <div>
        <div class="header">
          ${this.toggleListController.toggleAllView}
          <input type="text">
        </div>
        <div class="body">
          ${this.toggleListController.listView}
        </div>
      </div>
    `;
  }
}

TodoPage no longer needs to implement methods for the toggle logic. The crucial bit (reading/writing the check state) is just two lambdas, so TodoPage is simpler and more clearly decoupled. Of course, if we had too many lambdas, it could hurt readability, but for simple connecting logic it keeps the code minimal and the separation of concerns clear.

In addition, the lambdas in [Code 7-29] let TypeScript infer that itemView is a TodoItemView, so we can safely use data.completed and setCompleted().

Linking SettingPage with ToggleListController via Lambdas

[Code 7-29a] does the same for SettingPage: we provide two lambdas
for “which field to treat as checked and how to change it.” SettingPage
itself no longer needs any separate logic method.

[Code 7-29a] SettingPage & ToggleListController

class SettingPage extends View<Setting[]> {
  private toggleAllView = new SwitchView();
  private listView = new SettingListView(this.data);

  private toggleListController = new ToggleListController(
    this.toggleAllView,
    this.listView,
    (itemView) => itemView.data.on,                     // getItemViewOn
    (itemView, bool) => itemView.switchView.setOn(bool) // setItemViewOn
  );

  override template() {
    return html`
      <div>
        <div class="header">
          ${this.toggleAllView}
          <input type="text">
        </div>
        <div class="body">
          ${this.listView}
        </div>
      </div>
    `;
  }
}

We don’t need to implement any method in SettingPage for the toggle logic. Two simple lambdas do the job, and the code stays neat with minimal coupling. Moreover, ToggleListController still works when SettingPage’s fields are declared private, unlike TogglePageController, which required SettingPage to keep them public.

A Multiparadigm Code Design

GoF (Gang of Four)’s “Design Patterns: Elements of Reusable Object-Oriented Software” was published in 1994, primarily targeting the C++ and Smalltalk environments commonly used at the time. Since Java was officially released in 1996, and its introduction of first-class functions came much later with Java 8 in 2014, combining modern functional programming techniques (e.g., lambdas) with these patterns was not feasible back then.

Today, modern programming languages not only support object-oriented programming but also serve as functional programming languages. Solving problems using an object-oriented approach doesn’t mean functional techniques should be excluded, and there’s no reason to avoid using classes for problems where functional programming is the focus.

Hence, some GoF design patterns can be greatly simplified by passing around a function instead of encapsulating commands or strategies in objects. For example, Command or Strategy can often be replaced by a single lambda, while Observer or Template Method can be recast in a functional style for shorter, more direct code.
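As a small illustration (DiscountStrategy, FlatTenOff, and checkout are hypothetical names, not from this book’s codebase), here is the same design expressed first as a classic Strategy class, then as a single function argument:

```typescript
// Classic GoF Strategy: one interface, one class per interchangeable behavior.
interface DiscountStrategy {
  apply(price: number): number;
}

class FlatTenOff implements DiscountStrategy {
  apply(price: number) { return price - 10; }
}

function checkoutWithStrategy(price: number, strategy: DiscountStrategy): number {
  return strategy.apply(price);
}

// The same extensibility with a first-class function: the "strategy" is a lambda.
function checkout(price: number, discount: (price: number) => number): number {
  return discount(price);
}

console.log(checkoutWithStrategy(100, new FlatTenOff())); // 90
console.log(checkout(100, price => price - 10));          // 90
```

Both calls produce the same result; the function-based version simply drops the interface and class ceremony when the behavior fits in one expression.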

OOP meticulously models object interactions in terms of state and messages, while functional programming writes more declarative code by minimizing side effects, using persistent data structures or monads. Combining both can yield a wide range of design possibilities—like applying list processing in a class, or controlling object state with functional style.

On the other hand, depending on how side effects are handled, functional programming and object-oriented programming may appear to conflict with each other. However, in practice, they can complement one another to create better design cases. Just as object-oriented programming avoids indiscriminate state or overly complex message structures, functional programming does not unconditionally recommend excessive recursion or partial application. The key lies in how the shared principles of “separation of responsibilities and minimization of side effects” are integrated. Both paradigms ultimately aim to reduce unnecessary complexity and side effects, moving in the same direction toward better design.

As shown in this chapter’s examples, mixing functional techniques and OOP yields code that is both less dependent and simpler, with more flexible object state management. This multiparadigm approach can be a very practical design methodology.

7.3 Building a Todo App, Part 2

Now there are two remaining Todos to handle before completing our Todo app. In this chapter, we’ll finish both:

[ ] Register a new Todo item
[x] Mark items as completed
[x] Mark all items as completed at once
[ ] Filter so you see only incomplete or only completed items

Simplifying Code with Decorators

Previously, in ToggleView, we registered click events directly in the onRender() method. Now we’ll use decorators to make the event registration logic much simpler, improving code readability.

TypeScript’s decorators are a syntax feature for injecting additional functionality or metadata into classes, methods, accessors, properties, or parameters. At compile time or runtime, decorators let you apply extra operations or modify attributes on those targets (classes, methods, etc.). To use the decorator syntax, you usually need to set "experimentalDecorators" and "emitDecoratorMetadata" to true in the "compilerOptions" of your tsconfig.json.
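For reference, a minimal tsconfig.json sketch enabling this (legacy, experimental) decorator syntax might look like the following; the "target" value is only an assumption for illustration, and the rest of your compiler options would sit alongside these:

```json
{
  "compilerOptions": {
    "target": "ES2017",
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true
  }
}
```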

[Code 7-30] ToggleView Before Decorator

abstract class ToggleView extends View<Toggle> {
  constructor(data?: Toggle) {
    super(data ?? { on: false });
  }

  // We could replace the code below with an @on('click') decorator.
  protected override onRender() {
    this.addEventListener('click', () => this.toggle());
  }

  private toggle() {
    this.setOn(!this.data.on);
    this.dispatchEvent(Toggled, { bubbles: true, detail: this.data });
  }

  setOn(bool: boolean) {
    this.data.on = bool;
    this.element().classList.toggle('on', bool);
  }
}

If we’re prepared to use a decorator like @on('click'), we can migrate the event registration part of onRender() into a decorator, making the code more concise.

[Code 7-31] ToggleView After Decorator

abstract class ToggleView extends View<Toggle> {
  constructor(data?: Toggle) {
    super(data ?? { on: false });
  }

  @on('click')
  private toggle() {
    this.setOn(!this.data.on);
    this.dispatchEvent(Toggled, { bubbles: true, detail: this.data });
  }

  setOn(bool: boolean) {
    this.data.on = bool;
    this.element().classList.toggle('on', bool);
  }
}

In [Code 7-31], there’s no longer any logic in onRender() to register an event; instead, we use the @on('click') decorator to specify that the toggle() method is connected to the click event. As a result, you can see at a glance which method handles which event, making the code structure much clearer.

We can implement a decorator like @on('click') as follows. The code below omits type annotations to focus on runtime behavior and how it works.

[Code 7-32] Implementing the @on() Decorator

function on(eventType) {                                      // (1)
  return function (viewPrototype, propertyKey, descriptor) {  // (2)
    const method = descriptor.value;
    // (3) The actual method that the decorator is applied to (e.g. toggle())

    // (4) Temporarily store the original onRender
    const originalOnRender = viewPrototype.onRender;

    // (5) Redefine onRender
    viewPrototype.onRender = function() {
      // Register the decorator’s target method as an event handler
      this.addEventListener(eventType, method);

      // Continue with the original onRender logic
      originalOnRender.call(this);
    };
  };
}

1. The on(eventType) function
This outer function is the decorator’s “factory,” taking an event type string like 'click' as an argument and returning the inner function.
Usage example: @on('click') toggle() { ... }

2. The inner returned function
Called when the method (e.g., toggle()) is decorated.
Parameters:
viewPrototype: the class prototype to which the decorator is applied
propertyKey: the method name (e.g., "toggle")
descriptor: the property descriptor for that method, where descriptor.value is the method function

3. const method = descriptor.value;
descriptor.value refers to the decorated method (here toggle). Later we call this.addEventListener(eventType, method) to register method as the handler.

4. Storing the original onRender
The decorator fetches the class prototype’s onRender method and keeps it in a temporary variable, so the newly defined onRender can call the original code without losing it.

5. The viewPrototype.onRender = function() { ... } redefinition
We override onRender so that “register event handler → call original onRender” happens in that order. The line this.addEventListener(eventType, method) means that when the view is rendered, the method is automatically registered as an event handler. Then originalOnRender.call(this) calls the original code (the parent or previously declared onRender).

Using decorators, you can place the event registration logic near the
method definition. This makes it easier to see at once which method handles
which event. The process of manually registering events in onRender(),
etc., disappears, so the code is more concise and easier to maintain.

The View class’s addEventListener method supports a class-based custom event pattern (like the Toggled example), making type inference easy. Combined with decorators, you can simplify code like TodoItemView:

[Code 7-33] Applying @on(Toggled) to TodoItemView

class TodoItemView extends View<Todo> {
  private checkView = new CheckView({ on: this.data.completed });

  override template() {
    return html`
      <div>
        ${this.checkView}
        <span class="title">${this.data.title}</span>
      </div>
    `;
  }

  @on(Toggled)
  private syncCompleted() {
    this.data.completed = this.checkView.data.on;
  }

  setCompleted(bool: boolean) {
    this.checkView.setOn(bool);
    this.syncCompleted();
  }
}

Comparing to [Code 7-20] in chapter 7.2, you see we replaced the old
onRender logic with @on(Toggled)—which is simpler.

Below is an example that includes type information in the decorator, for improved code safety and maintainability. In TypeScript, when using decorators, you can declare the type of the event object and the method signature to get full IDE autocompletion and compile-time type checks, further enhancing the developer experience.

[Code 7-34] Adding Type to the @on Decorator

// Signature for @on('click')
function on<K extends keyof HTMLElementEventMap>(
  eventType: K
): <T extends (event: HTMLElementEventMap[K]) => void>(
  target: View,
  propertyKey: string,
  descriptor: TypedPropertyDescriptor<T>
) => void;

// Signature for @on(Toggled)
function on<E extends new (...args: any[]) => Event>(
  EventClass: E
): <T extends (event: InstanceType<E>) => void>(
  view: View,
  propertyKey: string,
  descriptor: TypedPropertyDescriptor<T>
) => void;

function on<K extends keyof HTMLElementEventMap>(eventType: K) {
  return function <T extends (e: any) => void>(
    viewPrototype: any,
    propertyKey: string,
    descriptor: TypedPropertyDescriptor<T>
  ) {
    const method: T = descriptor.value!;
    const onRender: () => void = viewPrototype.onRender;
    viewPrototype.onRender = function() {
      this.addEventListener(eventType, method);
      onRender.call(this);
    };
  };
}

[Code 7-34] shows how the @on decorator can take “an event name (a key of HTMLElementEventMap)” or “a custom event class” as a generic, letting it infer the event handler’s argument type automatically. For 'click', it infers MouseEvent; for Toggled, it infers the Toggled class. If there’s a mismatch, a compile-time error occurs, preventing runtime failures.

[Code 7-34]’s first signature, on<K extends keyof HTMLElementEventMap>, maps an event name to the correct event type, so 'click' implies MouseEvent, etc.

[Code 7-35] Inferring Argument Types from Event Names

class DeleteView extends View<object> {
  @on('click')
  private remove(e: MouseEvent) {
    // ...
  }
}

class MovableView extends View<object> {
  @on('click') // type error
  private move(e: KeyboardEvent) {
    // ...
  }
}

// Error details:
// Type (e: KeyboardEvent) => void is not assignable to type (event: MouseEvent) => void

DeleteView is fine because 'click' maps to MouseEvent. MovableView fails because 'click' implies MouseEvent, not KeyboardEvent.

The second signature, on<E extends new (...args: any[]) => Event>, maps a “custom event class → custom event instance” generically to ensure type safety for user-defined events.

[Code 7-36] Inferring Argument Types from a Custom Event Class

class MyItemView extends View<object> {
  @on(Toggled)
  private method(e: Toggled) {
    const toggle: Toggle = e.detail;
    const on: boolean = toggle.on;
  }
}

class MyItemView2 extends View<object> {
  @on(Toggled) // type error
  private method(e: FocusEvent) {
  }
}

// Error details:
// Type (e: FocusEvent) => void is not assignable to type (event: Toggled) => void

MyItemView can use e.detail from the Toggled event safely, but
MyItemView2 tries to treat the Toggled event as FocusEvent so it fails at
compile time.

Earlier, in section 2.3, we discussed LISP’s metaprogramming. Decorators are also a metaprogramming technique for modifying, inspecting, or extending “original code structures” (classes, methods, etc.) at runtime. Though code with the @ syntax might look simple, it effectively changes the original structure at compile time or runtime. In TypeScript, decorators insert code during transpilation, with the actual expansions occurring at runtime. This allows writing extra functionality (DI containers, validation, automatic routing, event wiring) in a declarative style.

In TypeScript, decorators primarily enhance the expressiveness of object-oriented code, but under the hood, the implementation uses first-class and higher-order functions—a multiparadigm mix. Additionally, writing typed decorators can catch errors at compile time, making your code more robust and maintainable.

TextSubmitView

If we had a component that let the user type text, then fired a TextSubmitted event with that text once Enter is pressed, implementing “add new Todo” in TodoPage would be trivial.

[Code 7-37] TextSubmitView

class TextSubmitted extends CustomEventWithDetail<string> {}

class TextSubmitView extends View<{ value?: string }> {
  override template() {
    return html`<input type="text" value="${this.data.value ?? ''}" />`;
  }

  @on('keypress')
  private keypress(e: KeyboardEvent) {
    if (e.code === 'Enter') {
      const input = e.target as HTMLInputElement;
      const detail = input.value.trim();
      if (detail) {
        this.dispatchEvent(TextSubmitted, { detail, bubbles: true });
        input.value = '';
      }
    }
  }
}

TextSubmitView is a component that fires a TextSubmitted event containing the user’s input when Enter is pressed:

1. TextSubmitted event class
A custom event class carrying a string as detail. The detail field holds the submitted input text.

2. TextSubmitView class
Inherits View<{ value?: string }>, rendering a text input field with an optional initial value. The user types text, and on pressing Enter, the keypress handler runs.

3. keypress method
Called on every keypress event. It checks e.code === 'Enter'; if so, it gets input.value, trims it, and if non-empty, dispatches the TextSubmitted event with that string in detail. Then it clears input.value so the user can type the next entry.

With that, a parent component like TodoPage can easily catch TextSubmitted to create a new Todo.

Adding a Helper Method to ListView

If ListView had an append(item) method to create an item view and add it to the DOM, implementing “create new Todo” in TodoPage would be simpler.

[Code 7-38] Adding append to ListView

abstract class ListView<IV extends View<object>> extends View<IV['data'][]> {
  // ... omitted ...

  append(item: IV['data']): this {
    const itemView = new this.ItemView(item);
    this.data.push(item);
    this.itemViews.push(itemView);
    this.element().append(itemView.render());
    return this;
  }
}

The append(...) method takes a new item object, creates an itemView instance with it, adds it to the internal arrays, and renders it:

1. Instantiate itemView
new this.ItemView(item), passing item.
2. Insert into the list data and the itemViews array
this.data.push(item) and this.itemViews.push(itemView), so the new item is recognized.
3. Insert the element into the DOM
this.element().append(itemView.render()) so it shows up immediately.
4. Support chaining
We return this so you can do method chaining.

Creating a New Todo

Using TextSubmitView and ListView’s append method, you can let the user type text and press Enter to add a new Todo. Example code in TodoPage:

[Code 7-39] TodoPage

class TodoPage extends View<Todo[]> {
  private listView = new TodoListView(this.data);

  private toggleListController = new ToggleListController(
    new CheckView(),
    this.listView,
    (itemView) => itemView.data.completed,
    (itemView, bool) => itemView.setCompleted(bool)
  );

  override template() {
    return html`
      <div>
        <div class="header">
          ${this.toggleListController.toggleAllView}
          ${new TextSubmitView({})}
        </div>
        <div class="body">
          ${this.listView}
        </div>
      </div>
    `;
  }

  @on(TextSubmitted)
  private append({ detail: title }: TextSubmitted) {
    const todo: Todo = { title, completed: false };
    this.listView.append(todo);
    this.toggleListController.syncToggleAllView();
  }
}

1. We decorate append with @on(TextSubmitted) to register it as an event handler.
2. When the user presses Enter in TextSubmitView, it fires TextSubmitted.
3. append({ detail: title }: TextSubmitted) handles it.
4. We create a new Todo object { title, completed: false }.
5. We call this.listView.append(todo), adding it to the list.
6. If all items were previously completed, the newly added item is not completed, so we call this.toggleListController.syncToggleAllView() to uncheck the header’s toggle.

So the user can type a new task into the text field at the top, press
Enter, and see the item appear in the Todo list. We can also mark items
complete individually or all at once. This is a simple but fairly complete
Todo app structure.

[x] Register a new Todo item
[x] Mark items as completed
[x] Mark all items as completed at once
[ ] Filter to see only incomplete or only completed items

SegmentControlView

Now we want to add “filter items to see only incomplete or only completed” to our Todo app. If we had a reusable UI component that displays filter options and fires an event when the user selects a filter, we could add the filter feature easily.

SegmentControlView is suitable. It’s essentially a list view managing multiple SegmentItemViews (like “All,” “Active,” “Completed” filter options). It renders them and tracks which segment is currently selected.

[Code 7-40] SegmentControlView

type Segment = {
  title: string;
  value?: string;
  selected?: boolean;
};

class SegmentSelected<T extends Segment = Segment> extends CustomEventWithDetail<T> {}

class SegmentItemView<T extends Segment> extends View<T> {
  override template({ selected, title }: T) {
    return html`
      <button class="${selected ? 'selected' : ''}">${title}</button>
    `;
  }
}

class SegmentControlView<T extends Segment> extends ListView<SegmentItemView<T>> {
  ItemView = SegmentItemView;

  selectedIndex: number;

  // Omitted below for brevity (in GitHub repository only)
  constructor(data: T[], selectedIndex?: number) {
    super(data);
    this.selectedIndex = selectedIndex ?? Math.max(
      0, this.data.findIndex((segment) => segment.selected)
    );
    this.data[this.selectedIndex].selected = true;
  }

  override onRender() {
    this.delegate('click', SegmentItemView<T>, (e, itemView) => {
      if (itemView.data === this.selectedSegment()) return;
      itemView.element().classList.add('selected');
      itemView.data.selected = true;
      this.selectedSegmentView().element().classList.remove('selected');
      this.selectedSegmentView().data.selected = false;
      this.selectedIndex = this.itemViews.indexOf(itemView);
      this.dispatchEvent(
        SegmentSelected, { detail: this.selectedSegment(), bubbles: true }
      );
    });
  }

  private selectedSegmentView() {
    return this.itemViews[this.selectedIndex];
  }
  // Omitted code ends here

  selectedSegment() {
    return this.data[this.selectedIndex];
  }
}

const filterView = new SegmentControlView(
  [
    { title: 'All', value: 'all' },
    { title: 'Active', value: 'active' },
    { title: 'Completed', value: 'completed' }
  ],
  1 // The second option ("Active") is selected by default
);

const segment: Segment = filterView.selectedSegment();
console.log(segment);
// { title: 'Active', value: 'active', selected: true }

console.log(filterView.toHtml());
// <div class="SegmentControlView">
//   <button>All</button>
//   <button class="selected">Active</button>
//   <button>Completed</button>
// </div>

SegmentControlView takes a Segment[] and a selectedIndex for which option is initially selected. The example passes three filter options (All, Active, Completed), with the second (Active) initially selected. Then it logs the result of selectedSegment() and shows the rendered HTML.

Below is an example of using SegmentControlView in a TodoPage.

[Code 7-41] TodoPage

class TodoPage extends View<Todo[]> {
  // (1) copy the array, separating from the original data
  private listView = new TodoListView([...this.data]);

  private toggleListController = new ToggleListController(
    new CheckView(),
    this.listView,
    (itemView) => itemView.data.completed,
    (itemView, bool) => itemView.setCompleted(bool)
  );

  // (2) filterView using SegmentControlView
  private filterView = new SegmentControlView([
    { title: 'All', value: 'all' },
    { title: 'Active', value: 'active' },
    { title: 'Completed', value: 'completed' }
  ]);

  // (3) a simple getter for the selected filter
  private get filterState() {
    return this.filterView.selectedSegment();
  }

  override template() {
    return html`
      <div>
        <div class="header">
          ${this.toggleListController.toggleAllView}
          ${new TextSubmitView({})}
        </div>
        <div class="body">
          ${this.listView}
          <div class="filter">${this.filterView}</div>
        </div>
      </div>
    `;
  }

  @on(TextSubmitted)
  private append({ detail: title }: TextSubmitted) {
    const todo: Todo = { title, completed: false };
    this.data.push(todo); // (4) add the new Todo to the page’s main data
    // (5) if the filter is "completed", don't add it to the screen
    if (this.filterState.value !== 'completed') {
      this.listView.append(todo);
      this.toggleListController.syncToggleAllView();
    }
  }

  // (6) called whenever the filterView selection changes
  @on(SegmentSelected)
  private refresh() {
    const todos = this.data.filter(todo =>
      this.filterState.value === 'all'
        ? true
        : this.filterState.value === 'completed'
          ? todo.completed
          : !todo.completed,
    );
    this.listView.set(todos); // the set method is to be implemented in ListView
    this.toggleListController.syncToggleAllView();
  }
}

Comparing with [Code 7-39], the changes are:

1. Copy the array ([...this.data]) when creating TodoListView, separating it from the original data for consistent filter logic.
2. Create a filterView using SegmentControlView with 'All', 'Active', and 'Completed'.
3. Add a simple getter (filterState) to access the current selection.
4. When adding a new Todo, we push it to the main this.data as well.
5. If the filter is 'completed', we don’t show the newly added item on screen. If 'all' or 'active', we do this.listView.append(todo).
6. The refresh() method runs on SegmentSelected, re-filtering this.data into todos and calling this.listView.set(todos) to update the screen. Then syncToggleAllView() adjusts the “select all” checkbox in the header.

ListView must implement a set method that receives the entire new Todo[] data and re-renders the items. The method removes the previous items, creates new item views, and updates the screen.

[Code 7-42] Adding set to ListView

abstract class ListView<IV extends View<object>> extends View<IV['data'][]> {
  // ... omitted ...

  set(items: IV['data'][]): this {
    this.data.length = 0;                                                        // (1)
    this.itemViews.length = 0;                                                   // (2)
    this.element().innerHTML = '';                                               // (3)
    this.data.push(...items);                                                    // (4)
    this.itemViews.push(...this.data.map(item => new this.ItemView(item)));      // (5)
    this.element().append(...this.itemViews.map(itemView => itemView.render())); // (6)
    return this;                                                                 // (7)
  }
}

This method:

1. Clears the existing list data.
2. Clears the itemViews array.
3. Removes all rendered elements from the DOM.
4. Adds the new items to the data.
5. Creates a new ItemView for each data item, storing them in itemViews.
6. Appends those item views to the DOM.
7. Returns this for chaining.

Test the filter logic again; it should work as intended.

One more detail: If you’re on the “Active” tab, and you complete a
Todo, it should disappear from the list. Likewise, in the “Completed” tab,
uncompleting a Todo should remove it. But in “All,” it remains visible.

Thanks to the ListView set method, you can handle this quickly. Just
attach @on(Toggled) to refresh() in TodoPage so that toggling an item
triggers a re-filter:

[Code 7-43] TodoPage

class TodoPage extends View<Todo[]> {
  // ... omitted ...

  @on(Toggled) // fires on select-all or on toggling a Todo
  @on(SegmentSelected)
  private refresh() {
    const todos = this.data.filter(todo =>
      this.filterState.value === 'all'
        ? true
        : this.filterState.value === 'completed'
          ? todo.completed
          : !todo.completed,
    );
    this.listView.set(todos);
    this.toggleListController.syncToggleAllView();
  }
}

Optimizing DOM Updates via a Heuristic Diff

Currently, ListView’s set logic always removes everything and re-renders from scratch. This is easy to implement but can be inefficient if many items could be reused (we’re replacing the entire DOM list).

We can introduce a simple heuristic to compare the old list (this.data) and the new list (items) from the front. If the references match, we leave them; if not, we remove or insert only the differing items. It doesn’t always produce a minimal edit, but if changes are partial, we can avoid a full re-render for improved efficiency.

A heuristic is an approach that doesn’t guarantee the globally optimal solution but can quickly find a “good enough” solution based on rules (experience) or partial logic. For example, computing minimal edit distances or an LCS might be complex to implement or overkill, so this simpler approach can be good enough—especially if item order doesn’t change drastically.

[Code 7-44] set that Updates Only Some Items

abstract class ListView<IV extends View<object>> extends View<IV['data'][]> {
  // ... omitted ...

  /**
   * Attempt a partial update by only inserting/removing items that differ,
   * comparing the old items with the new items.
   */
  set(items: IV['data'][]): this {
    let i = 0, j = 0;

    // quick lookup map for old items
    const oldItemsMap = new Map(
      this.data.map(item => [item, true])
    );

    // Compare from the front
    while (i < this.data.length && j < items.length) {
      const oldItem = this.data[i];
      const newItem = items[j];

      if (oldItem === newItem) {
        // same reference, keep it
        i++;
        j++;
        continue;
      }

      // if newItem was in the old array => remove oldItem
      if (oldItemsMap.has(newItem)) {
        this.itemViews[i].element().remove();
        this.itemViews.splice(i, 1);
        this.data.splice(i, 1);
      } else {
        // not in old array => insert newItem
        const oldItemView = this.itemViews[i];
        const newItemView = new this.ItemView(newItem);

        oldItemView.element().before(newItemView.render());
        this.itemViews.splice(i, 0, newItemView);
        this.data.splice(i, 0, newItem);

        i++;
        j++;
      }
    }

    // remove leftover old items
    while (i < this.data.length) {
      const oldItemView = this.itemViews[i];
      oldItemView.element().remove();
      this.itemViews.splice(i, 1);
      this.data.splice(i, 1);
      // i remains the same
    }

    // append leftover new items
    while (j < items.length) {
      const newItem = items[j];
      const newItemView = new this.ItemView(newItem);
      this.itemViews.push(newItemView);
      this.element().append(newItemView.render());
      this.data.push(newItem);
      j++;
    }

    return this;
  }
}

[Code 7-44] compares the old array (this.data) and the new array
(items) from the front, only inserting or removing what’s necessary:

1. oldItemsMap
We build a map from the old array for quick membership checks,
e.g. oldItemsMap.has(newItem).
2. Two Pointers (i, j)
We pair this.data[i] and items[j] in ascending order.
3. If newItem is found in oldItemsMap => remove oldItem
Since newItem already exists somewhere later in the old array, we
expect to reuse its existing view when the pointers reach it, so we
remove the current oldItem instead of inserting a duplicate.
4. Else Insert
If newItem is not in the old array, we insert it before the oldItem’s
position in the DOM.
We push it into this.itemViews and this.data too, shifting the
oldItem to the right.
5. Remove leftover old items
Any old items that remain after the main loop must be removed.
6. Append leftover new items
Any new items that remain are appended to the end.
7. Return this for chaining.
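The same diff can be exercised without a DOM by recording the operations it would perform. The sketch below is our own reconstruction (applySet and Op are hypothetical names, not from the book's code), mirroring the two-pointer logic above on plain number arrays:

```typescript
// One DOM-free pass of the two-pointer diff: mutates `data` toward `items`
// and returns the list of operations a ListView would have performed.
type Op =
  | { type: 'remove'; index: number }
  | { type: 'insert'; index: number; item: number }
  | { type: 'push'; item: number };

function applySet(data: number[], items: number[]): Op[] {
  const ops: Op[] = [];
  const oldItems = new Set(data); // quick membership checks, like oldItemsMap
  let i = 0, j = 0;

  while (i < data.length && j < items.length) {
    if (data[i] === items[j]) { i++; j++; continue; } // same item => keep

    if (oldItems.has(items[j])) {
      // newItem appears later in the old array => drop the current old item
      ops.push({ type: 'remove', index: i });
      data.splice(i, 1);
    } else {
      // brand-new item => insert it before the current old item
      ops.push({ type: 'insert', index: i, item: items[j] });
      data.splice(i, 0, items[j]);
      i++; j++;
    }
  }
  while (i < data.length) { ops.push({ type: 'remove', index: i }); data.splice(i, 1); }
  while (j < items.length) { ops.push({ type: 'push', item: items[j] }); data.push(items[j]); j++; }
  return ops;
}
```

For example, updating [1, 2, 3] to [2, 3, 4] performs only one removal and one push, because 2 and 3 are reused in place.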

After code changes, you can test the filter. It should still work, possibly
with fewer DOM changes when items remain the same. You can log how
often items are reused vs. replaced, verifying the partial updates are
working.

[Code 7-44a] Checking Reuse Count

1 if (oldItem === newItem) {
2 // same reference => reuse
3 console.log('Reused');
4 i++;
5 j++;
6 continue;
7 }

Counting how many times 'Reused' is logged shows whether the partial
update is effectively skipping new DOM creation.

This is an example of mixing OOP-style state management (data,
itemViews) with functional pieces and imperative code, a genuinely
multiparadigm approach. We maintain arrays and pointers for an efficient
partial update. A purely functional approach might be more cumbersome
here: although it could help avoid errors from pointer increments, the
imperative style is simpler for array indexing. Both paradigms have
strengths in different scenarios, and mixing them is often beneficial.

Of course, techniques like i++, j++, if statements, and state changes
are prone to mistakes and require plenty of practice. In Chapters 1–5, we
solved various problems using functional or list processing techniques,
which significantly reduced such errors—this is a major advantage. Many
problems can be solved very naturally with list processing, and in such
cases, list processing is more suitable than an imperative approach.
However, just because a paradigm is particularly appealing doesn’t mean
we should force it onto a problem where it doesn’t fit. Some problems are
better suited to object-oriented approaches, others to functional
programming, and still others to imperative styles—or even a combination
of multiple paradigms. Practicing this multiparadigm approach will provide
developers with exceptional problem-solving skills.

7.4 Building a Todo App, Part 3
We’ve completed all the tasks on our Todo app feature list:

[x] Registering a new Todo item
[x] Marking items as completed
[x] Marking all items as completed at once
[x] Filtering so you see only incomplete or only completed items

This time, we’ll introduce how applying the State Pattern to the Todo
app can make the program more extensible and flexible.

Making It Flexible with the State Pattern

The State Pattern is a technique in which an object behaves
differently depending on its internal state. That is, the object runs
specialized logic (method implementations) for the current state (e.g.,
“Active,” “Completed,” etc.), reducing conditionals and improving code
readability.

Previously, we used fixed strings like 'all', 'active', 'completed'
to handle filter branching. Now we’ll introduce a separate state object
called FilterState to manage the filter logic more flexibly. This allows us
to easily add or handle new filters or logic—just by creating a new state
(object) containing the logic (predicate)—and use it in TodoPage.

[Code 7-45] Applying the State Pattern

1 interface FilterState {
2 title: string;
3 predicate: (todo: Todo) => boolean;
4 }
5
6 class TodoPage extends View<Todo[]> {
7 private listView = new TodoListView([...this.data]);
8
9 private toggleListController = new ToggleListController(
10 new CheckView(),
11 this.listView,
12 (itemView) => itemView.data.completed,
13 (itemView, bool) => itemView.setCompleted(bool)
14 );
15
16 // (1) Pass filterState[] to the SegmentControlView constructor
17 private filterView = new SegmentControlView(
18 [
19 {title: 'All', predicate: () => true},
20 {title: 'Active', predicate: todo => !todo.completed},
21 {title: 'Completed', predicate: todo => todo.completed}
22 ] as FilterState[]
23 );
24
25 // (2) addFilterState method
26 addFilterState(filterState: FilterState) {
27 this.filterView.append(filterState);
28 }
29
30 // (3) The result type of this.filterView.selectedSegment()
31 // is automatically inferred as FilterState
32 private get filterState(): FilterState {
33 return this.filterView.selectedSegment();
34 }
35
36 override template() {
37 return html`
38 <div>
39 <div class="header">
40 ${this.toggleListController.toggleAllView}
41 ${new TextSubmitView({})}
42 </div>
43 <div class="body">
44 ${this.listView}
45 <div class="filter">${this.filterView}</div>
46 </div>
47 </div>
48 `;
49 }
50
51 @on(TextSubmitted)
52 private append({detail: title}: TextSubmitted) {
53 const todo: Todo = {title, completed: false};
54 this.data.push(todo);
55 // (4) Previously `if (this.filterState.value !== 'completed') { ... }`
56 if (this.filterState.predicate(todo)) {
57 this.listView.append(todo);
58 this.toggleListController.syncToggleAllView();
59 }
60 }
61
62 @on(Toggled)
63 @on(SegmentSelected)
64 private refresh() {
65 // const todos = this.data.filter(todo =>
66 // this.filterState.value === 'all'
67 // ? true
68 // : this.filterState.value === 'completed'
69 // ? todo.completed
70 // : !todo.completed,
71 // );
72 // (5) Replacing the above multi-branch logic with:
73 const todos = this.data.filter(this.filterState.predicate);
74 this.listView.set(todos);
75 this.toggleListController.syncToggleAllView();
76 }
77 }
78
79 export function main() {
80 const todos: Todo[] = [
81 {title: 'Coding', completed: false},
82 {title: 'Dinner', completed: true},
83 {title: 'Test', completed: false},
84 ];
85
86 const todoPage = new TodoPage(todos);
87 window.todoPage = todoPage;
88
89 document.querySelector('#body')!.append(
90 todoPage.render()
91 );
92 }

1. Passing filterState[] to SegmentControlView constructor

We create a filterState[] array of objects in the form { title, predicate }.
predicate is a function specifying which Todo items pass this filter
condition.
We encapsulate the filter logic in a single function.
We then pass this filterState[] array to SegmentControlView.

2. addFilterState method

A method for dynamically adding new filter states.

3. get filterState(): FilterState

Because we pass filterState[] to the constructor in step (1),
SegmentControlView can infer that
this.filterView.selectedSegment() returns FilterState.
So we can easily call this.filterState.predicate().

4. Using predicate in the append method

Previously we had if (this.filterState.value !== 'completed') { ... }.
Now we switch to if (this.filterState.predicate(todo)) { ... },
so if the filter changes, we only need to change predicate.
If the user is on the 'completed' filter, predicate(todo) returns false
so it’s not added to the screen; if 'all' or 'active', it returns true and
is displayed instantly.

5. Simplified filter logic (refresh method)

Instead of multi-branch logic like filterState.value === 'all' ?
true : filterState.value === 'completed' ? ..., we do
const todos = this.data.filter(this.filterState.predicate);.
We call toggleListController.syncToggleAllView(); to sync with
the overall toggle checkbox, so the new filter structure and the old
toggle logic integrate smoothly.
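Stripped of the view layer, the core of this refactor is that filters become data rather than branches. The standalone sketch below is ours (the byTitle and visible helpers are hypothetical), distilling steps (1)–(5) without any DOM:

```typescript
interface Todo { title: string; completed: boolean; }

// Each filter state bundles its own logic; no string comparisons anywhere.
interface FilterState {
  title: string;
  predicate: (todo: Todo) => boolean;
}

const states: FilterState[] = [
  { title: 'All', predicate: () => true },
  { title: 'Active', predicate: todo => !todo.completed },
  { title: 'Completed', predicate: todo => todo.completed },
];

// Adding a new behavior is a data change, not a new branch:
states.push({
  title: 'ASAP',
  predicate: todo => todo.title.includes('ASAP') && !todo.completed,
});

const todos: Todo[] = [
  { title: 'Coding', completed: false },
  { title: '[ASAP] Meeting', completed: false },
  { title: 'Dinner', completed: true },
];

const byTitle = (title: string): FilterState =>
  states.find(s => s.title === title)!;

// The "refresh" step reduces to a single filter call.
const visible = (title: string): Todo[] => todos.filter(byTitle(title).predicate);
```

Selecting a state and filtering is all the branching that remains; visible('ASAP') yields only the incomplete todo whose title contains 'ASAP'.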

Benefits of Applying the State Pattern

In this code structure, “filter states” are each an object representing
filter logic (predicate). This yields:

1. Minimal Conditionals

Instead of doing this.filterState.value !== 'completed', we do
if (this.filterState.predicate(todo)) { ... }.
Because “filter states” can be recognized by a function (predicate)
instead of text strings, the code has fewer branching conditionals.

2. Flexible Requirement Changes

If you want to apply a completely different set of filters or remove the
“completed” filter, you only adjust the filter-state objects. No need to
search the code for !== 'completed'.
You can add or remove filters with a line or two, making the code
highly adaptable to requirement changes.

3. Code Extensibility
We no longer do multi-branch logic for 'all', 'active',
'completed'; we just call
this.data.filter(this.filterState.predicate). Additional filters
don’t complicate the code.
Because each filter’s logic is encapsulated in a state object, adding or
changing filters has minimal friction and is easy to maintain.

In short, applying the state pattern to store filter logic in a state object
means far fewer code modifications when features or logic change,
simplifying filter expansions and maintenance.

Dynamically Changing Code and Software Behavior at Runtime

We just saw how flexible it is to apply the state pattern. In [Code 7-45],
we set window.todoPage = todoPage; so we can access the todoPage
instance from the browser console. Now open the browser console and
enter [Code 7-46], then watch what happens on screen.

[Code 7-46] Entering Code in the Browser Console

1 todoPage.addFilterState({
2 title: 'ASAP',
3 predicate: todo => todo.title.includes('ASAP') && !todo.completed
4 });

Typing [Code 7-46] in the browser console immediately adds an
“ASAP” filter to SegmentControlView. That’s because calling
todoPage.addFilterState() calls this.filterView.append(...) to add
a new filter to SegmentControlView.

Then, if you type something like "[ASAP] Urgent Meeting" in the
Todo input box, you’ll see that the new item appears under the 'ASAP' tab
(as long as it’s not completed). Once you mark the item completed, it
disappears from the 'ASAP' filter list.

Hence, the state pattern allows you to expand the program’s
functionality and behavior at runtime. You can add a new filter (“state
object”) without modifying the code at all, just specifying a new object with
a predicate. This kind of runtime extensibility is valuable in scenarios like
“user changing editor modes in real time, with each mode having different
logic,” or “music players with presets you can add or remove, with
immediate effect,” enabling user-driven customization and offering big
flexibility to the software features.

Further Expanding with State Objects

So far, FilterState has only handled “which items to filter out,” but
we can broaden its responsibilities to handle all “preprocessing,” like
custom sorting. For instance, beyond simple filter logic (Active, Completed,
etc.), we can let FilterState handle advanced tasks like sorting or random
shuffling, building a more extensible program.

[Code 7-47] Expanding the Role of FilterState

1 interface FilterState {
2 title: string;
3 predicate: (todo: Todo) => boolean;
4 filter(todos: Todo[]): Todo[];
5 }
6
7 class FilterState {
8 constructor(
9 public title: string,
10 public predicate: (todo: Todo) => boolean
11 ) {}
12
13 filter(todos: Todo[]) {
14 return todos.filter(this.predicate);
15 }
16 }
17
18 const shuffleFilterState: FilterState = {
19 title: 'Shuffle',
20 predicate: (todo) => !todo.completed,
21 filter(todos: Todo[]) {
22 return shuffle(todos.filter(this.predicate));
23 }
24 }
25
26 function shuffle<T>(array: T[]): T[] {
27 for (let i = array.length - 1; i > 0; i--) {
28 const j = Math.floor(Math.random() * (i + 1));
29 [array[i], array[j]] = [array[j], array[i]];
30 }
31 return array;
32 }

1. interface FilterState and class FilterState

The interface is for type declaration, specifying title, predicate, filter.
The class is the actual implementation with a constructor that sets
title, predicate, and a filter(todos: Todo[]) method. So we can
define various objects that meet FilterState using either the class or
just an object literal.

2. TodoPage can directly call filter instead of predicate

Instead of todos.filter(filterState.predicate);, we can do
filterState.filter(todos).

3. shuffleFilterState

A special filter that first filters out completed items, then randomly
shuffles them (Fisher–Yates shuffle).
So a user can see incomplete Todos in random order, e.g. to pick which
task to do next at random.
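Beyond shuffling, any post-processing fits the same shape: filter with predicate, then transform the result. A hypothetical sorting state (our example, not from the book) might look like:

```typescript
interface Todo { title: string; completed: boolean; }

interface FilterState {
  title: string;
  predicate: (todo: Todo) => boolean;
  filter(todos: Todo[]): Todo[];
}

// Hypothetical state: show incomplete todos, sorted alphabetically by title.
const sortedFilterState: FilterState = {
  title: 'A-Z',
  predicate: todo => !todo.completed,
  filter(todos: Todo[]) {
    // Array.prototype.filter returns a fresh array, so sorting it in place
    // never mutates the caller's data.
    return todos
      .filter(this.predicate)
      .sort((a, b) => a.title.localeCompare(b.title));
  },
};
```

Because it satisfies the same FilterState shape, it can be passed to SegmentControlView or addFilterState exactly like shuffleFilterState.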

Below is the updated code using the new FilterState.

[Code 7-48] Using the Updated FilterState

1 class TodoPage extends View<Todo[]> {
2 private listView = new TodoListView([...this.data]);
3
4 private toggleListController = new ToggleListController(
5 new CheckView(),
6 this.listView,
7 (itemView) => itemView.data.completed,
8 (itemView, bool) => itemView.setCompleted(bool)
9 );
10
11 // (1) Updated FilterState creation
12 private filterView = new SegmentControlView(
13 [
14 new FilterState('All', () => true),
15 new FilterState('Active', todo => !todo.completed),
16 shuffleFilterState, // (2) shuffle functionality
17 new FilterState('Completed', todo => todo.completed),
18 ] as FilterState[]
19 );
20
21 addFilterState(filterState: FilterState) {
22 this.filterView.append(filterState);
23 }
24
25 private get filterState(): FilterState {
26 return this.filterView.selectedSegment();
27 }
28
29 override template() {
30 return html`
31 <div>
32 <div class="header">
33 ${this.toggleListController.toggleAllView}
34 ${new TextSubmitView({})}
35 </div>
36 <div class="body">
37 ${this.listView}
38 <div class="filter">${this.filterView}</div>
39 </div>
40 </div>
41 `;
42 }
43
44 @on(TextSubmitted)
45 private append({detail: title}: TextSubmitted) {
46 const todo: Todo = {title, completed: false};
47 this.data.push(todo);
48 // (3) Instead of if statements:
49 this.filterState.filter([todo]).forEach(todo => {
50 this.listView.append(todo);
51 this.toggleListController.syncToggleAllView();
52 });
53 }
54
55 @on(Toggled)
56 @on(SegmentSelected)
57 private refresh() {
58 // (4) we use filterState.filter(...) for simpler, more extensible code
59 const todos = this.filterState.filter(this.data);
60 this.listView.set(todos);
61 this.toggleListController.syncToggleAllView();
62 }
63 }

Here’s a recap:

1. Instead of just a single function predicate, FilterState encapsulates
more advanced “filter” logic.
2. shuffleFilterState is a special filter that randomly shuffles
incomplete Todos, offering a random selection mechanism.
3. You can even remove if statements by using filter([todo]).
4. We can do this.filterState.filter(this.data) in refresh(). Our
code is simpler but more powerful.

Now the user can choose “Shuffle” to get a random incomplete Todo
ordering.

A Multiparadigm Object-Oriented Design

Below is the final cleaned-up code for TodoPage, after removing comments:

[Code 7-49] TodoPage

1 class TodoPage extends View<Todo[]> {
2 private listView = new TodoListView([...this.data]);
3
4 private toggleListController = new ToggleListController(
5 new CheckView(),
6 this.listView,
7 (itemView) => itemView.data.completed,
8 (itemView, bool) => itemView.setCompleted(bool)
9 );
10
11 private filterView = new SegmentControlView(
12 [
13 new FilterState('All', () => true),
14 new FilterState('Active', todo => !todo.completed),
15 new FilterState('Completed', todo => todo.completed),
16 ] as FilterState[]
17 );
18
19 addFilterState(filterState: FilterState) {
20 this.filterView.append(filterState);
21 }
22
23 private get filterState(): FilterState {
24 return this.filterView.selectedSegment();
25 }
26
27 override template() {
28 return html`
29 <div>
30 <div class="header">
31 ${this.toggleListController.toggleAllView}
32 ${new TextSubmitView({})}
33 </div>
34 <div class="body">
35 ${this.listView}
36 <div class="filter">${this.filterView}</div>
37 </div>
38 </div>
39 `;
40 }
41
42 @on(TextSubmitted)
43 private append({detail: title}: TextSubmitted) {
44 const todo: Todo = {title, completed: false};
45 this.data.push(todo);
46 this.filterState.filter([todo]).forEach(todo => {
47 this.listView.append(todo);
48 this.toggleListController.syncToggleAllView();
49 });
50 }
51
52 @on(Toggled)
53 @on(SegmentSelected)
54 private refresh() {
55 const todos = this.filterState.filter(this.data);
56 this.listView.set(todos);
57 this.toggleListController.syncToggleAllView();
58 }
59 }

Reading [Code 7-49], we see how we’ve built the Todo app in a
concise, intuitive way. Each component does its job, interacting organically.
The components have low coupling, can be reused widely, and remain easy
to maintain or extend.

TextSubmitView: The input field that fires an event on Enter.
SegmentControlView: Manages which filter is selected, firing an
event on filter changes.
FilterState: Decides how to filter (and possibly do further
preprocessing like sorting).
ToggleListController, CheckView: Manages toggling items
(including a “select all” UI) and the logic for marking items
completed.
ListView: Provides common logic for rendering lists (append, set)
used by TodoListView and SegmentControlView.
TodoPage: Assembles these components, places them in the template,
and orchestrates event communication among them.

This code structure exemplifies the benefits of object-oriented design.
Each class’s role and responsibility is clear, so if we need to add or replace
something (like a new filter or a different UI component), we can modify
just that part. The logic isn’t all in one place but is distributed thoughtfully,
giving good readability, maintainability, and extensibility even as the code
grows. Clear role separation is a hallmark of object-oriented design that
helps keep complexity manageable for larger applications.

TodoPage is completed by assembling multiple components into a
fairly full-featured app. The code is short and concise, while also well-
optimized. Each component has its own rendering strategy, works
reactively, and avoids unnecessary side effects or re-renders. We’ve used
the strategy pattern for reusing complex logic, the state pattern for runtime-
modifiable programs, and so forth, showing a flexible, extensible design.

As noted in Section 7.1, if you harness the language paradigms and
structural design effectively, you can gradually build reactive code even
without a large external framework. Indeed, we achieved reactive rendering
using only a few straightforward components like ToggleView, ListView,
SegmentControlView, abstracting away DOM manipulation while
preserving performance. As we keep adding components, developer
productivity grows, and we can maintain stable performance even with
many features.

Today’s TypeScript/JavaScript, as a multiparadigm language
supporting both OOP and functional styles, can reduce dependence on large
libraries or ephemeral trends. By fully using the language, you can create a
robust software legacy for the long term while staying up to date with
modern Web APIs.

Leaning on the language’s capabilities and lowering external library
dependencies also means you get more help from the type system, and you
can maximize the benefits of both OOP and FP. It also means you can apply
the design patterns and fundamental techniques that have evolved for
decades, leading to strong and maintainable solutions. By employing object
orientation, functional programming, classes, iterable iterators, list
processing, and design patterns, you can build and combine components
where each part is a self-contained, reactive software entity with its own
optimized rendering logic.
This achieves performance and productivity. Even if the application grows
large, it remains simpler to maintain and extend.

If you don’t want to reinvent the wheel, the wheel you build must have
performance and portability so it can be reused widely. In other words, it
needs smooth rendering performance and must load easily anywhere.
Building on fundamental technology with minimal compile and packaging
overhead means you can reuse code freely in any JavaScript runtime,
achieving portability across frontends, backends, and other environments,
which is the hallmark of a well-crafted “wheel.”

Using a multiparadigm object-oriented approach, we can build
software architectures that meet both performance and productivity needs,
ensuring flexibility and extensibility for large-scale apps. Over time, such
an approach yields benefits for a long-term project’s stability,
maintainability, and added features. Moreover, using fundamental language
features grows the team’s and individual’s know-how. Freed from heavy
library or framework dependence, you can more readily expand to other
platforms, while your technical capabilities become that much stronger. We
hope this conveys the potential of a programming world deeply rooted in
these fundamental technologies.

7.5 Asynchronous Flows with UI, Promise, and Class
In section 4.1, we saw some simple examples—([Code 4-6] Using
Promise.race for Timeout Control, [Code 4-7] Controlling the UI
According to Response Speed)—where Promise was used as a value to
manage basic UI rendering logic. In this chapter, we’ll extend that idea
further and explore how to use Promise for communication between UI
components.
Using Promises for Custom Alerts and Confirms

1) alert vs. a Custom AlertView

Below is an example of manually implementing an “alert” dialog box.
The native alert() has certain UI/UX constraints and limited extensibility.
We address those constraints by creating a custom AlertView, then
compare its operation with the native alert().

[Code 7-50] Using alert vs. AlertView

1 import { html, View, on } from "rune-ts";
2
3 class AlertView extends View<{ message: string }> {
4 override template() {
5 return html`
6 <div>
7 <div class="message">${this.data.message}</div>
8 <button>OK</button>
9 </div>
10 `;
11 }
12
13 @on('click', 'button')
14 private close() {
15 this.element().remove();
16 }
17
18 static open(message: string) {
19 const alertView = new AlertView({ message });
20 document.body.append(alertView.render());
21 }
22 }
23
24 function test() {
25 alert('Process complete.'); // Blocks progress until the user clicks OK
26 console.log('alert');
27
28 AlertView.open('Process complete.'); // Does NOT block
29 console.log('AlertView'); // Runs immediately
30 }
31
32 test();

Native alert(): A blocking call that halts progress until the user
closes the dialog, so code after alert() (like console.log('alert'))
won’t run until the user clicks OK.
AlertView.open(...): Non-blocking. It displays the dialog, but
subsequent code (console.log('AlertView')) executes immediately.

Hence, while AlertView.open shows the dialog, it doesn’t provide the
same blocking behavior as alert(). Let’s see how to make AlertView act
more like alert() (“waiting until the user confirms”) by incorporating
Promise.

2) Applying a Promise to AlertView

Below is an AlertView that uses a Promise internally, so from the
caller’s viewpoint, it behaves similarly to a blocking alert() until the user
clicks the “OK” button.

[Code 7-51] AlertView Using Promises

1 class AlertView extends View<{ message: string }> {
2 private resolve!: () => void; // (1)
3 readonly promise = new Promise<void>(res => this.resolve = res); // (2)
4
5 override template() {
6 return html`
7 <div>
8 <div class="message">${this.data.message}</div>
9 <button>OK</button>
10 </div>
11 `;
12 }
13
14 @on('click', 'button') // (3)
15 private close() {
16 this.element().remove();
17 this.resolve();
18 }
19
20 static open(message: string) { // (4)
21 const view = new AlertView({ message });
22 document.body.append(view.render());
23 return view.promise;
24 }
25 }
26
27 async function test() {
28 alert('Process complete.'); // Native blocking
29 console.log('alert');
30
31 await AlertView.open('Process complete.'); // Also “blocks” until user clicks OK
32 console.log('AlertView');
33 }
34
35 void test();

1. resolve field

Inside AlertView, we store a function (resolve) to call when the user clicks OK.
Declared private resolve!: () => void;, the ! is a non-null
assertion to indicate we’ll assign a value later.

2. promise member

new Promise<void>(res => (this.resolve = res)) creates a
Promise right away and stores res in this.resolve.
So external code can await AlertView.open(...) and not
proceed until the user clicks OK.

3. @on('click', 'button') private close()

Event handler for the “OK” button.
Removes the alert from the screen via this.element().remove().
Calls this.resolve(); to fulfill the promise.

4. static open(message: string)

A static method creating an AlertView, inserting it into the DOM,
and returning its promise.
The caller can do await AlertView.open(...) for a blocking-
like effect.
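The trick of creating a Promise up front and capturing its resolve for a later event is reusable on its own. A minimal sketch (the Deferred name is ours; the book inlines this pattern directly in the view class):

```typescript
// A Promise whose resolve is exposed so that some later event handler
// (e.g. a button click) can fulfill it.
class Deferred<T> {
  resolve!: (value: T) => void; // assigned synchronously by the executor below
  readonly promise = new Promise<T>(res => (this.resolve = res));
}

// Usage: the caller awaits; an event handler resolves.
async function demo(): Promise<string> {
  const d = new Deferred<string>();
  // Simulate the user clicking "OK" on the next tick.
  setTimeout(() => d.resolve('confirmed'), 0);
  return d.promise;
}
```

AlertView's resolve field and promise member are exactly this pair; ConfirmView is the same shape with a boolean payload.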

Similar to how TaskRunner in Section 6.2 orchestrated UI with
asynchronous logic, here AlertView uses a Promise to replicate alert()-
like blocking. Meanwhile, it remains fully customizable in the UI.
Promises as Return Values, Mediating Communication

Below is a confirm-like component featuring two buttons: “OK” and
“Cancel.” Whichever is clicked returns true or false via Promise.

[Code 7-52] ConfirmView

1 class ConfirmView extends View<{ message: string }> {
2 private resolve!: (bool: boolean) => void; // (1)
3 readonly promise = new Promise<boolean>(res => this.resolve = res); // (2)
4
5 override template() {
6 return html`
7 <div>
8 <div class="message">${this.data.message}</div>
9 <button class="cancel">Cancel</button>
10 <button class="confirm">OK</button>
11 </div>
12 `;
13 }
14
15 @on('click', 'button')
16 private close(e: MouseEvent) {
17 const button = e.currentTarget as HTMLButtonElement;
18 this.element().remove();
19 this.resolve(button.matches('.confirm'));
20 }
21
22 static open(message: string) {
23 const view = new ConfirmView({ message });
24 document.body.append(view.render());
25 return view.promise;
26 }
27 }
28
29 // (4)
30 async function test2() {
31 if (confirm('Delete this?')) {
32 console.log('Deleted');
33 } else {
34 console.log('Canceled');
35 }
36
37 if (await ConfirmView.open('Process complete.')) {
38 console.log('Deleted');
39 } else {
40 console.log('Canceled');
41 }
42 }
43
44 void test2();

1. resolve

resolve!: (bool: boolean) => void; is assigned after construction.
The user’s choice (true=OK, false=Cancel) is returned to the
caller.

2. promise

new Promise<boolean>(res => (this.resolve = res)) so
external code can do await ConfirmView.open(...) and get true or false.

3. Event handler

Clicking “OK” or “Cancel” calls this.resolve(true/false).

4. Call

if (await ConfirmView.open(...)) {...} else {...}, just like confirm.

While AlertView used a promise mainly to replicate blocking,
ConfirmView uses it to pass “OK” or “Cancel” values back. In short, using
Promise is a convenient pattern for letting UI components communicate
data (the user’s choice) asynchronously.

Selecting Friends for a Group Chat

Using Promise to pass data among UI components is handy in many
scenarios. In [Code 7-53], we combine the previously built ListView,
CheckView, ToggleListController, AlertView, ConfirmView, etc., to
implement a “Create Group Chat” feature.

[Code 7-53] Group Chat Creation

1 type User = {
2 id: number;
3 name: string;
4 };
5
6 type Chat = {
7 users: User[];
8 };
9
10 class UserItemView extends View<User> {
11 override template() {
12 return html`
13 <div>${this.data.name}</div>
14 `;
15 }
16 }
17
18 class UserListView extends ListView<UserItemView> {
19 ItemView = UserItemView;
20 }
21
22 class CheckUserItemView extends View<User> {
23 checkView = new CheckView();
24
25 override template() {
26 return html`
27 <div>
28 ${this.checkView}
29 ${new UserItemView(this.data)}
30 </div>
31 `;
32 }
33 }
34
35 class CheckUserListView extends ListView<CheckUserItemView> {
36 ItemView = CheckUserItemView;
37 }
38
39 class UserPickerView extends View<User[]> {
40 private resolve!: (users: User[]) => void;
41 readonly promise = new Promise<User[]>(res => this.resolve = res);
42
43 private toggleListController = new ToggleListController(
44 new CheckView(),
45 new CheckUserListView(this.data),
46 (itemView) => itemView.checkView.data.on,
47 (itemView, bool) => itemView.checkView.setOn(bool)
48 );
49
50 override template() {
51 return html`
52 <div>
53 <div class="header">
54 ${this.toggleListController.toggleAllView}
55 <h2>Pick Friends</h2>
56 <button class="done">OK</button>
57 </div>
58 <div class="body">
59 ${this.toggleListController.listView}
60 </div>
61 </div>
62 `;
63 }
64
65 @on('click', 'button.done')
66 private done() {
67 this.element().remove();
68 this.resolve(
69 this.toggleListController.listView.itemViews
70 .filter(({ checkView }) => checkView.data.on)
71 .map(({ data }) => data)
72 );
73 }
74
75 static open() {
76 const users: User[] = [
77 { id: 1, name: 'Luka' },
78 { id: 2, name: 'Stephen' },
79 { id: 3, name: 'Nikola' },
80 { id: 4, name: 'Kevin' },
81 ];
82 const view = new UserPickerView(users);
83 document.body.append(view.render());
84 return view.promise;
85 }
86 }
87
88 class ChatCreationView extends View<Chat> {
89 private userListView = new UserListView(this.data.users);
90
91 override template() {
92 return html`
93 <div>
94 <button class="pick">Pick Participants</button>
95 ${this.userListView}
96 <button class="create">Start Chat</button>
97 </div>
98 `;
99 }
100
101 @on('click', 'button.pick')
102 private async pickUsers() {
103 const users = await UserPickerView.open();
104 this.userListView.set(users);
105 }
106
107 @on('click', 'button.create')
108 private async create() {
109 if (this.isEmpty()) {
110 await AlertView.open('No friends selected.');
111 } else {
112 if (await ConfirmView.open(this.startMessage)) {
113 alert('Enjoy your chat!');
114 // new ChatView(this.data)...
115 } else {
116 this.userListView.set([]);
117 }
118 }
119 }
120
121 isEmpty() {
122 return this.data.users.length === 0;
123 }
124
125 get startMessage() {
126 const names = this.data.users.map(({ name }) => name);
127 return `Start chat with ${names.join(', ')}?`;
128 }
129 }
130
131 export function main() {
132 document.body.append(
133 new ChatCreationView({ users: [] }).render()
134 );
135 }

1. UserPickerView.open()

Displays a checkbox list of friends and returns the selected ones
when the user clicks “OK.”

2. ChatCreationView

A “Pick” button: Opens UserPickerView.
A “Start Chat” button: Uses AlertView and ConfirmView to confirm the user’s decision.

We see asynchronous data passed among components, each managing
its own UI logic. Classes, promises, lists, and checkboxes tie together into a
“group chat creation” flow.

This code completes the book’s code examples. Thank you for
reading! We hope this helps developers who enjoy building software,
contributing (in a small way) to the broader developer community.
1. You can see the full implementation of SegmentControlView in
the book’s official GitHub repository.↩︎

