AP Midterm Notes

The document covers the fundamentals of functional programming, emphasizing the differences between imperative, declarative, and functional paradigms, with a focus on Python's capabilities. It discusses key concepts such as pure functions, higher-order functions, decorators, currying, closures, and the use of iterables and generators. Additionally, it explains concurrency and parallelism in programming, highlighting the differences between threads and processes in Python.

Uploaded by Alba

Lecture 1 - Functional Programming I
Date @February 1, 2022

Type Lecture

Completed

The Functional Programming Paradigm


Python was designed to be used mostly as an imperative language.

In the imperative programming paradigm, the code contains commands that specify step-by-step changes to the internal state or the environment of the program.

The state is what we keep in memory, i.e. variables and their values.

Changes in state add, modify, or remove variables.

Functions in Python are also stored as variables, with a name and executable code.

The imperative paradigm focuses on how to arrive at a solution:

we specify the sequence of instructions that will produce the desired output

these instructions will change the state of the computation by adding, modifying, and removing information

This paradigm is also called procedural.

Imperative programming can also be combined with other compatible paradigms, such as the Object-Oriented Paradigm.

Declarative Programming
Languages like Prolog use this

In declarative programming, we describe the environment with rules and facts.

We then specify the goal.

The interpreter then derives the necessary computations to reach the goal, given the rules and facts. The programmer does not need to write the instructions to arrive at the goal.

Functional Programming
a subtype of declarative programming

puts emphasis on functions, particularly on:

design principles to create functions without side-effects

use of function composition

To avoid side-effects, functions should:

produce an output that solely depends on the input

not change the environment outside the function

The Environment

Variables defined in Python live in an environment or namespace.

A namespace is a dictionary that maps identifiers (variables or function names) to objects.

everything in Python is an object: functions too!

Python has three namespaces: local, global, and built-in.

When resolving a name, Python looks for the object in these three namespaces, in this order.
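The lookup order can be seen directly; a minimal sketch (the names `x` and `show` are illustrative, not from the lecture):

```python
x = "global"          # lives in the global namespace

def show():
    x = "local"       # shadows the global x inside this function's local namespace
    return x

print(show())         # "local"  — resolved in the local namespace first
print(x)              # "global" — at module level, the global namespace is used
print(len("abc"))     # 3        — len is resolved in the built-in namespace
print(type(globals()))  # namespaces really are dictionaries
```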

Functional programming aims to minimize the dependence on the state of this environment:

produce an output that solely depends on the input (arguments)

functions should not rely on the current state of the outer environment (globals)

not change the environment outside the function

globals should not be modified

locals can be created and modified

arguments should remain unchanged once the function has finished

Purely Functional Languages



To reduce side-effects, functional programming restricts or forbids state changes of the environment.

In this way, the computation is predictable: functions don't depend on global values, and don't change them. They only depend on their input.

Languages that are strict in forbidding state changes are considered purely functional, for example, Haskell.

Is Python purely functional?

It does not forbid state changes.

Python was conceived with OOP in mind.

So Python is clearly not a purely functional programming language; we could even say that Python is not a functional language at all.

However, Python has some built-in functions and features to support some amount of functional programming.

In practice, Python is a multiparadigm language.

Pure Functions
The core of functional programming is the use of pure functions.

As in math, pure functions map the input values to output values.

In particular, the function takes values from the domain and returns values in
the range.

Pure functions do not depend on the state of the environment, and don’t modify
it: operate only locally & avoid global statements

Being independent of the state of the environment, they prevent side-effects
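The contrast can be sketched in a few lines (the names `pure_add` and `impure_add` are illustrative):

```python
total = 0

def impure_add(n):
    global total      # depends on and mutates the environment: a side-effect
    total += n
    return total      # result varies with prior calls

def pure_add(a, b):
    return a + b      # output depends solely on the inputs

print(pure_add(2, 3))   # always 5, no matter when or how often it is called
```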

⇒ Lambda Functions
one-line anonymous functions, restricted to a single expression

defines an expression → it can be evaluated to a value

can’t create or modify any variables
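A quick sketch of the syntax and a typical use as a throwaway argument:

```python
square = lambda x: x * x      # a single expression; no statements allowed
print(square(4))              # 16

# lambdas shine as short key functions for higher-order built-ins
print(sorted(["bb", "a", "ccc"], key=lambda s: len(s)))  # ['a', 'bb', 'ccc']
```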

Higher-Order Functions
functions that accept a function as an argument or return a function as a value

needed in FP, since a program is created by composing (pure) functions

For this, we need a language that supports first-class functions.

In Python, functions are first-class objects: they can be manipulated like any other object.

with Lambda

Python has a number of built-in functions that use other functions: map() , filter() , min() , max() , sorted() , …

map → applies a function to each item of an iterable

filter → keeps the items of an iterable for which a predicate function returns True
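The examples from the slides aren't reproduced here; a minimal sketch of both built-ins (plus a key function for max):

```python
nums = [3, 1, 4, 1, 5]

doubled = list(map(lambda x: 2 * x, nums))          # apply a function to every item
odds = list(filter(lambda x: x % 2 == 1, nums))     # keep items where the predicate is True

print(doubled)  # [6, 2, 8, 2, 10]
print(odds)     # [3, 1, 1, 5]

# min/max/sorted accept a key function as well
print(max(nums, key=lambda x: -x))  # 1 — the key negates, so the smallest wins
```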



reduce → applies another function to each pair of items in an iterable, accumulating each partial result.

This technique is known as reduction or folding.

Items are evaluated from left to right (fold-left).
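A minimal sketch of a fold-left with functools.reduce:

```python
from functools import reduce

# the two-argument function is applied pair by pair, left to right:
# ((((1+2)+3)+4)+5) = 15
print(reduce(lambda acc, x: acc + x, [1, 2, 3, 4, 5]))  # 15

# an optional third argument seeds the accumulator
print(reduce(lambda acc, x: acc * x, [1, 2, 3, 4], 10))  # 240
```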

map-reduce →
way of composing functions

These functions are useful because they can simplify complex loops, can be chained (function composition), and are very general (many computations can be expressed as reductions).



Lecture 2 - Functional Programming II
Date @February 15, 2022

Type Lecture

Completed

Decorators
take an existing function and add some functionality (or “decoration”) over it

decorators add functionality to a function without modifying its internal code, thanks to function composition

Thanks to this, we don't change the original function, and we keep the same interface.

The two phases when using decorators


1. Wrap — at definition time, the decorator takes the original function and returns a
new wrapped function.

2. Evaluate — at evaluation time, the wrapping function usually calls the original
function. The wrapping function can pre-process the argument values or post-
process the return value (or both).

Occasionally we may need a decorator that only calls the original function
under some conditions, or which calls it multiple times.

Decorators are useful in scenarios like logging, security, handling incomplete data, etc.
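A minimal logging-decorator sketch showing the two phases (the names `logged` and `add` are illustrative):

```python
import functools

def logged(func):
    # Wrap phase: runs once, at definition time, and returns the new function.
    @functools.wraps(func)            # keep the original name and docstring
    def wrapper(*args, **kwargs):
        # Evaluate phase: runs on every call; here we pre-process by logging.
        print(f"calling {func.__name__} with {args}")
        return func(*args, **kwargs)  # call the original function
    return wrapper

@logged
def add(a, b):
    return a + b

add(2, 3)   # prints "calling add with (2, 3)" and returns 5
```

The interface is unchanged: callers still use `add(a, b)` exactly as before.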

Currying
most multi-argument functions can be converted to a sequence of calls of functions with one argument

When we curry a function, we can use any of the intermediate functions.

g is a function “specialized” on printing sentences that start with “Hi”

This process is called currying (function composition)

From a functional programming perspective:

by separating one complex multi-argument function into single-argument functions, we have simpler components: functions only have one input and one output

functions are compatible with theoretical frameworks that only allow single-argument functions (e.g. lambda calculus)

Since currying relies on function composition, we can actually use decorators to implement it.

Library → pymonad
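A hand-rolled sketch of a two-argument curry decorator (the lecture's own implementation and pymonad's helper may differ; `greet` and `say_hi` are illustrative names):

```python
def curry2(func):
    # turn f(a, b) into f(a)(b); a minimal sketch for two-argument functions
    def outer(a):
        def inner(b):
            return func(a, b)
        return inner
    return outer

@curry2
def greet(greeting, name):
    return f"{greeting}, {name}!"

say_hi = greet("Hi")           # an intermediate, "specialized" function
print(say_hi("Alba"))          # Hi, Alba!
print(greet("Hello")("Bob"))   # Hello, Bob! — the full curried call chain
```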

Lecture 2 - Functional Programming II 2


Partial
we can call multi-argument functions with fewer arguments than they require, and obtain a specialized function with some arguments fixed.

Example: doublesum is now a more specialized function that always multiplies by 2.

Partial may return a (possibly multi-argument) function or a value. Note that this is different from currying!
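The slide's doublesum definition isn't reproduced here; a sketch assuming a two-argument multiply function:

```python
from functools import partial

def multiply(a, b):
    return a * b

# fix the first argument to 2; the name doublesum follows the lecture's example
doublesum = partial(multiply, 2)
print(doublesum(5))   # 10
print(doublesum(7))   # 14
```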

Closures
a function that remembers values from its enclosing scope

Example without a closure: pi is a global variable.

If we instead move pi into the outer function, outer_func will still print pi is 3.14.

The pi variable is a non-local variable for the inner function ⇒ this is a closure.

The closure binds the value to the inner function.
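A sketch of the pi example (the slide's exact code isn't reproduced; inner_func returns the string so the binding is easy to inspect):

```python
def outer_func():
    pi = 3.14                      # non-local variable for the inner function

    def inner_func():
        return f"pi is {pi}"       # pi is resolved in the enclosing scope

    return inner_func

f = outer_func()
print(f())   # pi is 3.14 — the closure keeps pi alive after outer_func returns
```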

Why are closures useful?

One reason is to avoid the use of globals.

Example without closures: a global counter dictionary that any function can modify.

Alternative with closures: the counter dictionary lives in the enclosing function.

This code avoids the use of global and keeps the counter dictionary hidden.

Due to this, side effects are avoided (other functions cannot modify the counter dictionary).
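A sketch of the closure-based counter (the slide's exact code isn't shown; `make_counter` is an illustrative name):

```python
def make_counter():
    counts = {}                       # hidden state: not a global

    def count(word):
        counts[word] = counts.get(word, 0) + 1
        return counts[word]

    return count

counter = make_counter()
print(counter("spam"))   # 1
print(counter("spam"))   # 2
print(counter("eggs"))   # 1
# no other function can reach counts, so outside code cannot corrupt it
```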

Summary



Lecture 3 - Functional Programming III
Date @March 8, 2022

Type Lecture

Completed

Iterables
⇒ act as containers of other objects, and can be used to access contained objects
individually

examples are lists, tuples, dictionaries, strings, etc.

one way to iterate over iterables is the use of for ... in ... expressions

Another way to create iterables is with list comprehension:

a useful and compact way to create a list based on an operation or condition applied to another iterable

similar to list comprehension, we have set and dictionary comprehension
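The comprehension slides aren't reproduced here; a minimal sketch of all three forms:

```python
nums = [1, 2, 3, 4, 5]

squares = [x * x for x in nums]                      # list comprehension
even_squares = [x * x for x in nums if x % 2 == 0]   # with a condition
unique_lens = {len(w) for w in ["a", "bb", "cc"]}    # set comprehension
lengths = {w: len(w) for w in ["a", "bb"]}           # dict comprehension

print(squares)       # [1, 4, 9, 16, 25]
print(even_squares)  # [4, 16]
print(unique_lens)   # {1, 2}
print(lengths)       # {'a': 1, 'bb': 2}
```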

What is required for objects to be iterable?

When we iterate over iterable objects, we are implicitly calling the iter() built-in function with the iterable object as an argument.

iter(x) checks if object x has an __iter__ method.

__iter__ returns an iterator for the iterable.

(For backwards compatibility, if the previous check fails, iter will still try to find a __getitem__ method, which would attempt to create an iterator starting from index 0.)
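A minimal sketch of a custom iterable (the class `Countdown` is illustrative, not from the lecture):

```python
class Countdown:
    # an iterable: __iter__ returns an iterator over n, n-1, ..., 1
    def __init__(self, n):
        self.n = n

    def __iter__(self):
        return iter(range(self.n, 0, -1))  # delegate to a built-in iterator

print(list(Countdown(3)))   # [3, 2, 1]

# iter() is what "for ... in ..." calls under the hood
it = iter(Countdown(2))
print(next(it))             # 2
```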

Iterators
An iterator is an object that contains a __next__ method

the built-in function next(x) will call x.__next__

This method returns the next item stored in a collection.

When there are no more items left, this method will raise a StopIteration exception, which will be caught by the iterating expression (e.g. the for … in expression).

the order of access depends on the type of collection →

for ordered types it is the same as the insertion order, e.g. lists, strings, tuples, and dictionaries (since Python 3.7)

for unordered types it is unknown to the programmer, e.g. sets
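Driving an iterator by hand shows the protocol directly:

```python
it = iter([10, 20])   # a list iterator
print(next(it))       # 10 — next(x) calls x.__next__()
print(next(it))       # 20
try:
    next(it)          # exhausted: raises StopIteration
except StopIteration:
    print("done")     # a for-loop catches this exception for us
```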

Built-in Iterators
There are many such as:

range, enumerate, map, filter, zip …



Generators
Generators are one type of iterator: they are created by defining functions that incorporate a yield statement.

Generators were first proposed in a Python Enhancement Proposal (PEP 255), to provide:

“a kind of function that can return an intermediate result ("the next value") to its caller, but maintaining the function's local state so that the function can be resumed again right where it left off”

Generators also have an __iter__ and a __next__ method, which are created
automatically
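A minimal generator-function sketch (the name `firstn` is illustrative):

```python
def firstn(n):
    # a generator function: yield hands a value back but keeps local state alive
    i = 0
    while i < n:
        yield i        # execution pauses here and resumes on the next next()
        i += 1

gen = firstn(3)
print(next(gen))       # 0
print(next(gen))       # 1 — i survived between calls
print(list(firstn(4))) # [0, 1, 2, 3] — __iter__ and __next__ come for free
```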

Generator Expression

Another way to create generators is using a generator expression (introduced in PEP 289).

Their behavior is equivalent to calling an anonymous generator function (i.e. a function with yield).

The syntax is very similar to that of list comprehensions (though note the parentheses instead of square brackets!).

The usages are also very similar, but there is a crucial difference: generators are lazy.
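The syntactic difference and the laziness, side by side:

```python
nums = [1, 2, 3, 4]

squares_list = [x * x for x in nums]   # list comprehension: built eagerly
squares_gen = (x * x for x in nums)    # generator expression: note the parentheses

print(squares_list)        # [1, 4, 9, 16]
print(next(squares_gen))   # 1 — values are produced one at a time, on demand
print(sum(squares_gen))    # 29 — consumes the remaining 4 + 9 + 16
```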

Lazy Computation
Lazy Evaluation




Most evaluations in Python are eager (also known as strict):

they go from left to right, and evaluate all elements

However, some operators use lazy evaluation, and therefore stop the evaluation when the outcome is known:

the or and and logical operators, conditional (if-else) expressions, and generator functions



So here, lazy computation with generators is more efficient, since we are not stripping all the lines of line_list and storing them in memory.
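The slide's line_list code isn't reproduced here; a sketch of the eager-vs-lazy contrast it describes (the sample data stands in for a file's lines):

```python
lines = ["  spam \n", "  eggs \n", "  ham  \n"]   # stands in for line_list

# eager: strips every line and stores all results in memory at once
stripped_list = [line.strip() for line in lines]

# lazy: a generator strips one line at a time, only when asked
stripped_gen = (line.strip() for line in lines)

print(stripped_list)        # ['spam', 'eggs', 'ham']
print(next(stripped_gen))   # 'spam' — nothing else has been processed yet
```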



Lecture 4 - Concurrency and Parallelism
Date @March 22, 2022

Type Lecture

Completed

Motivation

Parallelism and Concurrency


There is a subtle difference between the two:

A parallel program uses a multiplicity of computational hardware (e.g., several processor cores) to perform a computation more quickly. The multiple processors execute some parts of the code at the same time.

Parallelism is an implementation detail, and it is concerned with efficiency: a program that uses parallel computation will be completed faster than the equivalent program without parallel computation.

A program with concurrency uses multiple threads of control. The behavior of the program is such that it appears as if the threads are executed at the same time; that is, the user sees interleaved outputs. However, a concurrent program may or may not use parallel processing: it can either be executed on a single processor through interleaved execution, or on multiple processors with actual parallelism.

Concurrent programming is a conceptual notion that is concerned with code structure.

It is related to modularity: different parts of the program can be separated into different threads, which typically interact with different agents (e.g. user, network).

In concurrent programming, threads run concurrently (i.e. not sequentially, one after the other), but may do so either with interleaved or with parallel computations.

Examples

Threads vs. Processes


To achieve concurrency, we need to indicate which parts of our code should run
concurrently.

This is a form of modularity: we find which parts of our code can be isolated and
have their own execution (even if they eventually share some information)

We have two ways of doing this: threads and processes.

Process



A process is a program in execution. Your main program will become a process
when you run it, and it can spawn additional processes.

A process has its own memory space. For this reason, it is an independent unit,
and it can be executed in parallel if we have multiple processors.

However, communication between processes may be slow.

Threads
Your processes can launch threads. These are sometimes known as lightweight
processes.

Threads don't have their own memory space: they use the memory space of the
process that created them.

This means threads of the same process share memory. This makes the
communication between them faster than communication between processes,
as they simply communicate by accessing shared memory.
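A minimal sketch of shared memory between threads (the names `counter` and `work` are illustrative; the lock guards the shared state against races):

```python
import threading

counter = {"value": 0}            # shared memory: visible to all threads
lock = threading.Lock()

def work():
    for _ in range(10_000):
        with lock:                # serialize updates to the shared dictionary
            counter["value"] += 1

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                      # wait for all threads to finish

print(counter["value"])   # 40000 — all four threads updated the same object
```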

Concurrency in Python
Both threads and processes are useful for creating concurrency in Python. However,
there are crucial differences.

When using threads, the use of multiple CPU cores is limited.

This is caused by the Global Interpreter Lock (GIL) implemented in the Python interpreter, which makes sure only one thread executes Python code at a time.

Concurrency is therefore delivered by switching rapidly between threads.

Multiprocessing gets around the GIL, as it spawns entirely separate processes.

However, exchanging data between processes is less efficient. Furthermore, the creation and destruction of processes is less efficient than for threads.

