This document provides an overview of debugging, testing, and profiling Python code specifically in the context of Civil Engineering. It covers techniques such as using print statements, IDE debuggers, and the pdb module for debugging, as well as the use of assertions for sanity checks and error handling with try-except blocks. By mastering these practices, engineers can ensure their code is robust, efficient, and free of critical errors during computations.
Debugging, Testing, and Profiling Python Code in Civil Engineering

Authors: Faisal-ur-Rehman, Department of Civil Engineering, University of Engineering and Technology Peshawar

Prerequisites: Basic Python syntax (variables, loops, functions), elementary Civil Engineering concepts.

Introduction

In Civil Engineering computations, ensuring correctness and efficiency of code is just as
important as formulating the equations themselves. This chapter introduces core Python
practices to help you write robust programs: debugging techniques, using assertions for
sanity checks, handling errors safely, writing unit tests, and profiling code performance. We
will explore each topic with examples rooted in Civil and Structural Engineering – from
structural analysis calculations to fluid dynamics simulations. By the end, you should be
able to confidently troubleshoot code, validate engineering assumptions with assert,
handle exceptions gracefully, test functions for correctness, and optimize your code for
speed.

1. Debugging Techniques in Python

Debugging is the process of identifying and fixing errors (bugs) in your code. Even
seasoned engineers spend considerable time debugging to ensure calculations and
simulations run correctly. In Python, there are several ways to debug your programs:

• Print statements for quick-and-dirty inspection of values.

• IDE debuggers (Integrated Development Environment tools) that allow breakpoints and step-by-step execution.

• The built-in pdb interactive debugger for stepping through code in a console.

Let’s explore each technique and how it can apply to engineering code.

1.1 Using Print Statements for Debugging

One of the simplest ways to debug Python code is to insert print() statements at strategic
points in your program. By printing variable values or intermediate results, you can trace
the execution flow and catch where things go wrong (Understanding Debugging & Testing
Code In Python - LEARNCSDESIGN). For example, if you have a loop computing the total
load on a structure, you might print the cumulative sum at each iteration to see if it
matches expectations:

loads = [10, 15, 20, -5]  # kN (the -5 might be an input error representing an upward load)
total_load = 0
for i, L in enumerate(loads, start=1):
    total_load += L
    print(f"After load {i}, total_load = {total_load} kN")

If the output shows an unexpected negative total after the fourth load, you’ve identified a
potential bug (a negative load that might be invalid in this context). While print debugging
is straightforward, it can become tedious for large programs or complex loops
(Understanding Debugging & Testing Code In Python - LEARNCSDESIGN). Excessive print
statements also clutter the output, so remember to remove or comment them out after
fixing the bug.

Tip: Use formatted strings to label your debug output clearly (as in the example above). This
makes it easier to pinpoint which part of your code produced each printout.

1.2 Using an IDE Debugger and Breakpoints

Modern development environments like Visual Studio Code, PyCharm, or Spyder provide
built-in debuggers. These tools let you set breakpoints (markers in your code where
execution will pause), so you can inspect variables and step through code interactively.
This is especially useful in engineering computations where multiple functions or modules
interact (e.g., a structural analysis program where loads, geometry, and solver routines all
interconnect).

How it works: You run your script in debug mode and the program halts at breakpoints. At
each halt, you can usually see the current state:

• Variables/Watch window: list of current variables and their values.

• Call stack: the chain of function calls that led to the current point.

• Step controls: buttons to step over to the next line, into a function call, or out of the
current function.

For example, suppose you wrote a function deflection(length, load, youngs_modulus) that sometimes returns an incorrect negative deflection. You suspect a problem in how sub-functions compute the moment of inertia. By setting a breakpoint inside that sub-function, you can run the debugger and pause execution right before the result is returned. You can then examine whether the intermediate variables (length, load, etc.) have expected values.
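As a concrete sketch of this scenario, the deflection function and its moment-of-inertia helper might look like the following. The helper name, the rectangular section dimensions, and the central-point-load formula are our own assumptions for illustration, not from the text:

```python
def moment_of_inertia(width, height):
    # Rectangular cross-section: I = b*h^3 / 12 (mm^4 if inputs are in mm)
    return width * height**3 / 12

def deflection(length, load, youngs_modulus, width=100, height=200):
    # Mid-span deflection of a simply supported beam under a central point load:
    # delta = P * L^3 / (48 * E * I)
    I = moment_of_inertia(width, height)  # <- set an IDE breakpoint on this line
    return load * length**3 / (48 * youngs_modulus * I)

# With a breakpoint on the line computing I, the debugger pauses before the
# return, letting you confirm that length, load, and I hold expected values.
print(deflection(3000, 10000, 200000))  # ≈ 0.42 mm for these assumed numbers
```

Pausing inside moment_of_inertia (rather than at the return of deflection) lets you verify the sub-function's inputs before the final result is assembled.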

Figure: A snapshot of a Python debugger interface (Winpdb) showing breakpoints, call stack, and variables. (Image: File:Winpdb-1.3.6.png, Wikimedia Commons)

Graphical debuggers provide a convenient visual way to step through code line by line
and check internal state. Unlike print statements, you don’t need to modify your code to
inspect it. You can hover over a variable to see its value or add watches for specific
expressions (e.g., total_load, current_deflection). This significantly speeds up debugging
for complex codebases.

1.3 Using the pdb Module (Interactive Debugger)

Python’s built-in Python Debugger, pdb, offers an interactive debugging environment in the
console (Understanding Debugging & Testing Code In Python - LEARNCSDESIGN). It’s a
powerful tool included in the standard library, meaning you can use it anywhere without
additional installation (How To Use the Python Debugger | DigitalOcean). With pdb, you can
set breakpoints, inspect variables, execute code line by line, and evaluate expressions on
the fly.

How to invoke pdb:

• Insert import pdb; pdb.set_trace() at the location in your code where you want to
pause (breakpoint). When Python executes this line, it will drop into the debugger
prompt.

• As of Python 3.7, you can achieve the same by simply writing breakpoint(). This
built-in function is an easier way to enter the debugger (it calls pdb by default).

Consider this example in the context of a civil engineering calculation:

import pdb

def compute_stress(force, area):
    pdb.set_trace()  # Pause here to inspect variables before the division
    stress = force / area  # σ = F/A
    return stress

# Suppose we call the function with an unexpected area of 0 (which would cause an error)
stress_val = compute_stress(1000, 0)

When the code runs and hits pdb.set_trace(), it will pause execution and open an
interactive prompt (Pdb). At this prompt, you can enter commands to investigate:

• p variable_name to print the value of a variable (p force or p area).

• n (next) to execute the next line (the division stress = force / area in this case).

• s (step) to step into function calls (not applicable here, since the next line contains no function call).

• c (continue) to resume normal execution until the next breakpoint (or program end).

In the above scenario, using pdb would immediately reveal that area is 0 before attempting
the division, helping you pinpoint the bug (division by zero error). You could then quit the
debugger (q command) and fix the code (for example, add a check to prevent zero area).

Some common pdb commands include:

• l (list source code around the current line).

• p (print a variable).

• pp (pretty-print a variable, useful for data structures).

• w (where – shows the stack trace of function calls).

• q (quit debugging, abort the program).

Using pdb can be a bit more complex than print statements, but it is extremely powerful for
deep debugging. It allows interactive exploration of your program’s state at any point in
execution (Understanding Debugging & Testing Code In Python - LEARNCSDESIGN) (How
To Use the Python Debugger | DigitalOcean). This is particularly helpful in long-running
engineering simulations, where manually adding prints for hundreds of iterations is
impractical. With pdb, you can break on a certain iteration (say, when iteration == 50) by
adding a conditional breakpoint:

if iteration == 50:
    breakpoint()  # enters pdb when iteration reaches 50

This lets you inspect why, for example, a numerical method might be diverging at a certain
step.
Summary: Debugging in Python can be done through simple prints, powerful IDE GUIs, or
the interactive pdb tool. Mastering these will save you countless hours by making it easier
to find and fix mistakes in your code. Always choose the method that best fits the task:

• Use prints for quick checks on small scripts.

• Use an IDE debugger for larger projects where you need to frequently inspect state.

• Use pdb when you need an interactive session or are working on a remote system or
environment without a GUI.

2. Using assert for Sanity Checks in Engineering Calculations

In engineering, we often make assumptions or expect certain conditions to hold true during
calculations. For instance, a computed load should not be negative, or the sum of
distributed loads should equal the total load applied. Python’s assert statement is a
convenient way to enforce and check these sanity checks in code.

2.1 What Are Assertions?

An assertion is a statement that a condition must be true at a certain point in the program.
In Python, the assert keyword is used as follows:

assert condition, "Optional error message if condition is False"

• If condition is True, the program does nothing and continues.

• If condition is False, the program raises an AssertionError with the specified message (or a default message if none is provided).

Assertions are primarily a debugging aid. They are used to catch unrecoverable errors –
situations that should "never happen" if the code is correct (Best practice for using assert?
- python - Stack Overflow). The philosophy is that if such a situation does occur, it indicates
a bug in the program logic (or an invalid assumption), and the program should halt
immediately (crashing fast and early) rather than proceeding with potentially corrupted
state (Best practice for using assert? - python - Stack Overflow).

2.2 Using Assertions in Civil Engineering Code

In Civil Engineering computations, assert can be used to validate assumptions. Here are
some examples:
• Physical constraints: Assert that a value is within a physically meaningful range
(e.g., a frequency is non-negative, a probability or coefficient is between 0 and 1,
etc.).

• Input validation: Assert that inputs satisfy certain relations (e.g., in structural
analysis, assert that the sum of support reactions equals total applied load as a
quick equilibrium check).

• Intermediate results: If you derive an intermediate formula that must hold, you can
assert it during development to catch mistakes.
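The equilibrium check mentioned above can be sketched in a couple of lines; the load and reaction values here are made up purely for illustration:

```python
# Quick equilibrium sanity check: sum of support reactions must equal
# the sum of applied loads (within floating-point tolerance).
applied_loads = [12.0, 8.0, 20.0]   # kN, downward (hypothetical values)
reactions = [22.0, 18.0]            # kN, as returned by some solver (hypothetical)

assert abs(sum(reactions) - sum(applied_loads)) < 1e-6, \
    "Equilibrium check failed: reactions do not balance applied loads"
```

If a solver bug ever produced unbalanced reactions, this one-line check would halt the program at the point of the error rather than letting a wrong result propagate.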

Example 1: Beam loading sanity check. Suppose we distribute a total load P_total (in kN)
evenly on a beam with n segments (for a simple model, each segment carries P_total/n).
We might assert that our distribution adds up correctly:

P_total = 100  # total load in kN
n = 5
segment_loads = [P_total / n] * n
assert abs(sum(segment_loads) - P_total) < 1e-6, "Loads do not sum up to total!"  # sanity check

Here, the assertion checks that the sum of segment_loads is essentially equal to P_total
(within a tiny tolerance for floating-point). If there’s a bug in how segment_loads is
constructed, this assert will catch it immediately.

Example 2: Cross-sectional area for stress calculation. When computing stress = Force/Area, the area must not be zero (and probably should be positive):

def compute_stress(force, area):
    assert area > 0, "Area must be positive and non-zero"
    return force / area

# Using the function:
stress = compute_stress(5000, 0)  # in N and mm^2, for example

Unless assertions are disabled (see Section 2.3), the above code will raise an AssertionError with the message "Area must be positive and non-zero" if area is 0 or negative. This immediately signals the issue (perhaps an uninitialized area or a missing unit conversion leading to 0).
Why not use exceptions here? For input validation in production code (especially for user
input), you might instead prefer to raise an exception (like ValueError) that can be caught
and handled. Assertions, by contrast, are mainly for developer sanity checks. They are
typically not intended to be caught; if an assertion fails, it indicates a serious issue in the
code logic.
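For contrast, a production-style version of the same check would raise a ValueError instead of asserting; this sketch (the name compute_stress_checked is ours, not from the text) survives python -O and lets callers handle the failure:

```python
def compute_stress_checked(force, area):
    # Explicit input validation: runs even under python -O, and the
    # resulting ValueError can be caught by the calling code.
    if area <= 0:
        raise ValueError("Area must be positive and non-zero")
    return force / area

try:
    compute_stress_checked(5000, 0)
except ValueError as e:
    print("Caught invalid input:", e)
```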

2.3 Important Considerations for assert

1. Assertions can be disabled: When Python runs in optimized mode (using the -O flag or the PYTHONOPTIMIZE environment variable), all assert statements are skipped. This means they won’t execute at all. So, never put essential logic or operations in an assert – they might not run in deployment. Use asserts only for checks that are needed during development/testing.

2. Don’t use asserts for data validation in production: As mentioned, if you need to
check user input or function arguments in a library, use explicit error handling (raise
exceptions) so that the calling code can handle them. Asserts are more appropriate
for catching internal errors and assumptions during development.

3. AssertionError: If an assert fails, Python raises an AssertionError. If uncaught, it will crash the program. You typically don’t catch assertion errors – if one occurs, you fix the underlying problem instead of handling the error in code.

4. Including messages: Always include a helpful message in your assert, describing what went wrong. This message will be shown in the AssertionError and greatly aids debugging. For example, assert x >= 0, "Coefficient x must be non-negative" is more informative than a bare assert x >= 0 (which would just say "AssertionError" without context).

Real-World Analogy: Think of assertions as the digital equivalent of a structural engineer’s “sanity checks” when reviewing hand calculations. For instance, after computing reactions for a beam, an engineer might quickly check that the sum of reactions equals the sum of loads (if not, something is off). In code, an assert can perform that check automatically at runtime.

3. Error Handling in Python: try, except, else, and finally

No matter how carefully you write your code, errors will occur. They might be due to invalid
inputs, numerical issues (like division by zero or overflow), or resources (like missing files).
In Python, errors during execution are reported as exceptions. We can handle exceptions
using try...except blocks to prevent program crashes and handle error conditions gracefully.
3.1 The Basics of try and except

A simple try/except structure looks like:

try:
    ...  # code that might raise an exception
except SomeExceptionType as e:
    ...  # code to handle the exception

• The code inside try is executed first.

• If no error occurs, the except block is skipped entirely (8. Errors and Exceptions —
Python 3.13.2 documentation).

• If an error does occur during the try block, execution jumps immediately to the
except block (skipping the rest of the try block) (8. Errors and Exceptions — Python
3.13.2 documentation).

• The except line can specify the type of exception to catch (e.g., ZeroDivisionError,
FileNotFoundError, etc.). If the exception type matches, the handler runs (8. Errors
and Exceptions — Python 3.13.2 documentation).

For example:

try:
    result = compute_stress(1000, 0)  # the plain F/A version from Section 1.3 (no assert)
    print(f"Stress = {result:.2f} MPa")
except ZeroDivisionError as err:
    print("Error computing stress:", err)

In this case, compute_stress(1000, 0) will raise a ZeroDivisionError. The except clause catches it, naming it err, and prints an error message. Without the try/except, the program would crash with a traceback. With try/except, we intercept the error and can take appropriate action (logging it, substituting an alternative calculation, asking the user for new input, etc.).

Catch broad vs specific: It’s best practice to catch specific exception types. A bare
except: will catch all exceptions, including ones you might not anticipate (even
KeyboardInterrupt from a Ctrl+C!). This can make debugging harder. So catch specific
errors or at least use except Exception: (which excludes system-exiting exceptions). For
instance, use except ValueError if you expect a conversion might fail, or multiple except
blocks for different errors.
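A small sketch of multiple specific except blocks, using a hypothetical helper of our own (parse_load) that converts user-supplied text to a load value:

```python
def parse_load(text):
    # Convert a user-supplied load value to float, handling two
    # distinct failure modes with separate except blocks.
    try:
        return float(text)
    except ValueError:
        # A string that isn't a number, e.g. "abc"
        print(f"Not a numeric load: {text!r}")
        return None
    except TypeError:
        # Wrong type entirely, e.g. None or a list
        print(f"Expected a string or number, got {type(text).__name__}")
        return None

parse_load("12.5")   # succeeds
parse_load("abc")    # caught by the ValueError handler
parse_load(None)     # caught by the TypeError handler
```

Because each handler names a specific exception type, an unanticipated error (say, a KeyboardInterrupt) still surfaces instead of being silently swallowed.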

3.2 else Clause in Try/Except

Python allows an optional else clause after all the except clauses (8. Errors and Exceptions
— Python 3.13.2 documentation). The else block runs only if no exception was raised in
the try block (8. Errors and Exceptions — Python 3.13.2 documentation). This is useful for
code that should run when the try succeeds, and you want to keep it separate from the try
block to avoid accidentally catching exceptions it might raise.

Pattern:

try:
    risky_operation()
except SomeError:
    handle_error()
else:
    do_something_if_no_error()

Why use else? It helps clarify the logic. For example, when working with files:

try:
    f = open('data.txt', 'r')
except FileNotFoundError:
    print("Could not open file")
else:
    data = f.read()
    print("File has", len(data), "characters")
    f.close()

Here, the else block executes only if the file was opened successfully (8. Errors and
Exceptions — Python 3.13.2 documentation). If we put the read/print inside the try, and
something in print("File has...", ...) raised an unexpected exception, our except
FileNotFoundError would not catch it (since it only catches file errors). The else ensures
that only file I/O is in the try, and our handling clearly separates file-not-found vs any other
errors (8. Errors and Exceptions — Python 3.13.2 documentation).

In engineering terms, use else for the part of the calculation that should proceed only when
preliminary steps have succeeded. For instance, try to fetch input data (from a database or
file), handle the case where data is missing, else if retrieval succeeded, proceed to
compute results.

3.3 The finally Clause

A finally block contains code that always needs to run, regardless of whether an exception
occurred or not (8. Errors and Exceptions — Python 3.13.2 documentation). It comes after
try/except/else. This is typically used for cleanup: releasing resources, closing files, etc., to
ensure no matter what happens, those actions occur.

Pattern:

try:
    perform_calculation()
except Exception as e:
    print("Calculation failed:", e)
else:
    print("Calculation succeeded.")
finally:
    print("Cleaning up, done with calculation.")

The finally block here will run whether the calculation raised an error or not. In the output,
you will see "Cleaning up..." in both success and failure cases (8. Errors and Exceptions —
Python 3.13.2 documentation). If an exception was raised and not caught, it will propagate
after the finally block executes (8. Errors and Exceptions — Python 3.13.2 documentation).
If an exception was caught, the program continues normally after the finally.

A common real-world use: ensuring a file or network connection is closed:

f = None
try:
    f = open('design_loads.csv')
    # ... process the file ...
    # maybe raise exceptions if data is invalid
except Exception as e:
    print("Error processing file:", e)
finally:
    if f:
        f.close()

Even if processing fails halfway, the finally ensures the file is closed, preventing resource
leaks.

Important: If both the try block and the finally block raise exceptions, the exception from finally supersedes the one from try. Similarly, a return inside finally overrides any return in try. In general, avoid return or raise statements in finally unless absolutely needed, to keep behavior clear (8. Errors and Exceptions — Python 3.13.2 documentation).
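The return-in-finally pitfall can be demonstrated in a few lines (a contrived example of our own):

```python
def confusing():
    try:
        return "from try"
    finally:
        return "from finally"  # this return wins, silently discarding the one in try

print(confusing())  # prints "from finally"
```

The value returned from the try block is silently discarded, which is exactly why returns in finally are best avoided.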

3.4 Raising Exceptions

We’ve seen catching exceptions, but you can also raise your own exceptions using the
raise statement. This is useful to signal an error condition in your function (especially for
invalid inputs, or if a computation fails some criteria).

For example, in a function computing the safety factor of a slope:

def safety_factor(tau_resisting, tau_driving):
    if tau_driving == 0:
        raise ValueError("Driving shear cannot be zero")
    return tau_resisting / tau_driving

try:
    FS = safety_factor(100, 0)  # This will raise our ValueError
except ValueError as e:
    print("Invalid input for safety factor:", e)


We explicitly raised a ValueError when an impossible scenario is encountered (driving
shear = 0, which would be division by zero and physically meaningless). The calling code
handles it via except. If not handled, it would terminate the program with a message.

You can raise built-in exceptions like ValueError, TypeError, etc., or define your own
exception classes (just subclass Exception) for more specific semantics. For instance, you
might define class StructuralFailure(Exception): pass and raise StructuralFailure("Beam
failed under load X") somewhere. This allows distinguishing different error types in except
clauses.
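A sketch of that custom-exception idea, paired with an illustrative capacity check (the check_beam function and its numbers are our own, not from the text):

```python
class StructuralFailure(Exception):
    """Raised when a member's demand exceeds its capacity (illustrative)."""

def check_beam(moment_demand, moment_capacity):
    # Hypothetical design check: demand must not exceed capacity.
    if moment_demand > moment_capacity:
        raise StructuralFailure(
            f"Beam failed: demand {moment_demand} kN·m > capacity {moment_capacity} kN·m")
    return True

try:
    check_beam(120, 100)
except StructuralFailure as e:
    print("Design check failed:", e)
```

Because StructuralFailure is its own class, an except StructuralFailure clause will not accidentally swallow unrelated ValueError or TypeError bugs.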

3.5 Example: End-to-End Error Handling

Let’s combine these in a somewhat realistic scenario:

Imagine a function that reads a file of load data (one load per line, in kN), computes the
total load, and returns the design load (say, factored by 1.6 for ultimate load). We want to
handle various errors: file not found, non-numeric data, etc.

def compute_design_load(filename):
    try:
        f = open(filename, 'r')
    except FileNotFoundError:
        raise FileNotFoundError(f"Input file '{filename}' not found")
    else:
        total_load = 0.0
        with f:  # ensure file closes after use
            for line_no, line in enumerate(f, start=1):
                try:
                    load = float(line.strip())
                except ValueError:
                    raise ValueError(f"Non-numeric load on line {line_no}")
                total_load += load
        return 1.6 * total_load  # apply load factor


Explanation:

• We first try to open the file. If it fails, we raise a new FileNotFoundError with a clearer
message. (We could have handled it in-place, but raising allows the caller to catch it
or the program to report it.)

• If file opens, we enter the else block. We then iterate through lines. For each line, we
attempt to convert it to float. If conversion fails (e.g., a line has "abc"), we raise a
ValueError with the line number information.

• Regardless of conversion success or failure, the file will be closed when exiting the
with f: block (which is a context manager, a safer way to handle file closing than
manual finally).

• If everything goes well, we return the factored total load.

• If any raise triggers, the function will exit immediately with that exception, which
could be caught by an outer try if needed.

Using this function:

try:
    design_load = compute_design_load('loads.txt')
except FileNotFoundError as e:
    print(e)
except ValueError as e:
    print("Data error:", e)
else:
    print(f"Design load = {design_load:.2f} kN")
finally:
    print("Load computation attempt finished.")

This structure cleanly separates concerns:

• Handling file issues vs data format issues with different except blocks.

• Using else for the success path (printing the result).

• Using finally to note that the attempt is done (which runs no matter what).
Figure: Flowchart of try/except/finally execution – the finally clause runs regardless of whether an exception occurs (Python try...except...finally Statement).

In summary, Python’s error handling constructs (try, except, else, finally, and raise) give you
fine control to make your engineering code robust. Instead of crashes or silent failures, you
can catch errors, output meaningful messages, and ensure resources are properly
managed. Aim to anticipate possible failure modes in your functions (invalid inputs,
calculation errors, missing resources) and handle them gracefully. This will make your code
more reliable and easier to debug when something goes wrong.

4. Writing Unit Tests for Civil Engineering Functions

Testing is an essential part of software development, including scientific and engineering software. Unit tests are automated tests written to verify that small “units” of your code
(usually individual functions or classes) work as intended. In Civil Engineering
computations, unit tests can verify that your functions calculating reactions, bending
moments, flow rates, etc., are producing correct results for known scenarios.

By writing unit tests, you ensure that:

• Your code meets the expected formulas or conditions (e.g., a function computing
the bending moment of a simply supported beam under uniform load should match
the theoretical formula M_max = wL^2/8).

• Future changes to the code do not introduce regressions (if something breaks, a test
will fail and alert you).

• You can refactor or optimize code more confidently, knowing tests will catch any
mistakes in logic.

4.1 Introduction to Unit Testing Frameworks

Python has a built-in module unittest (inspired by Java’s JUnit) and a popular third-party
framework pytest. Both serve the same purpose: allowing you to write test functions or
methods that call your code with predefined inputs and check for expected outputs.

unittest framework (built-in): You create a class that extends unittest.TestCase and write
methods beginning with test_ inside it (Understanding Debugging & Testing Code In Python
- LEARNCSDESIGN) (Understanding Debugging & Testing Code In Python -
LEARNCSDESIGN). Each method typically uses assert methods like self.assertEqual to
compare the result with an expected value. You then run the tests using a test runner (for
example, by calling unittest.main() or using the command line python -m unittest).

pytest framework: You write simple functions starting with test_ in a module (no class
needed, though you can use classes optionally). Inside, you use plain assert statements to
check expectations. Pytest will discover and run these tests automatically. Pytest often has
a bit less boilerplate and is very powerful for more advanced testing, but for basic tests
either is fine.

We’ll focus on a unittest example (since it’s built-in), but keep in mind you can do the same
with pytest using simpler syntax. The concepts of what to test remain the same.

4.2 Choosing What to Test

Identify key functions in your engineering code whose correctness is critical. These might
include:

• Computational formulas (e.g., converting units, calculating forces, moments, stresses, flow, etc.).

• Functions with conditional logic (different behavior for different regimes, e.g.,
laminar vs turbulent flow in a Reynolds number function).

• Functions that should handle edge cases (zero values, very large values, invalid
inputs gracefully).

For demonstration, let’s consider a couple of simple engineering functions to test:

def max_bending_moment_udl(w, L):
    """
    Calculate the maximum bending moment for a simply supported beam
    with uniform distributed load.
    w: load per unit length (kN/m)
    L: span length (m)
    Returns bending moment in kN·m.
    """
    return w * L**2 / 8  # theoretical formula for max M at mid-span


def shear_force_end_udl(w, L):
    """
    Calculate the reaction (shear force) at each support for a simply supported beam
    with uniform distributed load (UDL).
    Returns shear force in kN.
    """
    return w * L / 2  # each support takes half the total load w*L

We chose these because we know the expected outcomes from structural analysis theory:

• For example, if w = 10 kN/m and L = 5 m, then M_max = 10*25/8 = 31.25 kN·m at mid-span.

• The end reaction should be half of the total load: total load = 10*5 = 50 kN, half = 25 kN.

We also might want to test edge cases (like w=0 or L=0 giving zero moment/shear).

4.3 Writing Tests with unittest

Using unittest, a basic test module might look like:

import unittest

from structural_calcs import max_bending_moment_udl, shear_force_end_udl

class TestBeamFormulas(unittest.TestCase):

    def test_bending_moment_udl_basic(self):
        # Test a basic case: w=10 kN/m, L=5 m
        w = 10
        L = 5
        expected_M = 31.25  # kN·m
        result = max_bending_moment_udl(w, L)
        self.assertAlmostEqual(result, expected_M, places=2)

    def test_shear_force_udl_basic(self):
        # Basic case: w=10, L=5 should give 25 kN at each support
        w = 10
        L = 5
        expected_V = 25  # kN
        result = shear_force_end_udl(w, L)
        self.assertAlmostEqual(result, expected_V, places=2)

    def test_zero_load(self):
        # Edge case: zero load should give zero moment and zero shear
        self.assertEqual(max_bending_moment_udl(0, 10), 0)
        self.assertEqual(shear_force_end_udl(0, 10), 0)

    def test_symmetry(self):
        # Check that doubling load doubles the moment and shear (linearly proportional)
        M1 = max_bending_moment_udl(5, 4)
        M2 = max_bending_moment_udl(10, 4)
        self.assertAlmostEqual(M2, 2*M1)
        V1 = shear_force_end_udl(5, 4)
        V2 = shear_force_end_udl(10, 4)
        self.assertAlmostEqual(V2, 2*V1)

    def test_invalid_inputs(self):
        # If we expect our functions to handle invalid inputs by raising errors, we test that too
        with self.assertRaises(TypeError):
            max_bending_moment_udl("ten", 5)  # passing a string instead of a number
        with self.assertRaises(ValueError):
            max_bending_moment_udl(-5, 10)  # negative load (if we decided to raise ValueError for that)
A few things to note from the above test class:

• Each test method name starts with test_ (Understanding Debugging & Testing Code
In Python - LEARNCSDESIGN). This is how unittest finds tests to run.

• We used self.assertAlmostEqual for floating-point comparisons (Understanding Debugging & Testing Code In Python - LEARNCSDESIGN), specifying a tolerance (2 decimal places in this case). This is important because floating-point arithmetic can have tiny rounding differences.

• We grouped similar assertions in one test (for zero load, we checked both moment
and shear results).

• We tested not only normal cases but also an edge case and a theoretical property
(linearity in this case).

• We also demonstrated testing for exceptions using assertRaises in a context manager. For example, if our code is supposed to raise a ValueError for negative loads, the test confirms that.

Running the tests: If this code is in a file, say test_structural_calcs.py, you can run it by:

• Including at the bottom:

    if __name__ == '__main__':
        unittest.main()

  and running python test_structural_calcs.py.

• Or simply using the command line: python -m unittest test_structural_calcs.py (or even discover all tests via python -m unittest discover).

If all tests pass, you will get an OK message. If a test fails, you’ll get a failure report showing
which test failed and what was expected vs actual. For instance, if
max_bending_moment_udl had a bug, you might see something like:

FAIL: test_bending_moment_udl_basic (__main__.TestBeamFormulas)

----------------------------------------------------------------------
AssertionError: 30.0 != 31.25 within 2 places (1.25 difference)

This tells you the result was 30.0 but expected 31.25, indicating the formula might be using
the wrong factor.

4.4 Using pytest (Briefly)

To illustrate how the same tests would look in pytest, here’s a quick comparison. In pytest,
you could write:

from structural_calcs import max_bending_moment_udl, shear_force_end_udl
import pytest

def test_bending_moment_udl_basic():
    assert abs(max_bending_moment_udl(10, 5) - 31.25) < 1e-6

def test_shear_force_udl_basic():
    assert abs(shear_force_end_udl(10, 5) - 25) < 1e-6

def test_zero_load():
    assert max_bending_moment_udl(0, 10) == 0
    assert shear_force_end_udl(0, 10) == 0

def test_linearity():
    M1 = max_bending_moment_udl(5, 4)
    M2 = max_bending_moment_udl(10, 4)
    assert abs(M2 - 2*M1) < 1e-9
    # similarly for shear
    V1 = shear_force_end_udl(5, 4)
    V2 = shear_force_end_udl(10, 4)
    assert abs(V2 - 2*V1) < 1e-9

def test_invalid_inputs():
    with pytest.raises(TypeError):
        max_bending_moment_udl("ten", 5)
    with pytest.raises(ValueError):
        max_bending_moment_udl(-5, 10)

Notice we don’t need a class, and we use plain assert for checks. Pytest’s failure output is also very readable, showing the exact expression that failed and the values involved. The choice between unittest and pytest often comes down to personal or team preference.
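If you want tolerance-based comparisons without unittest's assertAlmostEqual, the standard library's math.isclose works well inside plain asserts. A short sketch (the function is re-implemented here so the snippet is self-contained):

```python
import math

def max_bending_moment_udl(w, L):
    # M_max = w * L^2 / 8 for a simply supported beam under a UDL
    return w * L**2 / 8

# rel_tol bounds the relative error; abs_tol is needed when comparing to 0.0
assert math.isclose(max_bending_moment_udl(10, 5), 31.25, rel_tol=1e-9)
assert math.isclose(max_bending_moment_udl(0, 10), 0.0, abs_tol=1e-12)
print("tolerance checks passed")
```

math.isclose defaults to a relative tolerance of 1e-9, which is usually appropriate for engineering quantities far from zero; pass abs_tol explicitly when an expected value can be exactly zero.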

4.5 Testing in Practice for Engineering Problems

When writing tests for engineering functions, try to:

• Use known results: If a formula has known values from textbooks or prior
calculations, test those. (E.g., test that your beam deflection function matches
known solutions for a cantilever vs simply supported case).

• Test boundaries: If your function behavior changes at a certain threshold (Reynolds number 2300 separating laminar/turbulent flow, for example), test values around that threshold.

• Test error conditions: If your code is supposed to raise an error for certain bad
inputs (negative length, etc.), ensure it does so.

• Keep tests independent: Each test should set up its own scenario and not rely on
another test’s results. The test framework may run them in any order.
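For instance, boundary testing around a threshold can be sketched like this (the flow_regime classifier and its single 2300 cutoff are hypothetical stand-ins for illustration, not code from the chapter):

```python
# Hypothetical classifier with a single laminar/turbulent cutoff at Re = 2300
def flow_regime(Re):
    return "laminar" if Re < 2300 else "turbulent"

# Probe just below, exactly at, and just above the threshold:
# this pins down whether the boundary value itself is laminar or turbulent
assert flow_regime(2299.9) == "laminar"
assert flow_regime(2300.0) == "turbulent"
assert flow_regime(2300.1) == "turbulent"
print("boundary tests passed")
```

Testing the exact threshold value documents a design decision (is Re = 2300 laminar or turbulent?) that is otherwise easy to change accidentally.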

Also, organize your tests. Real projects often have a tests/ directory. Here we only had a
couple of simple functions, but imagine testing an entire module that computes a full
structural analysis – you’d have many tests for various parts (material behavior, section
properties, load combinations, etc.).

By incorporating unit tests into your development, you effectively create an automated
verification suite. For example, if you modify the max_bending_moment_udl function to
handle triangular loads as well (making it more complex), you can run your tests to ensure
the original UDL functionality still works. If something breaks, the tests catch it early, saving
you from propagating an error into, say, a final design result.
Continuous integration: In professional settings, tests are run automatically on each code
change. As a student or researcher, you can simulate this by running your test suite
frequently as you develop. It gives confidence that your code remains correct as it evolves.

To conclude, unit testing brings a level of rigor to computational engineering code that
mirrors the validation we do in hand calculations or peer reviews. It helps ensure our
Python implementations of engineering formulas are reliable and accurate.

5. Profiling Code Performance in Computational Simulations

Efficiency is crucial in computational simulations – whether you’re running a finite element analysis of a bridge or a fluid dynamics model of a dam spillway. Profiling is the process of measuring the performance of your code (which parts consume the most time or resources) so that you can identify bottlenecks and optimize the slow parts.

Often, the first priority is to make sure the code is correct (using debugging, testing as
above). But once it’s correct, if it’s too slow for practical use, we need to optimize. Profiling
helps answer where to focus optimization efforts.

5.1 Introduction to Profiling Tools

Python comes with a few tools in the standard library for profiling:

• timeit – a module for timing small code snippets, good for micro-benchmarks
(measuring specific functions or operations).

• cProfile – a built-in profiler that records the time spent in each function across your
program. It produces a report of function calls, number of calls, and time
consumption.

• profile – a pure-Python profiler with the same interface as cProfile but much higher overhead; cProfile is preferred in almost all cases.

• pstats – a module to analyze and sort the output from cProfile for easier
interpretation.

There are also external tools and more advanced profilers (like line-by-line profilers or
memory profilers), but we’ll focus on these basic ones which are enough for a broad
overview.
Important distinction: Profiling gives you a breakdown of where time is spent in your program (e.g., 50% in function X, 30% in Y, etc.). It has some overhead but is comprehensive. For precise timing of a particular operation (especially for comparing two approaches), timeit is more accurate.

5.2 Using timeit for Micro-Benchmarks

The timeit module can be used from the command line or within Python. Its job is to run a snippet of code many times and measure how long it takes on average, minimizing interference from other system activity.

Example: Suppose we want to compare two ways of computing the sum of the first N
natural numbers:

1. Using the formula N*(N+1)/2.

2. Using a loop to add numbers.

We can use timeit to compare:

import timeit

# Approach 1: direct formula
direct_time = timeit.timeit('n*(n+1)//2', setup='n=1000000', number=1000000)

# Approach 2: loop
loop_code = """
total = 0
for i in range(1, n+1):
    total += i
"""
loop_time = timeit.timeit(loop_code, setup='n=1000000', number=1000)

print(f"Direct formula time: {direct_time:.6f} seconds")
print(f"Loop time: {loop_time:.6f} seconds")

Here we run the direct formula 1,000,000 times but the loop only 1,000 times (the loop is far slower, so we run fewer iterations to keep the measurement short). The output might be something like:

Direct formula time: 0.050123 seconds

Loop time: 0.257890 seconds

Remember that the iteration counts differ, so compare per-call times, not totals: the formula takes about 50 nanoseconds per call, while the loop takes about 258 microseconds – several thousand times slower. Always normalize by the number argument before comparing timeit results.

In engineering context, you might use timeit to test the performance of different
implementations of an algorithm. For instance, if you wrote a function to compute matrix
multiplication in pure Python loops, you can time it against NumPy’s vectorized approach
to quantify the difference. Or compare two solvers for a nonlinear equation (binary search
vs Newton’s method, etc.).
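As a sketch of that solver comparison (both root-finders below are illustrative toy implementations for x² = a, not code from the chapter), note that timeit also accepts plain callables, which avoids building code strings:

```python
import timeit

def sqrt_bisection(a, tol=1e-10):
    # halve the bracket [0, max(a, 1)] until it is narrower than tol
    lo, hi = 0.0, max(a, 1.0)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid * mid < a:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def sqrt_newton(a, tol=1e-10):
    # Newton's method on f(x) = x^2 - a: x_{k+1} = (x_k + a/x_k) / 2
    x = max(a, 1.0)
    while abs(x * x - a) > tol:
        x = (x + a / x) / 2
    return x

t_bis = timeit.timeit(lambda: sqrt_bisection(2.0), number=2000)
t_new = timeit.timeit(lambda: sqrt_newton(2.0), number=2000)
print(f"bisection: {t_bis:.4f} s, newton: {t_new:.4f} s")
```

Newton's method typically needs far fewer iterations to reach the same tolerance, and the timing makes that difference concrete; verifying first that both return the same root keeps the comparison fair.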

Using timeit interactively: You can also use the %timeit magic if you’re in a Jupyter notebook, which is very handy. For example, in a notebook:

%timeit max_bending_moment_udl(10, 5)

would output something like 200 ns ± 5 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each).

5.3 Using cProfile to Profile Full Programs

When you have a large simulation, you want to see which parts (functions) are consuming
the most time. cProfile is ideal for this. It will run your program and collect stats like:

• Total time spent in each function.

• How many times each function was called.

• Time spent in a function excluding time spent in sub-functions (i.e., internal time vs
cumulative time).

How to run cProfile:

• From the command line:

python -m cProfile -o profile_stats my_program.py

This runs my_program.py and saves the profiling statistics to a file for later analysis (the -o output is a binary stats file meant to be read with pstats, not plain text; omit -o profile_stats to print a readable report to the console instead).

• Programmatically inside a script or interactive session:

import cProfile
import my_module
cProfile.run('my_module.main()', filename='profile_stats')

(This assumes my_module.main() starts your simulation; adjust accordingly.)

Once you have a results file, load it with pstats to sort and print the statistics:
import pstats

p = pstats.Stats('profile_stats')

p.sort_stats('cumulative').print_stats(10)

This would print the top 10 functions by cumulative time (time including subcalls), which
typically highlights the main bottlenecks.
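The same workflow can also run entirely in-process, which is convenient in notebooks or scripts. This sketch profiles a deliberately slow helper (slow_sum is a made-up example function) and sends the pstats report to a string instead of the console:

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # a deliberately plain Python loop so it shows up clearly in the profile
    total = 0
    for i in range(n):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_sum(200_000)
profiler.disable()

stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats('cumulative').print_stats(5)  # top 5 entries by cumulative time
report = stream.getvalue()
print("slow_sum appears in report:", "slow_sum" in report)
```

Capturing the report as a string is handy when you want to log it, diff it between runs, or assert on it in a performance test.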

Interpreting cProfile output: A typical line from cProfile output might look like:

ncalls tottime percall cumtime percall filename:lineno(function)

100 0.050 0.000 0.150 0.001 solver.py:10(iterate)

• ncalls: number of calls (here solver.iterate was called 100 times).

• tottime: total time spent in this function excluding sub-functions (so time spent
strictly in its own code).

• cumtime: cumulative time including sub-calls (0.150s, meaning iterate called other
functions that took 0.100s in total).

• The rest identifies the function.

In this example, if iterate is heavy, we see it took 0.150s cumulatively over 100 calls, so
about 0.0015s each on average. If another function shows up with a high cumtime, that’s a
candidate for optimization.

Let’s say we profile a finite element analysis that assembles a stiffness matrix and solves it.
We might find something like:

1 0.200 0.200 0.300 0.300 fem.py:50(assemble_stiffness)

1 0.100 0.100 0.250 0.250 fem.py:80(apply_boundary_conditions)

1 0.050 0.050 0.200 0.200 solver.py:20(solve)


1000 0.150 0.000 0.150 0.000 math:<built-in>(sin)

Interpretation:

• assemble_stiffness took 0.3s (0.2 in itself, plus maybe 0.1 in subcalls).

• apply_boundary_conditions took 0.25s.

• solve (maybe calling a linear algebra library) took 0.2s.

• Interestingly, math.sin was called 1000 times taking 0.15s (maybe used in
assembling stiffness or loads).

From this, you might realize assemble_stiffness is the biggest chunk. If you dig deeper
(perhaps by modifying code or using line-profiling), you might find that computing certain
coefficients in a Python loop is slow. This could lead you to optimize by using NumPy arrays
or reducing Python-level loops (since Python loops are slower than C loops in libraries).

Note on overhead: Profilers add some overhead, so absolute times may be slightly
inflated, especially for functions that are very fast. But the relative times are what matter
for identifying bottlenecks.

5.4 Other Profiling Tips

• Profile with representative input sizes: If your final simulation will use, say, 10000
elements, profile with a similar size (not just 10 elements) to see realistic
bottlenecks.

• Iterative improvement: Sometimes after optimizing one part, another part becomes the new bottleneck. Profile again after changes.

• Don’t optimize prematurely: Use profiling to guide optimization. Don’t guess where
the code is slow – often our intuition can be wrong about what’s taking time. Let the
data (profile results) lead you.

• Memory profiling (optional): If you have memory-heavy computations (large matrices, etc.), there are tools like tracemalloc to see where memory is used. This chapter focuses on time, but memory can also be critical in big simulations.

• Vectorization and external libraries: In many engineering calculations, using optimized libraries (NumPy, SciPy) can drastically improve performance. Profiling can justify the need to switch to those if pure Python is too slow.
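On the memory side, the standard library's tracemalloc gives a quick first look. This sketch just measures the peak allocation of building a nested-list "matrix" (the 200 × 200 size is arbitrary, chosen only to be measurable):

```python
import tracemalloc

tracemalloc.start()

# allocate something sizable: a 200 x 200 matrix as nested Python lists
matrix = [[float(i * j) for j in range(200)] for i in range(200)]

# current = memory still held now; peak = high-water mark since start()
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"current: {current / 1024:.0f} KiB, peak: {peak / 1024:.0f} KiB")
```

tracemalloc.take_snapshot() can additionally attribute allocations to source lines, which is the memory analogue of a per-function time profile.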

5.5 Example: Profiling a Hydraulic Simulation Loop


Imagine a simple simulation of water flow in a channel using a numerical method. You have
a function that updates flow parameters in small time steps. You notice the simulation is
slow for long runs.

Pseudo-code (num_cells, dt, and the arrays velocity, accel, position, coef, and turbulence are assumed to be defined elsewhere):

def simulate_flow(num_steps):
    import math
    # some initial parameters...
    for t in range(num_steps):
        for cell in range(num_cells):
            # update velocity
            velocity[cell] += dt * accel[cell]
            # update position
            position[cell] += dt * velocity[cell]
            # maybe some nonlinear calc
            eddy = math.sin(position[cell]) * coef[cell]
            turbulence[cell] += eddy
    return position, velocity

Profiling this might show that the inner loop is a bottleneck (especially the math.sin call).
We could try using NumPy to update arrays without Python loops, or limit math.sin calls by
vectorizing.

Using cProfile:

import cProfile

cProfile.run('simulate_flow(1000)')

We might see output showing simulate_flow took a lot of time, with math.sin being called
many times. If num_cells and num_steps are large, that nested loop is a classic hot spot.

Optimization attempt: Replace the inner operations with NumPy arrays:

import numpy as np

# Vectorized operations on whole arrays replace the explicit per-cell loop
# (velocity, accel, position, turbulence, and coef are now NumPy arrays)
velocity += dt * accel
position += dt * velocity
turbulence += np.sin(position) * coef

This pushes the heavy work into C (inside NumPy), which can be much faster. After
changing, run the profile again to confirm the speedup and that math.sin no longer appears
as a major time consumer (NumPy’s internal ufunc will handle sin, likely much faster).
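Even without NumPy, profiling often reveals smaller wins inside the loop itself. One classic CPython micro-optimization, shown here as an illustrative sketch (the workload is made up), is binding math.sin to a local name so the module-attribute lookup is not repeated on every iteration:

```python
import math
import timeit

N = 100_000

def loop_attribute():
    total = 0.0
    for i in range(N):
        total += math.sin(i * 1e-3)  # resolves math.sin on every iteration
    return total

def loop_local(sin=math.sin):
    total = 0.0
    for i in range(N):
        total += sin(i * 1e-3)       # local-name lookup is cheaper
    return total

print(f"attribute lookup: {timeit.timeit(loop_attribute, number=3):.3f} s")
print(f"local binding:    {timeit.timeit(loop_local, number=3):.3f} s")
```

Both variants compute the same sum; the local-binding version is usually modestly faster. Rerunning the profiler after such a change is what confirms whether it actually mattered.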

Documenting improvements: It’s good practice to note performance before and after
optimization, possibly adding tests to ensure the results remain the same. Profiling assures
us that we targeted the right section of code and achieved faster execution.

5.6 Using line_profiler (Optional Mention)

While not in the standard library, there is a tool called line_profiler that can show time
spent on each line of a function. It’s extremely useful for pinpointing which part of a long
function is slow. If you venture into heavy optimization, this tool can complement cProfile
by zooming in at the line level. For instance, line profiling could show that out of a 10-line
function, one line (maybe a triple nested loop) takes 95% of the time. You then know
exactly which line to optimize.

Wrap up: Profiling is like running diagnostics on your code’s performance. Just as an
engineer might test how a structure behaves under load to find weaknesses, you test how
your code behaves under load (time load, that is) to find slow spots. With debugging and
testing ensuring correctness, profiling ensures efficiency, enabling your engineering
simulations to run in reasonable time.

Hands-On Exercises

Finally, to solidify these concepts, here is an extensive set of exercises. These include
practical coding tasks and conceptual questions to test your understanding of debugging,
assertions, error handling, testing, and profiling. Try to solve these to practice the skills
from this chapter.

Each of these exercises and questions is designed to reinforce critical concepts from the
chapter. By completing them, you'll gain practical experience in debugging Python code,
using assertions wisely, handling errors robustly, writing tests to validate engineering
computations, and profiling programs to ensure they run efficiently. These skills will greatly
aid you in developing reliable and high-performance computational tools in your Civil
Engineering projects.

Part I: Coding and Debugging Exercises (50 Problems)

1. Debug Print Statements: The following code is supposed to calculate the factor of safety (FS) for slope stability but is giving an incorrect result. Add print statements to debug the values of resisting and driving forces and identify the problem.

def safety_factor_slope(resisting, driving):
    FS = driving / resisting
    return FS

fs = safety_factor_slope(1200, 800)
print("Factor of Safety:", fs)
# Expected FS > 1 for stable slope if resisting > driving, but result seems wrong.

2. Fixing a Bug: You wrote a function to compute the area of a circle and a short test, but it crashes:

def circle_area(r):
    import math
    return math.pi * r**2

print(circle_area())

Debug and fix the code so that it prints the area for a given radius.

3. Using pdb: Write a small snippet with a loop that computes the cumulative sum of a list of loads. Set a breakpoint using pdb (via breakpoint()) when the sum exceeds a certain value, and inspect the program state at that point.

4. Breakpoint Practice: Given the function:

def find_neutral_axis(x_coords, forces):
    total_moment = 0
    total_force = 0
    for x, F in zip(x_coords, forces):
        total_moment += x * F
        total_force += F
    return total_moment / total_force

Use an IDE debugger to step through this function with x_coords=[0, 2, 4] and forces=[10, 10, 10]. At what point (line) does total_force get its final value? What is the return result?

5. Assertion for Physical Constraint: Write a function water_pressure(depth) that returns the water pressure at a given depth (in meters) using P = ρ g h (use ρ=1000 kg/m³, g=9.81 m/s²). Use an assert to ensure the depth is not negative (since negative depth is not physical). Test the function with a valid and an invalid depth to see the assertion in action.

6. Assertion for Sum of Loads: In a beam analysis, you have a list of point loads. Write a snippet that calculates the total load and uses an assert to check that it equals the sum obtained by a different method (e.g., manually adding a few known values). Intentionally modify one load value to break the assertion and observe the error.

7. Convert to assertRaises: There’s a piece of code:

def get_strain(stress, E):
    if E == 0:
        raise ZeroDivisionError("Modulus cannot be zero")
    return stress/E

Write a unit test (using either unittest or pytest) to confirm that calling get_strain(250, 0) indeed raises a ZeroDivisionError.

8. Try/Except for Input Validation: Write code that asks the user to input a beam length (float). Use try/except to catch a ValueError if the input is not a valid number, and prompt the user again until a valid float is entered.

9. Multiple Except Blocks: You have a dictionary of material properties. Write a function that takes a material name and returns its Young’s modulus. Use a try to attempt to get the value from the dictionary. Use two except blocks: one for KeyError (material not found; return a default or raise a custom exception), and one for any other exception that might occur (and print an error message).

10. Try/Except/Else: Open a text file reinforcement.txt that contains reinforcement bar diameters. Use try/except to handle file not found, and an else block to read the file and print “File read successfully” if no exception occurred.

11. Finally for Cleanup: Write a script that opens a file and writes some data (e.g., generated wind speeds) to it. Use a try block for writing and a finally block to ensure the file is closed properly, even if an error occurs during writing.

12. Raise Custom Exception: Define a custom exception class OverstressError. In a function check_stress(limit, actual), raise OverstressError if actual > limit. Write code to call this function and catch the custom exception, printing an appropriate warning message.

13. Unit Test - Bending Moment: Implement the function max_bending_moment_udl(w, L) as discussed in the chapter. Then write at least two test cases: one basic scenario (with known expected result) and one edge case (zero load or zero length) using assert statements.

14. Unit Test - Shear Force: Implement shear_force_end_udl(w, L) and similarly write tests to validate its output for a standard case and an edge case.

15. Test Failure Analysis: Deliberately introduce a bug in max_bending_moment_udl (e.g., use /4 instead of /8). Run your tests from exercise 13. Show the failing test output and explain what it indicates and how you would fix the bug.

16. Pytest Function: Write a simple pytest function (not using unittest classes) named test_density_calculation that checks a density(mass, volume) function for correctness. Assume mass=1000 kg, volume=1 m³ should yield density=1000 kg/m³.

17. Test Edge Conditions: For a function deflection_cantilever(P, L, E, I) that returns the deflection of a cantilever beam under end load P (formula: PL^3/(3EI)), write tests to check:

o Doubling P doubles deflection (linear behavior).

o Zero load yields zero deflection.

o Negative E or I raises an exception (if you design it to do so).

18. Test Coverage for Branches: Suppose you have a function that categorizes flow regime:

def flow_regime(Re):
    if Re < 2000:
        return "Laminar"
    elif Re < 4000:
        return "Transitional"
    else:
        return "Turbulent"

Write a set of tests that covers all three branches (laminar, transitional, turbulent). Include boundary values like exactly 2000 and 4000 to see how they are classified.

19. Profiling a Loop: Write a code snippet that computes the sum of squares of numbers up to N in a Python loop. Use timeit within the code (or simply measure time with time.time()) to see how long it takes for N=1000000. Then try to optimize the code (maybe using a formula or list comprehension) and measure the time again to compare.

20. cProfile Basic Usage: Create a small script that defines a couple of functions and calls them in some sequence (for example, a function that performs a sort, another that does some math in a loop). Use cProfile to run this script and output the profiling results. Identify which function took the most time.

21. Profile an Algorithm: Write two functions to compute the Fibonacci sequence: one using simple recursion, one using an iterative loop. Use cProfile or timeit to compare their performance for, say, the 30th Fibonacci number. (Be careful: the recursive one may be slow!) Optimize or memoize the recursive one and measure again.

22. Finding Bottleneck: Given a simulation that runs a nested loop (like the hydraulic simulation in the chapter), insert timing or profiling to find which part of the loop is slow. As a simplified model, loop over range(10000) and inside do a math operation like math.sin(i). Profile it and confirm that the math.sin call (or the loop itself) is consuming the time.

23. Improving Bottleneck: Continue from exercise 22: try to remove or reduce calls to math.sin by precomputing values or using an approximation. Then profile again to see the improvement.

24. Logging vs Print for Debugging: Replace print statements in a piece of code with the logging module (set to DEBUG level). Write a short snippet using logging in debug mode to print variable values. Explain the advantage of using logging over print for a larger application.

25. Debugging with a Stack Trace: Provide a piece of code that results in a traceback (for example, calling int("abc")). Read the traceback and identify which line in which function caused the error. Explain how you would use that information to fix the code.

26. Debug a Logical Error: The function compute_payments(principal, years) is intended to compute annual payments for a loan but always returns 0.0. It has no syntax errors or exceptions. Explain how you would debug a logical error like this (hint: step through the calculations or use prints to see intermediate results).

27. Assertion in Loops: Write a loop that generates a sequence of approximations (e.g., Newton-Raphson iteration for a root). Use an assert inside the loop to ensure the result is converging (for example, assert that the change in the new estimate is smaller than the previous change). Intentionally violate this to see the assertion trigger.

28. Assert vs If/Raise: Take the compute_stress(force, area) example. Rewrite it using an if statement and raising ValueError instead of assert. Call it with invalid input to see the exception. Discuss when you’d use one approach vs the other.

29. Exception Hierarchy: Write code that has a try/except catching a general Exception. Inside the try, cause a ZeroDivisionError. Verify that the general exception block catches it. Then modify the code to catch specifically ZeroDivisionError and see that it still catches it. Finally, add a second except for Exception and show that it can catch other types not caught by the first.

30. Resource Handling: Simulate a scenario where you open a file and an error occurs after opening but before closing (e.g., open a file, then throw an exception). Show that without a finally, the file remains open (you can attempt to open it again to see if it’s locked). Then add a finally to close it, and demonstrate that this resolves the issue.

31. Unit Test - Floating Comparison: Write a test for a function that returns a floating-point result (like a deflection calculation). Show how using assertEqual with floats might fail due to precision issues. Then use assertAlmostEqual or a tolerance in a plain assert to properly test it.

32. Unit Test - Expected Exception: Write a test for a function that is supposed to raise IndexError when accessed out of bounds (e.g., a custom list class). Use the with self.assertRaises(IndexError): context in unittest to verify this behavior.

33. Write Tests First (TDD): Imagine you need to write a function triangular_area(base, height) that returns the area of a triangle. Write the tests first (without the function implemented), with a few cases (normal case, zero base or height yields zero area, negative inputs maybe raise an error). Then implement the function to make the tests pass.

34. Performance Test: Write a simple loop adding numbers 1 to N. Measure its time for N=100k, 200k, 500k (you can use time.time() around the loop). Plot or note the times. Do you see roughly linear scaling? This is a manual way of performance testing – what does it indicate about complexity?

35. Profiling I/O vs Computation: Write a program that does two things: reads a very large text file from disk, and then performs a CPU-heavy calculation (like a large nested loop) on the data. Use profiling to determine which portion (I/O or CPU) takes more time. (If you can’t use a very large file, simulate I/O with a delay.)

36. Optimize Data Structure: You have a list of (x, y) tuples representing points and you need to look up points by their x-coordinate frequently. Profiling shows a search function is slow. Modify the code to use a dictionary keyed by x for faster lookup. Write a small test to confirm the speed improvement (time the lookup 10000 times before and after).

37. Memory Profiling (conceptual): Although not directly covered, explain how you might identify if your program is using too much memory. (Hint: tools like tracemalloc, monitoring in Task Manager/top, or writing tests that check memory usage.)

38. Debugging External Libraries: You are using a library function that is failing (raising an exception). Describe how you can use Python’s traceback or debugger to step into the library code (if available) to see why it’s failing. (This might involve reading the stack trace, or the source if it’s pure Python.)

39. Conditional Breakpoint: In an interactive debugger (like pdb or an IDE), how would you set a breakpoint that only triggers when a certain condition is met (e.g., a loop index equals a certain value)? Write a short code snippet and explain setting a conditional breakpoint for it.

40. Catching Multiple Exceptions: Modify this code to catch both TypeError and ZeroDivisionError:

try:
    # perform some division operation that might go wrong
    result = compute_stress("100", 20)
except Exception as e:
    print("Error:", e)

Change it to use two except clauses, one for TypeError (if, say, a string is passed) and one for ZeroDivisionError (if zero is passed), with different messages for each.

41. Using else effectively: Write a try/except/else where the else part computes something (like the sum of values) only if no exception was raised in try. For example, try to convert a list of strings to floats (which could raise ValueError), and in else, if all conversions succeed, compute the sum.

42. Finally usage scenario: Describe or write a code snippet where not using finally could lead to a problem. For instance, acquiring a lock or starting a database transaction and not releasing it if an error occurs. Show how adding finally solves it.

43. Test Organization: Create a small module with two functions related to beam design (maybe one for moment, one for shear). Then create a separate test module where you write tests for both functions. Demonstrate how you would run this test module to test both functions together.

44. Test a Random Outcome: Suppose you have a function that runs a Monte Carlo simulation and returns a random result (like simulating traffic flow and returning average delay). Testing such functions is tricky due to randomness. Write a strategy (or code) to test it – perhaps by seeding the random number generator to get deterministic behavior, or by testing statistical properties over many runs.

45. Integration Test Example: How would you test a scenario that involves multiple functions working together? For example, a function that reads input, processes it, and writes output. Write a brief outline of an integration test (which might not assert on a single function’s return, but on the end-to-end outcome, like a file created or a value computed through several steps).

46. Improve Test Based on Bug: Assume you found a bug where shear_force_end_udl fails for very large L due to floating-point precision. Write a new test that would catch this bug (for example, by comparing the result of shear_force_end_udl with w*L/2 within a tolerance for a large L). Then mention how you’d adjust the implementation if needed (such as using Decimal for higher precision).

47. Speed vs Readability: Write two versions of a code block that calculates the sum of squares of 1…N: one very compact (maybe a one-liner list comprehension or a mathematical formula), and one very verbose (explicit loops and variables). Both give the same result. Profile or time them if possible (they likely have similar performance for this simple task). Discuss the trade-off between code clarity and micro-optimizations.

48. Identify Exception Source: In a deeply nested call (A calls B calls C), an exception is raised in C. If you catch it in A, how can you get information on which function (and line) it occurred in? Write a try/except that catches an exception and prints the traceback (you can use Python’s traceback module for this).

49. Profiling Output Interpretation: Provide a sample output of cProfile (you can make up a small one) and ask: “In the given profile output, which function had the highest cumulative time and what does that imply?” Then answer it based on your sample output.

50. Practice Problem - End-to-End: Write a small program that:

o Reads an integer N from input.

o Uses a loop to compute the sum of 1..N.

o If N is negative, raises an exception.

o Has an assertion that the final sum is N*(N+1)/2.

o Contains a bug (maybe the loop goes to N-1 or something). Now, outline a debugging session: use print or a debugger to find the bug, write a test that would have caught it, and then fix the code.

Part II: Conceptual Questions (50 Questions)

1. Debugging vs Testing: In your own words, explain the difference between debugging and testing.

2. Why might using a lot of print statements for debugging be problematic in a large program?

3. What are the advantages of using an interactive debugger like pdb over print
statements?

4. Describe what a breakpoint is in the context of debugging.

5. In an IDE debugger, what is the function of the “step into” command?

6. What does the assert statement do in Python, and what exception does it raise on
failure?
7. When should assertions be avoided or disabled in Python code?

8. Give an example of a condition in a Civil Engineering context that you would enforce
with an assert.

9. What is the difference between raising an exception and asserting a condition? When might you use one vs the other?

10. Explain the flow of execution in a try/except block when no exception occurs.

11. Explain the flow of execution in a try/except when an exception does occur in the try
block.

12. What is the purpose of the else clause in a try/except structure?

13. What is the purpose of the finally clause in exception handling?

14. If an exception is raised in a try block and there is no matching except, what
happens?

15. Why is it a bad idea to use a bare except: without specifying an exception type?

16. What built-in Python exception would you raise for an invalid argument (like a
negative length where positive is required)?

17. How do you create a custom exception? (Describe the basic steps or write a small
class example).

18. In the context of testing, what is a “unit”?

19. Why are unit tests important in engineering calculations software?

20. What does the unittest module provide that plain assert statements in test
functions do not?

21. Name at least two assert methods provided by unittest.TestCase other than
assertEqual (Understanding Debugging & Testing Code In Python -
LEARNCSDESIGN).

22. In pytest, how do you indicate that a test is expecting an exception to be raised?

23. What is test coverage and why is it important?

24. What is a regression, in terms of software bugs, and how do tests help prevent
them?

25. Explain the concept of profiling in the context of program performance.

26. What’s the difference between using timeit and using cProfile?

27. If a certain function appears at the top of a profile output with a high cumulative
time, what does that tell you?

28. How can profiling guide you in optimizing a program?

29. Why is it not always a good idea to optimize code without profiling data?

30. What does “premature optimization is the root of all evil” mean in the context of
writing code?

31. In big O notation terms, if doubling the input size quadruples the running time, what
is the likely complexity?

32. How can you measure the execution time of a small code snippet accurately in
Python?

33. What is the meaning of the columns “ncalls”, “tottime”, and “cumtime” in a cProfile
output? (The Python Profilers — Python 3.13.2 documentation)

34. Why might a pure Python loop be slower than using a library function (like a NumPy
vectorized operation) that accomplishes the same task?

35. What is an example of an “external resource” that should be cleaned up in a finally
block?

36. How does setting a breakpoint in code differ from inserting an assert in terms of
program behavior?

37. True or False: A program with lots of asserts will run just as fast in optimized mode
(-O) as one without asserts.

38. True or False: Unit tests can catch logical errors in code.

39. True or False: The finally block will execute only if an exception was raised in the try.
(If false, explain when it executes.)

40. True or False: Using with open(filename) as f: is generally better than using try/finally
to open/close a file. (Explain your answer.)

41. Conceptually, what is the difference between an error and an exception?

42. When writing tests, what is the benefit of testing edge cases (like 0, negative, very
large values)?

43. How does one run all tests in a directory using the unittest module from the
command line?
command line?

44. In test output, what does an “E” or “F” typically stand for?

45. Why might you want to set a random seed in a test for a function that uses random
numbers?

46. If a test for a floating-point calculation fails by a very small margin (like expected
5.000, got 4.99998), what might be the cause and solution?

47. What is a call stack and how is it useful in debugging?

48. How can reading a traceback help you debug an error in your program?

49. What does it mean for a test to be “flaky” and what could cause it?

50. If a program runs correctly but very slowly on a large input, which techniques from
this chapter would you apply to improve it?
