Code Quality: Improving Readability and Maintainability

Chapter 4: Code Quality

Q1) You are a new developer joining a project and notice that your team has been using inconsistent naming conventions, with variables like x1, varA, and tmpData scattered throughout the codebase. The senior developer claims these names are "short and efficient."

● What arguments would you present to demonstrate the importance of readability in code quality?
● Suggest steps to improve naming consistency across the project.

As a new developer, I understand that any opinions I may have must be presented with discretion, so as not to offend my team members. With that in mind, I would present my case with the following arguments:

Efficiency Trade-off: While the current naming convention may save a little time now, it can prove entirely counter-productive later. The efficiency of quick naming comes at the cost of reduced readability for any new developers, like myself, who join the team and must spend considerable time deciphering the variables before they can start working. Inconsistent naming is regarded as a code smell in software construction, hinting at deeper problems that may arise from it.

Coding Standards: Inconsistent naming conventions go against the idea of coding standards, the industry-accepted practices that help guarantee code quality. Variable naming is a minuscule aspect of coding, but one that can have an outsized impact on the project's overall quality. Having robust coding standards in place that require consistent variable naming is a hallmark of a high-quality project.

Ease of Maintenance: A codebase with inconsistent naming conventions is messy and unruly, making maintenance as difficult as the original development. Maintenance is often performed by staff other than the core project developers, who would then need extra effort to understand the code, making the maintenance process incredibly strenuous.

Logic duplication: With multiple developers working on the project, it is entirely possible that the same variable is initialised under different names due to a misunderstanding. For example, a variable called ‘varX’ used to store students’ names can be duplicated as ‘studentNames’. This creates confusion in the codebase and in the development team, who end up reworking code for no reason, in turn increasing the technical debt.

Poor Logic Building: Inconsistent naming makes code construction harder, because the code lacks structure and the logic is difficult to follow. Meaningful variable names help build logic far better than random sequences of characters. For example, variables like ‘finalGrade’, ‘gradingThreshold’, and ‘gradesList’ make the system easier to understand than ‘a’, ‘varB’, and ‘listA’. A conditional that checks whether a grade meets the threshold becomes trivial to write and read in this scenario, and any other developer can understand it at a glance.
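
As a small illustration of the grading example above (the class and method names here are hypothetical, not taken from any existing codebase), compare how the same check reads with meaningful names versus cryptic ones:

    import java.util.ArrayList;
    import java.util.List;

    public class GradeFilter {
        // With meaningful names the intent of the check is obvious at a glance;
        // with names like a, varB, and listA the reader must reverse-engineer it.
        static List<Double> passingGrades(List<Double> gradesList, double gradingThreshold) {
            List<Double> passing = new ArrayList<>();
            for (double finalGrade : gradesList) {
                if (finalGrade >= gradingThreshold) {   // compare with: if (a >= varB)
                    passing.add(finalGrade);
                }
            }
            return passing;
        }
    }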

The problems above are serious enough to warrant attention, and the following measures can be taken to ensure consistent naming:

1. Decide on a naming convention: Various coding standards offer naming conventions that can be adopted by small projects. More common conventions include camel case (finalGrade) and snake case (final_grade). Deciding on a naming convention must be done with the consensus of all developers, to choose one that everyone is comfortable with.
2. Use modern IDEs: The choice of IDE makes a real difference. Many older IDEs like Dev-C++ do not offer robust code-consistency checks and can perpetuate the problem. Modern IDEs like IntelliJ IDEA, VS Code, or PyCharm can help manage consistency.
3. Use linting tools: Linting is a static-analysis practice that flags stylistic and programmatic issues in source code. Common tools like ESLint can enforce naming conventions and highlight violations automatically.
4. Developers’ training and assistance: It may not be possible to get all developers
on-board with a certain naming convention, which is why additional training or
assistance must be provided to developers unaware of the convention, so that they may
familiarise themselves with the environment a lot faster.
5. Code Reviews: Conduct timely code reviews to ensure any variables that were missed out
by linting or consistency checks are corrected before they perpetuate into larger
problems. Developers can review each other’s code collectively, or pair-programming can
be practiced.

Q2) A project manager wants to quickly implement a feature using "copy-paste" programming to meet a tight deadline. While this approach will deliver the feature, it introduces significant duplication in the codebase.

● How would you convince the manager to prioritize maintainability over speed?
● Propose an alternative approach that balances time constraints and code quality.

Speed is essential when a deadline is approaching, but it is crucial to think about the longevity of the project. Copy-paste programming can relieve the pressure of meeting the deadline, but it diminishes code quality significantly. I would present the following points to persuade my manager to reconsider:

High coupling in code: When duplicate components are scattered through the codebase, it becomes difficult to reduce dependencies between modules. Such a scenario is not ideal for scaling the project, as the extra dependencies may restrict the addition of new logic.

Increased memory utilisation: With code that is copy pasted in different places, there exist
multiple instances of the same logic which add to the space complexity of the code. Speedy work
may help meet the deadline, but the software needs to be maintained over its lifetime. Excessive
memory utilisation is wasteful for resources and goes against green IT practices.

Short-term efficiency: It is crucial that we do not measure efficiency from a short-term perspective, but from a long-term one. While it may seem efficient to copy-paste and get more work done in less time, the practice increases the project's technical debt whenever the duplicated logic turns out to be wrong or misplaced. More problems arise from this in the long run, thereby reducing efficiency.

Possible Code Failure: Bulky codebases produced by excessive copy-pasting are more likely to fail because of the performance overhead they carry. The code may be functioning now, but prioritising speed could put the project at risk of failure later if the duplication is not maintained carefully.

Loss of abstraction: Copy-pasting code diminishes opportunities for abstraction, which is a core
principle of good software design. Abstraction allows for shared logic to be encapsulated in
reusable methods or classes, reducing repetition. When abstraction is overlooked, the code
becomes harder to generalize and extend, limiting the system's flexibility and scalability.
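
As a minimal sketch of the alternative (the pricing scenario and names are assumptions made purely for illustration), the duplicated logic is extracted into one reusable method instead of being copy-pasted into every feature that needs it:

    public class PricingService {
        // Shared logic lives in exactly one place.
        static double applyDiscount(double price, double discountRate) {
            if (discountRate < 0 || discountRate > 1) {
                throw new IllegalArgumentException("discountRate must be between 0 and 1");
            }
            return price * (1 - discountRate);
        }

        static double checkoutTotal(double cartTotal) {
            return applyDiscount(cartTotal, 0.10);   // reuse, not duplication
        }

        static double loyaltyTotal(double cartTotal) {
            return applyDiscount(cartTotal, 0.15);   // reuse, not duplication
        }
    }

A fix or improvement to the shared rule now has to be made only once, which is precisely what copy-paste programming prevents.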

Given these problems with copy-paste programming, here is an alternative approach that balances the time constraint with code quality:

1. Pair programming: This method pairs two developers so that, at any given time, one writes code while the other reviews it. The approach is known to increase efficiency and is practiced in many RAD (rapid application development) projects. Since development and review happen concurrently, code quality is preserved.
2. Modular design: The new feature should be decomposed into small, executable units and
assigned to programming pairs to work concurrently, reducing time in waiting for a
component to be developed.
3. CI/CD Pipelines: Establishing a pipeline for the project can enable seamless and quick
compilation and integration of modules. Incorporating automation tools like Maven can
test code before integrating it, thereby performing quick code reviews.
4. Automated Testing: Using tools such as SonarQube can quickly isolate errors in the
code, as well as suggest improvements. Such tools can also find code smells, which are
crucial to be highlighted to safeguard the software from future problems.
5. Linting tools with IDEs: Using linting tools like ESLint can highlight code problems during development, so less time is spent reviewing the code once it is done.

Q3) Your team identifies a method in your project with a cyclomatic complexity score of 25.
This method is critical to the system and contains several nested loops and conditionals.

● What are the risks associated with such a high complexity score?
● Propose a refactoring strategy to simplify the method without compromising its
functionality.
Cyclomatic complexity measures the number of linearly independent paths through a piece of code. Any score above 20 indicates a very large number of such paths and code that is difficult to understand and test. A score of 25 is certainly problematic, for the following reasons:

Collaboration Difficulties: Highly complex code with excessive nesting increases the cognitive
load on developers, making collaboration more difficult. New team members, in particular, will
face a steep learning curve as they try to decipher intricate logic, which can delay progress and
increase onboarding time.

Challenges in Debugging: High cyclomatic complexity often leads to convoluted logic that is
hard to debug. Identifying the root cause of an issue may require unraveling the entire method,
which is time-consuming and inefficient.

Resource Drain: High cyclomatic complexity results in nested loops that increase computational
time and memory usage, putting unnecessary strain on system resources. This inefficiency
contradicts green IT practices by consuming more power and computational resources than
necessary.

Testing Complications: A method with high cyclomatic complexity requires testing a large
number of potential execution paths, making automated testing both resource-intensive and
potentially unreliable. Even thorough testing might fail to catch all errors due to the vast number
of permutations.

To address the high cyclomatic complexity, consider applying the Extract Method refactoring: the large, complex method is broken down into smaller, well-named methods that are easier to test, debug, and maintain, each with a much lower complexity of its own, while the overall behaviour is preserved.
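
A minimal sketch of the idea (the order-validation scenario and names are hypothetical): deeply nested conditionals are replaced by guard clauses and small, well-named helper methods.

    public class OrderProcessor {
        // Each extracted method answers one question and can be tested on its own.
        boolean canShip(Order order) {
            if (!isInStock(order)) return false;        // guard clause instead of nesting
            if (!isPaymentCleared(order)) return false;
            return isAddressValid(order);
        }

        private boolean isInStock(Order order) { return order.quantity <= order.stockLevel; }
        private boolean isPaymentCleared(Order order) { return order.paymentConfirmed; }
        private boolean isAddressValid(Order order) { return order.address != null && !order.address.isBlank(); }

        // Minimal data holder used only for this sketch.
        static class Order {
            int quantity; int stockLevel; boolean paymentConfirmed; String address;
        }
    }
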
Chapter 5: Software Testing

Q1) You are working on a software project that has recently passed unit testing but is now
facing integration issues when multiple modules interact. The senior developer insists that
unit tests are enough, while you believe integration tests are essential.

1. How would you explain the importance of integration testing in addition to unit
testing?
2. Propose a testing strategy that ensures all levels (unit, integration, and system) are
properly covered.

As part of the software project, it is my duty to let my superiors know when a crucial aspect of the development process is being missed, and what the repercussions are. I would present the following reasons to support my argument:

Possible logical conflicts: While individual units may pass their tests, it is entirely possible that the integration fails when they are merged. This can happen because of logical conflicts between units, such as the use of differing data structures for storage (i.e., hash maps in one unit and trees in the other). Integration testing verifies that the units merge together consistently, and is necessary whenever a system is broken down into units: it measures whether the system has been put back together correctly.

No focus on modular interaction: Unit tests are an insufficient way to judge a unit’s external
interactions with other units. The sole purpose of modularity in a system is to have independently
developed units working in unison. Unit testing does not guarantee the independent systems
working together in the intended way.

Less effort in system testing: When integration testing is practiced, it removes most interface-level errors early, reducing the chances of fatal defects appearing during system testing. Considerable resources are saved in the process, and code quality is preserved.

Fine-tuning the system: While unit tests ensure that individual components work correctly, integration testing helps fine-tune the overall functioning of the system by finding errors in the connections between modules. It also lets us make informed trade-offs when a conflict arises: for example, if the payment module is integrated with the checkout module and an error is found in the ‘PayPal’ option, that option can be removed altogether if it is judged unimportant.

A proper strategy must be implemented to ensure all forms of testing are covered:

1. Use Automated Testing Tools: There are many language-specific tools that automate unit testing, such as JUnit for Java (a brief sketch follows this list). This simplifies the unit-testing process, and test cases can be written concurrently with development.
2. Implement CI/CD Pipelines: Such pipelines allow seamless code integration and run automatic tests on every merge. This ensures quick integration testing and prepares the system for holistic testing; tools like git bisect can additionally help locate the commit that introduced a regression.
3. Regression Testing: An efficient form of testing that ensures the project does not encounter new errors due to refactoring or fixes. System testing can be supported by this strategy, protecting the system from added defects.
4. Test-Driven-Development: On the other hand, TDD could be practiced from the
beginning of the project, where the code is built upon predefined test cases.
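
A minimal JUnit 5 sketch of the unit-versus-integration distinction (Cart and PaymentService are hypothetical classes invented for this example): the first test exercises one class in isolation, while the second checks that two modules work together.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Test;

    class Cart {
        private double total = 0;
        void addItem(double price) { total += price; }
        double total() { return total; }
    }

    class PaymentService {
        boolean charge(double amount) { return amount > 0; }   // stub standing in for a real gateway
    }

    class CheckoutTests {

        @Test
        void unitTest_cartComputesItsTotalInIsolation() {
            Cart cart = new Cart();
            cart.addItem(40.0);
            cart.addItem(60.0);
            assertEquals(100.0, cart.total(), 0.001);           // one unit, no collaborators
        }

        @Test
        void integrationTest_paymentServiceChargesTheCartTotal() {
            Cart cart = new Cart();
            cart.addItem(100.0);
            PaymentService payments = new PaymentService();
            assertTrue(payments.charge(cart.total()));          // two modules interacting
        }
    }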

Q2) Your team is under pressure to release a feature quickly. The testing process is
currently manual, and there's a debate over whether to automate tests or continue
manually testing each time.

1. What are the advantages of test automation in this situation, especially for
regression testing?
2. How would you convince the team to start automating the tests without delaying the
project timeline?

Test automation, and automated regression testing in particular, can prove quite beneficial for the project in the given circumstances, for the following reasons:

Corrective Testing: Regression testing verifies that existing functionality still works whenever errors are fixed. Any defects found are fixed at their source, as well as in all associated instances, ensuring that the system does not fail and that new problems do not arise from one fix. This is especially helpful in our scenario, because there is not enough time to fix errors and verify system stability manually.

Efficient Testing: One of the greatest advantages of automated testing is the efficiency it offers compared to manual testing. Fewer human testers are needed, which significantly reduces effort, expense, and human error. While we might need to purchase automated tools, the cost is generally lower than that of manual labour and of the defects manual testing is liable to miss.

Implicit System Testing: The idea of regression testing is to maintain system stability with each
fix. This in turn prepares the system to function as a whole unit, and does most of the system
testing tasks itself. This would save considerable resources and time, given that the project has to
meet tight deadlines.

Quicker Time-to-Market: If we continue with manual testing alone, the project might not meet its deadline. Automated regression testing, in this scenario, would help release the feature on time after thorough testing.

Long-term testing advantage: Switching to automated regression testing now makes the team familiar with the technique and helps apply it to future features. This lets testing scale with the system and supports timely, well-tested releases.

The following strategies can be adopted to make the shift without delaying the timeline:

Incremental Automation: Begin by automating the most critical and frequently executed tests
first, such as smoke tests and core functionality checks. This ensures that essential areas are
covered without requiring a full shift to automation right away.

Parallel Testing: Implement a hybrid approach where automated regression tests run alongside
manual tests. This allows the team to continue manual testing for areas that are not automated yet
while progressively moving towards automation.

Test Case Prioritization: Prioritize tests based on risk and impact, automating the most critical
ones that are more likely to fail and affect the system's functionality. This ensures high-value
tests are automated first, contributing to quicker feedback and more reliable releases.

Automate During Development: Encourage developers to automate their unit and integration
tests as they write the code. This way, tests are continuously added to the automation suite
without additional work after the fact.

Q3) During a sprint, your team encounters an issue in production. The logs show a vague
error message, and there is no obvious cause. A developer suggests that the issue is too
complex to debug, and we should "wait for the next release."

1. How would you approach debugging this issue to identify the root cause?
2. What tools or techniques would you use to ensure the issue is resolved before the
next release?

Debugging is one of the cheapest risk-management techniques, and it can save a project from major failures. The issue my team has encountered could grow into a larger, graver problem, and must be eradicated at its root. Here is how I would use debugging to get to that root:

Binary Search Debugging: This technique works much like the binary search algorithm: the suspect region of code is repeatedly narrowed down, for example by placing print or console.log statements at its midpoint, checking which half the faulty behaviour falls in, and repeating on that half until the bug is isolated. This technique is highly effective when the codebase is large.

Backtracking: Another helpful debugging technique is backtracking, which likewise resembles the algorithm it is named after. We begin at the site of the problem and work backwards towards its possible root cause. This builds a logical link between the problem and the related pieces of code, and helps reach the cause faster. When the cause is not apparent, it is often hidden behind method dependencies.

Paired Debugging: This strategy involves two team members debugging together to make the task easier and quicker. In paired debugging, the chances of finding the root cause are significantly higher, because one debugger may catch a link the other has missed.

Version Control: Since the issue occurred in production, it is reasonable to assume that robust version control mechanisms and pipelines are already in place. These pipelines can be used to isolate the error and find its true cause; automated build tests in the pipeline can catch the error whenever code is pushed.

Error Logging Analysis: Utilize detailed error logs to pinpoint the exact conditions under which
the issue occurs. Implement structured logging with contextual information such as timestamps,
user actions, API responses, and system states. This provides insights into the sequence of events
leading to the error and helps narrow down potential root causes.

Q4) The team has achieved 100% code coverage for the feature but is still noticing
occasional bugs in production. Some developers believe that since all lines of code are
tested, the tests are sufficient.

1. How would you explain the difference between code coverage and test quality?
2. Suggest a strategy for improving the overall effectiveness of the test suite beyond
just achieving 100% coverage.

The 100% code coverage achieved is certainly a good indicator, but not a guarantee that the code
is error free. Thus, additional measures must be enacted to solve the occasional bugs. It is crucial
to understand the stark difference between code coverage and test quality.

False Confidence: Code coverage simply indicates which lines of code the tests execute; it does not check for the absence of errors. A 100% result means that every line was executed at least once in its current state; it is not an analysis of the defects that may still arise.

Quality of Tests: Reaching a score of 100% is possible with low-quality tests that aim simply to execute the code rather than perform robust checks on it. Coverage alone gives no guarantee that the test suite is complete and effective.

Edge Cases: Despite a perfect coverage result, edge cases may remain untested, because they have to be written into the tests explicitly. Without edge-case tests, the code cannot be declared error-free.

Partial testing: Code coverage does not provide the depth that specific testing strategies, such as load testing, do. It is because of this insufficiency that we cannot rely on coverage alone.

Interdependencies: Code coverage does not check interactions between modules or systems, which is where many production bugs originate.
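
A small hypothetical example of the gap between coverage and quality: the test below executes every line of average(), giving 100% line coverage, yet it never exercises the empty-list edge case that still misbehaves in production.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.util.List;
    import org.junit.jupiter.api.Test;

    class CoverageVsQuality {

        // Every line below is executed by the test, so line coverage is 100%...
        static double average(List<Integer> values) {
            int sum = 0;
            for (int v : values) {
                sum += v;
            }
            return (double) sum / values.size();   // ...yet an empty list silently yields NaN
        }

        @Test
        void coversEveryLineButMissesTheEdgeCase() {
            assertEquals(2.0, average(List.of(1, 2, 3)), 0.001);
            // The untested call average(List.of()) is exactly the kind of bug that
            // reaches production despite a "perfect" coverage score.
        }
    }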

A strategy that can be adopted is as follows:

1. Prioritize Edge Case Testing: Focus on negative testing, invalid inputs, and boundary
values to catch rare but impactful bugs.
2. Strengthen Integration Testing: Test how different units of code interact, as bugs often
emerge from miscommunication between modules.
3. Automate Regression Testing: Ensure that fixes for previously discovered bugs don’t
resurface in future updates. Automate this to save time and improve reliability.
4. Scenario-Specific Testing: Perform load, stress, and performance tests to simulate
real-world conditions, ensuring the feature is robust under various circumstances.
5. Monitor Production: Add detailed logging and error tracking in production to catch
patterns or reproduce bugs that tests might miss.

Q5) The team is considering adopting Test-Driven Development (TDD) for the
next sprint. Some developers are skeptical, arguing that it will slow down the development
process.

1. How would you explain the benefits of TDD in terms of code quality and long-term
maintainability?
2. Propose a plan to integrate TDD into the development process without
compromising the sprint’s timeline.

While it’s natural for developers to worry about TDD slowing down development initially, the
benefits it brings to code quality and long-term maintainability far outweigh the initial overhead.
Here's how I’d approach explaining TDD and integrating it effectively:

Benefits of TDD

1. Improved Code Quality: Writing tests first forces developers to think through the
requirements and edge cases upfront, leading to cleaner, more purpose-driven code.
2. Fewer Bugs: Since tests are written before the code, the functionality is verified
step-by-step, reducing the chances of bugs slipping through.
3. Easier Refactoring: With a strong test suite, developers can confidently refactor or
improve the code later, knowing the tests will catch any regressions.
4. Better Design: TDD promotes modular, loosely-coupled code because tightly-coupled
components are harder to test.
5. Long-Term Savings: While it may take time upfront, TDD reduces debugging and
maintenance time, speeding up future development.
Plan to Integrate TDD Without Compromising the Sprint Timeline (a short red-green sketch follows this list)

1. Pilot Approach: Start with a critical or medium-complexity feature in the sprint to pilot TDD instead of adopting it for the entire backlog. This allows the team to adapt gradually without overwhelming the timeline.
2. Smart Time Allocation: Allocate fixed time for test writing (e.g., 20–30% of
development time). This ensures TDD doesn’t stretch deadlines unnecessarily.
3. Pair Programming: Pair experienced and skeptical developers to share TDD best
practices while keeping productivity high.
4. Use Existing Tools: Leverage the current testing framework to avoid additional setup
time. Keep the process lightweight and use tools the team is familiar with.
5. Iterative Feedback: Conduct retrospectives after the sprint to gather feedback on TDD,
identify bottlenecks, and adjust for future sprints.
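
A minimal red-green sketch of the cycle in JUnit 5 (the price-formatting example is an assumption chosen only for brevity): the test is written first and fails, the simplest passing implementation follows, and refactoring then happens under the protection of that test.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.util.Locale;
    import org.junit.jupiter.api.Test;

    // Step 2 (green): the simplest code that makes the test below pass.
    class PriceFormatter {
        static String format(double price) {
            return String.format(Locale.US, "$%.2f", price);
        }
    }

    class PriceFormatterTest {
        // Step 1 (red): written first; it fails until PriceFormatter exists.
        @Test
        void formatsPriceWithCurrencySymbolAndTwoDecimals() {
            assertEquals("$19.99", PriceFormatter.format(19.99));
        }
        // Step 3 (refactor): the implementation can now be improved freely,
        // with the test acting as a safety net against regressions.
    }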

Q6) During testing, your application performs well under normal load but
starts to fail under stress conditions. The team is unsure whether performance testing was
done properly.

1. What is the importance of load and stress testing in ensuring the scalability of the
system?
2. How would you set up a proper performance testing strategy to simulate real-world
traffic and identify bottlenecks?

Load and stress testing play complementary roles in ensuring that the system scales beyond normal operating conditions. Their importance is outlined below:

1. Capacity Assessment: Load testing determines the system's ability to handle expected
user loads, ensuring it scales smoothly under real-world traffic. Scalability demands
precise knowledge of these thresholds.
2. Bottleneck Identification: Stress testing pushes the system beyond its limits, revealing
weaknesses in architecture (e.g., database constraints) that hinder scalability under peak
usage.
3. Resource Utilization Optimization: Both tests analyze how efficiently the system uses
CPU, memory, and network. Identifying inefficiencies helps optimize resources, reducing
waste—a green IT principle.
4. Failure Behavior Understanding: Stress testing exposes how the system behaves under
failure. A scalable system must degrade gracefully without crashing or affecting other
components.
5. Performance Baseline Establishment: These tests create benchmarks, helping predict
and improve scalability when traffic increases or features expand.

Setting Up a Proper Performance Testing Strategy

1. Simulate Real-World Traffic: Use tools like JMeter to generate realistic load scenarios,
including peak times and regional traffic patterns.
2. Define Key Metrics: Focus on response time, throughput, error rates, and resource
utilization. Establish clear targets for acceptable performance under varying loads.
3. Set Up Staging Environments: Mirror the production environment for testing to ensure
results are reliable and reflective of actual usage conditions.
4. Incremental Load Testing: Start with expected traffic, then gradually increase to
simulate growth. For stress tests, exceed limits to find breaking points.
5. Continuous Monitoring and Feedback: Integrate performance testing into CI/CD
pipelines. Use monitoring tools to capture live traffic data and refine tests.
Chapter 6: Exception Handling
Chapter 7: Code Reviews, Version Control, Security & Vulnerability
Q1) The development team uses Git for version control but often faces issues such as
overwritten changes and unclear commit histories.
How would you address these challenges using best practices for version control? Propose a
branching strategy that could improve collaboration and code quality.
Version control, when used efficiently, can boost productivity significantly. Here is how the best
practices of version control can help eradicate the issues being faced:
Meaningful Branches: Use well-defined branches serving a strict purpose. For example, if there
are multiple developers on the team, then each can have their own named branch. Or if the
project is a product-line software, then branches for different versions can be set up. This would
eradicate the overwritten changes problem, because when each branch is responsible for its
intended task, changes are made in an isolated manner.
Well-phrased commit messages: Writing clear commit messages when pushing code to a branch helps other developers understand the contents of the push and make informed decisions. Use a committing convention like Conventional Commits and write "fix: updated the document upload functionality" instead of just "fixed error". This keeps commit histories clear.
Automated Tests: Write test cases for pipelines which validate all code that is pushed. This helps
assess code before it is merged with the larger codebase, and single out any errors which could
corrupt the merged code and cause bigger problems. Automated tests help keep the commit
history clean.
Merge Access Control: Implement access roles to restrict unwarranted pushes to the main
branch. This can significantly reduce overwritten changes by only allowing authorised personnel
to push corrected and validated code to the main branch in a controlled manner.

Branching Strategy:

1. Branch Types: ‘main’ for stable, production-ready code, ‘develop’ for ongoing
development and integration of feature branches, and feature branches for individual
features or bug fixes. Developers work here until changes are complete.
2. Workflow: Developers create a branch from ‘develop’ for their tasks. After completing
and testing locally, the branch is merged into ‘develop’ through a PR, which includes
automated test checks and a code review. Once all features for a release are ready, the
‘develop’ branch is merged into ‘main’.
3. Automation in Branching: Use CI tools like GitHub Actions to automatically run tests
and linting on every PR, ensuring code is clean and functional before merging.
Q2) After implementing several optimizations, the team notices a trade-off between code
readability and performance.
How would you balance performance improvements with maintainability? Provide
recommendations for documenting complex optimizations.
Code readability and performance are two crucially needed qualities in any high-quality
software. In a case where there is a trade-off between the two, careful consideration must be
done so as to not degrade software quality. The following are suggestions to balance
performance and maintainability:
Consider technical debt: Over-optimisation often increases code complexity and reduces readability. The difficulty in understanding the code makes further development tricky and can increase technical debt by forcing later rework to restore understandable logic. For a more proactive approach, always consider the technical debt before implementing a complex optimisation, and set a ‘debt ceiling’ (a predetermined limit of acceptable complexity).
Validate against client requirements: When the software's performance is in question, it is best to refer back to the original client requirements and whether they prioritised performance over the product's scalability. Code readability plays an important role in scaling software. If the originally requested software was a limited-scope, safety-critical system requiring high uptime, then performance can be prioritised over code readability, provided the necessary documentation is maintained.
Using quantitative analysis: Code metrics such as the maintainability index (MI) help quantify how maintainable the code remains after each performance optimisation. The index accounts for the code's cyclomatic complexity, its size, and (in some variants) the comments assisting it. On the commonly used normalised scale, a score above 20 suggests good maintainability, so optimisations can be kept in check by ensuring the score stays above 20.
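
For reference, one widely cited formulation of the index (popularised by Oman and Hagemeister and used, in normalised form, by tools such as Visual Studio; the exact coefficients vary slightly between tools) is:

    MI = 171 - 5.2 * ln(V) - 0.23 * CC - 16.2 * ln(LOC)
    MI_normalised = max(0, 100 * MI / 171)

where V is the Halstead volume, CC the cyclomatic complexity, and LOC the lines of code; the "above 20 is maintainable" guideline refers to the normalised 0-100 scale.
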
To document complex optimisations, the following strategy can be followed:
1. Meaningful reasoning and references: When optimisation strategies are adapted from
existing software, it is best to provide a brief reasoning on its relevance in the current
project, and a reference to its usage in the existing software. This practice should educate
the developers enough to work with the logic.
2. Purpose-driven branches: For complex optimisations that are subject to discussion, a
dedicated branch can be made as a part of version control to isolate its effects. This,
paired with well-phrased commit messages, can alert the developers of the volatility of
the amends.
3. Review meetings: It is always best to discuss any confusing optimisation options with the
entire team so that everyone has a say in the matter, and unbiased opinions reach a
consensus. Any suggestions in the meeting can be documented, to serve as ‘alternates’ to
the suggested logic.
Q3) Profiling tools reveal that a particular function accounts for most of the performance
bottlenecks in an application.
What steps would you take to address the bottleneck? How can iterative profiling ensure
long-term performance improvements?
Performance bottlenecks in any application can prove to be detrimental to its quality. The
following guideline intends to address and resolve bottlenecks in an efficient and proactive
manner:
Identify bottlenecks: A performance bottleneck may be apparent, but its root cause can still be difficult to find. Profilers such as gprof, or the profiling tools built into modern IDEs such as VS Code, can highlight areas of code with diminished performance. Once a bottleneck's root cause is confidently identified, it is easier to proceed with changes.
Test Hypotheses: There is never one true way to resolve a bottleneck; we must try several candidate solutions and compare their effects. For example, in code whose bottleneck stems from using a linked list for searching, we can try trees or hash maps to improve performance. Such a change of data structures should be made in isolation, so that the rest of the code is not negatively affected if the hypothesised solution does not work out.
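
A tiny, hypothetical sketch of testing such a hypothesis in isolation: time the existing lookup against the proposed data structure on the same workload before committing to the change (a dedicated benchmarking harness such as JMH would give far more reliable numbers; this is only illustrative).

    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    public class LookupHypothesis {
        public static void main(String[] args) {
            List<Integer> list = new ArrayList<>();
            Set<Integer> set = new HashSet<>();
            for (int i = 0; i < 100_000; i++) { list.add(i); set.add(i); }

            long t1 = System.nanoTime();
            boolean inList = list.contains(99_999);   // O(n) linear scan
            long t2 = System.nanoTime();
            boolean inSet = set.contains(99_999);     // O(1) expected hash lookup
            long t3 = System.nanoTime();

            System.out.printf("list: %d ns, set: %d ns (%b, %b)%n",
                    t2 - t1, t3 - t2, inList, inSet);
        }
    }
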
Iterative Analysis: Once a solution is in place, we must examine its impact on the rest of the code. The isolation is therefore broken down iteratively, and profiling tests are run across different parts of the code to confirm that no new bottlenecks have formed as a result of the changes.
The above mentioned procedure is a generic workflow followed in bottleneck resolution.
Iterative Analysis, in particular, is an incredibly helpful step which ensures long-term
performance improvement. The following are reasons that justify this:
1. Repetitive Checks: In iterative analysis, profiling tests are run repeatedly to ensure no
new bottlenecks have formed due to an optimisation. This approach double-checks the
code and increases confidence in the performance capabilities of the code.
2. Mitigation of Technical Debt: The iterative tests monitor performance across the code,
and therefore highlights problems before they accumulate into technical debt.
3. Documentation: Each iteration can be documented and minor problems can be noted for
future use.
Q5) A web application was recently exploited through a SQL injection attack, leading to
unauthorized data access.
How would you mitigate injection vulnerabilities in the application? Provide strategies for
securing database interactions.
● Use parameterised queries: Never concatenate user input into SQL strings. Bind user-supplied values as parameters (prepared statements) so the database treats them strictly as data rather than executable SQL; a short sketch follows below.
● Input validation and sanitisation: Validate all user input on the server side against expected types, lengths, and formats, and reject or sanitise anything that does not conform before it reaches the database layer.
● Limit database access: Apply the principle of least privilege by running the application with a database account that has only the permissions it needs, so that even a successful injection cannot read or modify unrelated data.
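
A minimal JDBC sketch of the first point (the table and column names are assumptions): the user-supplied value is bound as a parameter, so it can never be interpreted as SQL.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class UserLookup {
        // Vulnerable version: "SELECT 1 FROM users WHERE email = '" + email + "'"
        // Safe version: the value is bound, never concatenated into the SQL string.
        static boolean userExists(Connection conn, String email) throws SQLException {
            String sql = "SELECT 1 FROM users WHERE email = ?";
            try (PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setString(1, email);
                try (ResultSet rs = stmt.executeQuery()) {
                    return rs.next();
                }
            }
        }
    }
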
Chapter 8: Deployment and CI/CD
Q1) You are part of a team working on a large e-commerce application. The team has been
considering moving from a monolithic to a microservices architecture. However, there are
concerns about the potential complexity and learning curve involved.

What are the key advantages and challenges of switching to a microservices architecture
from a monolithic one? How would you convince the team to make the transition while
maintaining a stable release schedule?

In this scenario, the shift from a monolithic to a microservices architecture could provide several
benefits, but it comes with its own set of challenges. Here’s how I would justify the transition:

1. Scalability: Microservices allow us to scale individual components independently. This becomes especially useful as our application grows and different parts of the system have varying traffic loads. Unlike a monolithic architecture, where the entire application must be scaled, microservices allow us to allocate resources more efficiently, scaling only the services that require it.
2. Flexibility in Deployment: With microservices, we can deploy individual services
without impacting the entire application. This leads to faster releases and more frequent
updates. For example, if we need to make a change to the payment module, we can
deploy just that service instead of redeploying the entire application, reducing risk and
downtime.
3. Fault Isolation: Microservices improve fault isolation. If one service fails, it doesn’t
necessarily take down the entire application. This is particularly valuable in maintaining
high availability in our system. A failure in one microservice will be isolated, allowing
the rest of the application to function normally.
4. Technology Stack Independence: Microservices allow teams to use different
technologies or programming languages for different services based on the needs of the
component. This enables more flexibility, as we could opt for more efficient technologies
suited for specific services without being tied to a single tech stack.

Challenges:

1. Increased Complexity: Microservices come with added complexity in managing multiple services, the communication between them, and service discovery. To mitigate this, we can adopt tools like Kubernetes for orchestrating containers and ensuring proper communication between services. We could also invest in robust logging and monitoring solutions to handle the complexity of tracking issues across services.
2. Learning Curve: Moving to microservices involves a learning curve, particularly in
terms of setting up and maintaining infrastructure. However, we can mitigate this by
training the team and gradually transitioning the system to microservices, starting with
one service and expanding from there.
To convince the team, I would focus on the long-term benefits. While the initial effort of moving
to microservices may seem daunting, the flexibility, scalability, and reduced risk of downtime
during releases will pay off in the future. By scaling services independently and allowing teams
to work on smaller, more manageable components, we can improve productivity and accelerate
the release cycle, which is crucial as the application grows.

Additionally, we can introduce the change gradually, starting with less critical components and
testing the microservices architecture before fully transitioning. This incremental approach
would allow us to keep the system stable while gradually reaping the benefits of microservices.

Q2) The team is preparing to release a new feature and is debating between a blue-green
deployment and a rolling deployment strategy. Some members are concerned about the
cost and effort involved in maintaining two identical environments for blue-green
deployment.

What are the benefits of a blue-green deployment strategy in ensuring zero-downtime releases? How would you justify the added overhead of maintaining two environments to the team?

Blue-green deployment offers several advantages over rolling deployment, particularly for
scenarios requiring high reliability and quick rollbacks:

1. Seamless Rollback: If issues arise with the new release, reverting to the previous version
is as simple as switching environments. Rolling deployment, in contrast, requires
reverting specific instances, which can be time-consuming.
2. Minimized Downtime: Since the new version is deployed to a separate environment, user
experience is uninterrupted. Rolling deployment involves phasing updates, leading to
potential inconsistencies during the process.
3. Production-Like Testing: Blue-green allows rigorous testing in the green environment
before switching, ensuring reliability. Rolling deployment does not offer the same level
of isolation.
4. Stability for High-Traffic Applications: With blue-green, all users switch simultaneously
to a thoroughly validated environment. Rolling deployment may lead to uneven user
experiences during rollout.
5. Simpler Monitoring: It’s easier to monitor a single environment during deployment
compared to multiple rolling phases.

Considering the above, blue-green deployment is more suitable for high-stakes systems requiring
rapid recovery and smooth user experience, making it the recommended choice here.
Q3) You notice that the development team frequently takes shortcuts to meet
deadlines, resulting in an accumulation of technical debt. This has made it difficult to
maintain and extend the software.

What strategies would you recommend to manage and reduce technical debt in the long
term? How would you approach refactoring the existing codebase without disrupting
ongoing development?

1. Incremental Refactoring: Refactor small portions of the codebase during regular development cycles, focusing on modules being actively worked on. This minimizes disruption to ongoing tasks.
2. Prioritize Debt with High Impact: Use tools to identify areas of the codebase with the
most technical debt and focus first on modules that affect system stability or scalability.
3. Establish Code Standards: Enforce coding guidelines and review processes to prevent
further accumulation of debt.
4. Automated Testing: Implement comprehensive automated testing to ensure refactoring
doesn’t introduce new defects.
5. Dedicated Refactoring Sprints: Allocate specific time for addressing critical technical
debt between feature deliveries, ensuring a balance between new development and
maintenance.

To refactor the existing codebase without disrupting ongoing development:

● Branch-Based Development: Use feature branches to isolate refactoring efforts from active feature development.
● Parallel Refactoring: Focus on improving specific modules alongside their functional
updates, ensuring no standalone refactoring disrupts other parts of the project.
● Progressive Integration: Gradually integrate refactored code into the main branch after
rigorous testing to avoid large-scale disruptions.
Chapter 9: Containerisation

Q1) A team is developing a microservices-based application and is considering containerisation to improve deployment consistency. However, some team members argue that containerisation adds unnecessary complexity and prefer traditional virtual machines (VMs).

1. How would you explain the advantages of containerisation over traditional VMs?
2. Propose a strategy to migrate the application to a containerized architecture with
minimal disruption to existing workflows.

In a microservices environment, containerisation offers clear benefits over traditional VMs, even
though VMs can serve their purpose. Here’s how I would explain the advantages of
containerisation:

Lightweight Deployment: Containers are far more lightweight than VMs. Since they share the
host OS kernel, they don’t require a full OS instance like VMs do. This results in smaller image
sizes and much faster startup times, often in seconds compared to minutes for VMs. This allows
faster and more efficient deployment cycles.

Resource Efficiency: Containers consume fewer system resources since they don’t carry the
overhead of an entire OS. This allows more containers to run on the same hardware, leading to
better resource optimization, especially in cloud environments. VMs, on the other hand, require
significant resources to run multiple OS instances.

Modular and Scalable: With microservices, each service can run in its own container, isolating
them while allowing for easier scaling. Containers allow independent scaling and updates for
each microservice without affecting others. This is a key advantage in microservices architecture,
where managing dependencies and version control becomes crucial.

Integration with CI/CD: Containers integrate seamlessly into CI/CD pipelines. With tools like
Docker, you can automate the build, test, and deployment processes, ensuring consistency across
development, staging, and production environments. This improves speed and consistency in
deployments.

Open-Source and Flexibility: Tools like Docker are open-source, widely supported, and free to
use, which makes it easier for the team to adopt without worrying about costly licensing for VM
management solutions. This fosters greater flexibility in adopting cloud-native architectures.

The strategy we can follow:

1. Incremental Transition: Start with containerising services that have the least
dependencies or are already modular. Gradually containerize the more complex services
as the team becomes comfortable with the tools and processes. This ensures minimal
disruption to workflows.
2. Start with Known Dependencies: Begin with microservices that have clear, well-defined
dependencies. This will allow the team to get used to containerisation and avoid the
complexity of dealing with highly coupled services at first.
3. Backups and Rollbacks: Leverage the VM snapshot feature to back up existing
environments during the transition. This ensures that if anything goes wrong during the
migration, you can easily revert to a working state. Ideally, this process can be improved
by using container-native tools like Docker volumes for persistence.
4. Prioritize Critical Services: Focus on containerizing high-traffic or mission-critical
microservices first. This lets us test the scalability and performance of containers in
real-world scenarios before moving on to less critical services.
5. Documentation and Training: Since some team members may find containerisation
complex, providing documentation and guides will help them transition smoothly. This
should include troubleshooting steps and best practices to minimize the learning curve.

Q2) A project manager has tasked the team with deploying a legacy monolithic
application using Docker containers to simplify deployment. However, the developers argue
that containerizing a monolith defeats the purpose of containers.

1. How would you justify containerizing a monolithic application in the short term?
2. Suggest a long-term plan to refactor the monolith into microservices while
leveraging containerization benefits.

Containerizing a legacy monolithic application, while it may seem contrary to the spirit of
containers (which is often associated with microservices), still offers several immediate benefits:

1. Improved Deployment Consistency:
Containerization ensures that the application will run consistently across different environments (development, testing, staging, production). Without containers, the "it works on my machine" problem can persist, making deployments error-prone and tedious. With Docker, you create a predictable environment for the monolithic application, simplifying deployment.
2. Simplified Dependency Management:
Legacy monolithic applications often have numerous dependencies that can be difficult to
manage across different environments. Docker containers encapsulate the application
with all of its dependencies, ensuring that the environment remains the same regardless of
where it's deployed. This reduces the risk of conflicts between development and
production environments.
3. Isolation and Resource Optimization:
Docker containers offer resource isolation, meaning the monolithic application can be run
in its own environment without affecting other services or processes on the host machine.
This helps optimize the usage of resources (like CPU and memory), even if the
application is not broken down into microservices yet.
4. Ease of Migration:
Containerizing the monolith in the short term allows you to start adopting containerized
deployment workflows (e.g., CI/CD pipelines) and infrastructure. This lays the
groundwork for future refactoring, and you can incrementally migrate the application to
microservices while still maintaining operational stability.
5. Portability:
Containerization makes the application portable, meaning it can easily be moved between
different infrastructure providers (on-premise, cloud, etc.). This enables better flexibility
in terms of hosting and scaling without being tied to specific hardware or cloud
configurations.

Long-Term Plan for Refactoring the Monolith into Microservices:

1. Define Microservice Boundaries:
Start by analyzing the monolithic application to identify logical components or domains that can be split into independent services. Look for natural boundaries in the business logic, such as user management, order processing, or inventory, and consider which parts of the application can operate independently.
2. Incremental Refactoring:
Rather than attempting a complete rewrite, adopt an incremental approach to refactoring.
Break the monolith down one service at a time, migrating one feature or module into a
microservice, while ensuring the existing monolith remains functional throughout the
process. Containerize each new microservice as it's refactored, so they can be
independently deployed and scaled.
3. API Gateway for Communication:
As the application is refactored, introduce an API Gateway to manage communication
between the microservices and external clients. This provides a single entry point to the
system and enables centralized routing, authentication, and monitoring, making the
transition smoother.
4. Implement CI/CD for Microservices:
As new microservices are created, establish CI/CD pipelines tailored for each service.
Docker containers can be used to create consistent environments for testing, building, and
deploying each service, facilitating rapid and automated deployments.
5. Decompose the Database:
One of the most challenging aspects of refactoring a monolithic application is the
database. Start by gradually migrating the database from a monolithic structure to a more
distributed model, with each microservice managing its own database. This prevents the
“single point of failure” problem that a single monolithic database can create.
6. Monitor and Optimize:
As the transition to microservices progresses, ensure robust monitoring and logging are in
place for each microservice. Tools like Prometheus, Grafana, or ELK stack can be used
for monitoring performance and identifying bottlenecks. Container orchestration tools
like Kubernetes will also help with managing the scalability and availability of each
microservice.
7. Refactor and Optimize the Infrastructure:
Once the majority of the application has been refactored into microservices, look into
optimizing the container orchestration layer (e.g., Kubernetes) and the networking
infrastructure. This ensures that all microservices are well-coordinated and can scale as
needed.
Chapters 1-3
Q1) If you are the design lead for a ‘Newsletter Subscription’ project and are adamant about using the ‘Strategy’ design pattern while your teammates insist on using the ‘Observer’ design pattern, how will you convince your team otherwise?
If I am a design lead for such a project, and I find the ‘Strategy’ design pattern the most effective
approach, I will make sure to present my case in front of my team in the most unbiased and just
manner, and also consider their opinions on the matter. After careful consideration, I will reach a
definitive solution, honouring everyone’s opinions. Here is how I would defend my case:
● Encapsulation of Strategies: In a newsletter system, we would have multiple types of
subscription options (i.e., monthly, yearly, seasonal), which all are distinct from one
another. The Strategy pattern would honour their differences and implement each
subscription type as an encapsulated entity, effectively separating concerns and upholding
maintainability. On the other hand, the ‘Observer’ pattern, in such a case, would
accumulate all subscription types under the unified ‘Subject’, thus increasing its
overhead. The user demands in the Observer pattern would be handled according to state
changes in the subject, which may not offer the same decoupling as the Strategy pattern.
● System Scalability: Using the Strategy pattern would allow us to add new subscription types (i.e., weekly, bimonthly) without excessive performance overhead, because every strategy simply implements the core ‘strategy interface’, reducing code duplication. The Observer pattern, again, can hamper scalability because of the growing load on the subject, which would have to manage an increasing number of services and synchronise updates across multiple users; considerable care would be needed to ensure the subject is designed to scale efficiently.
● Flexibility: Users of the system would be able to dynamically switch from one
subscription type to another in the Strategy pattern due to the decoupling it offers in
terms of strategy selection and implementation. In the Observer pattern, any user-enabled
changes would need to be managed by the single subject, which would not be able to
handle concurrent requests from multiple users dynamically.
● Reusability of Strategies: Since the subscription types would be considered ‘strategies’
in the Strategy pattern following a set ‘strategy interface’, a significant amount of code
can be saved from duplication, by allowing all strategies to simply add to the logic
defined in the interface. Such a reusability is not found in an observer pattern, where all
control is with a central subject.

Thus, the Strategy pattern would be the most beneficial choice for this type of project. While the Observer pattern has its own merits, they do not serve the newsletter subscription project well, making it an unfit choice here.
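
To ground the comparison, here is a minimal Java sketch of the Strategy pattern for the subscription types discussed above (the class names and pricing rules are illustrative assumptions): each subscription type encapsulates its own logic behind a common interface, and new types can be added without touching existing ones.

    // Common strategy interface for all subscription types.
    interface SubscriptionStrategy {
        double priceFor(int issues);
    }

    class MonthlySubscription implements SubscriptionStrategy {
        public double priceFor(int issues) { return issues * 5.0; }
    }

    class YearlySubscription implements SubscriptionStrategy {
        public double priceFor(int issues) { return issues * 4.0; }   // bulk discount
    }

    // The context delegates to whichever strategy the subscriber selects,
    // and can switch strategies at runtime.
    class Subscriber {
        private SubscriptionStrategy strategy;
        Subscriber(SubscriptionStrategy strategy) { this.strategy = strategy; }
        void switchTo(SubscriptionStrategy newStrategy) { this.strategy = newStrategy; }
        double quote(int issues) { return strategy.priceFor(issues); }
    }
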
Q2) “Using the SOLID principles might hinder ‘Green’ practices”, Justify your argument
either in favour or against the statement.
In my opinion, the use of SOLID principles enforces green practices. Let us consider each
principle and its implications for the green practices.
1. Single Responsibility Principle (SRP): This principle enforces the idea that one class
must have a single task, or single type of task to perform. Properties like maintainability,
readability and abstraction are fulfilled in the code due to SRP, and “code optimisation”
may be considered a green practice, in terms of resource management and reducing
carbon footprint of the system. However, enforcing SRP may also mean additional lines
of code, which would increase the space complexity, and in turn increase the memory
usage of the system - increasing the overall carbon footprint.
2. Open-Closed Principle (OCP): This principle states that new behaviour should be added by extending the code rather than by modifying existing logic. By definition, OCP demands extensibility, which proves beneficial when the number of additions is small and the originally implemented logic is too complex to modify safely. However, for rapidly developing systems, OCP can increase the lines of code used, utilising more memory and increasing the system's energy consumption. The approach can become unsustainable for fast-growing systems, but in stable systems it helps prevent errors and reduces unnecessary rework, supporting green practices by limiting wasteful code changes.
3. Liskov Substitution Principle (LSP): This principle requires that any subclass be substitutable for its superclass wherever the superclass is expected. This benefits systems through code reusability and efficient resource management, which in turn reduces the computational power required and the energy consumed. Substitution is arguably the best use of inheritance in pursuit of green practices. However, a strict focus on enforcing LSP can consume effort and resources that might otherwise be used to implement the same system in simpler ways.
4. Interface Segregation Principle (ISP): This principle segregates functionalities by user
needs, and creates ‘interfaces’ of related properties from code monoliths. This practice
promotes code maintainability, and often helps reduce code duplication by only keeping
relevant functionalities and eradicating all things irrelevant. This reduces the energy
consumption of the system and may also reduce the carbon footprint. However, creating
interfaces in simpler systems may be an added complexity, and an unsustainable practice
due to additional lines of code and excess time utilised.
5. Dependency Inversion Principle (DIP): This principle states that high-level modules should not depend on low-level modules; both should depend on abstractions (a small sketch follows below). By depending on abstractions rather than concrete implementations, the code is decoupled, which can increase system efficiency by reducing contention around shared components and making parts easier to replace. Such a practice can benefit the system's resources, such as memory and power. However, the extra layers of abstraction can add indirection and a small runtime overhead. Still, the modularity and flexibility provided by DIP generally promote sustainable resource use.
Overall, it can be concluded that each of the SOLID principles upholds green practices through
its modern approach to software construction, although in some scenarios the associated
overheads can work against that goal.
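To make the OCP point above concrete, here is a minimal Java sketch of extension over modification. The class names (ReportFormatter, CsvFormatter, ReportPrinter) are hypothetical and purely illustrative, not part of any particular system: a new report format is supported by adding a class, while the existing printer and formatter are left untouched, limiting rework and wasteful code changes.

// A hypothetical illustration of OCP: new behaviour arrives as a new class,
// so existing, tested code is never edited.
interface ReportFormatter {
    String format(String data);
}

class PlainTextFormatter implements ReportFormatter {
    public String format(String data) {
        return data;
    }
}

// Adding CSV support later requires only this new class; ReportPrinter and
// PlainTextFormatter remain untouched.
class CsvFormatter implements ReportFormatter {
    public String format(String data) {
        return data.replace(' ', ',');
    }
}

class ReportPrinter {
    private final ReportFormatter formatter;

    ReportPrinter(ReportFormatter formatter) {
        this.formatter = formatter;
    }

    void print(String data) {
        System.out.println(formatter.format(data));
    }
}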
Q3) Suppose you have developed and deployed a software system. However, after its
deployment, you are unable to maintain it. Identify the issues/problems you overlooked
during construction planning which led to poor maintainability.
Construction planning is a crucial phase in software development and largely determines the
success or failure of the project. If the software lacks maintainability, these are the planning
activities that may have been overlooked:
● Poor choice of construction model: A crucial aspect of construction planning is to
choose between a linear and an iterative construction model. A linear model significantly
limits the ability to revisit past phases and make changes as part of maintenance, whereas
an iterative model supports maintenance due to its corrective nature.
● Lack of documentation: How much documentation will be produced is decided during
construction planning. If it was decided to keep documentation minimal, rework or
optimisation becomes difficult for maintenance developers because they have no guide to
the software.
● Inadequate coding standards: The use of uniform coding standards is also part of
construction planning. If inconsistent or unclear coding standards are used, any future
maintenance developer will have a hard time understanding the system before they can
even begin fixing problems.
● Modular coupling: Deciding the degree of dependency between modules is also done in
this phase. If coupling between modules is left high, both maintainability and scalability
become serious problems (a small sketch of interface-based decoupling is given after this
list).
● Poor test planning: The type and degree of testing is also decided in this phase. If the
software was only manually tested against a few conditions, many latent defects would
go undetected. These defects accumulate and become serious risks to the software.
● No contingency planning: The lack of risk management strategies endangers a system’s
longevity and maintainability. If a risk materialises and there is no plan to handle it, the
system may crash or require an excessive amount of resources to repair.
● Insufficient Training and Knowledge Transfer: If proper training and knowledge
transfer are overlooked during construction planning, future maintenance teams may
struggle to understand the system. This can lead to a steep learning curve for new
developers who need to maintain or update the software.
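As a small, hedged illustration of the modular-coupling point above (all names here are hypothetical): when one module depends on an interface rather than on a concrete class, maintenance developers can repair or replace either side without touching the other.

// A hypothetical sketch of low coupling between modules. OrderService only
// knows the Notifier interface, so the e-mail module can be repaired or
// replaced (for example by an SMS notifier) without modifying OrderService.
interface Notifier {
    void send(String customerId, String message);
}

class EmailNotifier implements Notifier {
    public void send(String customerId, String message) {
        System.out.println("Email to " + customerId + ": " + message);
    }
}

class OrderService {
    private final Notifier notifier;

    OrderService(Notifier notifier) {
        this.notifier = notifier;
    }

    void confirmOrder(String customerId) {
        notifier.send(customerId, "Your order has been confirmed.");
    }
}

Keeping the dependency on the interface is exactly the kind of coupling decision that belongs in construction planning, because it determines how cheaply the system can be changed later.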
Q4) As a software architect for an e-commerce platform, you strongly prefer using the
Factory design pattern for creating product objects, while your team advocates for using
the Singleton pattern for managing product inventory. How would you justify your choice
of the Factory pattern and persuade your team to adopt it?
As a software architect working alongside a team, I would focus on choosing the best-fit design
pattern for the project after presenting my case to my team and considering their opinions in an
unbiased manner. I strongly prefer the Factory pattern, and here is how I would defend that
choice:
● Multi-product nature of the platform: The project is an e-commerce platform, which
maintains multiple instances of a wide range of products. The Factory pattern fits this
requirement directly: an abstract ‘Product’ class is extended for each new type of product
(e.g. ‘Sunglasses’), and multiple instances can exist for each product type, representing
its stock. If the Singleton pattern were used instead, overhead would rise significantly,
because Singleton restricts each class to a single instance, so every distinct product the
website places for sale would need its own class.
● Scalability improvements: An e-commerce platform keeps evolving with its growing
customer base, so the system must scale efficiently without depleting excessive
resources. The Factory pattern lets new product types simply extend the abstract Product
class, minimising code duplication. With the Singleton pattern, by contrast, scaling is a
tedious and resource-intensive task.
● Loose coupling: A significant problem with Singleton-based designs is tight coupling and
interdependence between modules, since every module shares the same single instance.
Managing product inventory through such a globally shared instance would reduce
system performance and quality.
● Encapsulation of logic: Each implementation of the abstract Product class encapsulates
its own relevant logic, preventing that logic from leaking across modules. This is not the
case with a globally shared Singleton, where inventory logic tends to spread across the
codebase (a minimal Factory sketch is given after this answer).
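A minimal Java sketch of the Factory approach argued for above follows. The product types, fields, and the simple-factory layout are assumptions made for illustration, not a prescribed design: new product types are added by writing one subclass and one extra case in the factory, and each type can have as many instances as there is stock.

// A hypothetical simple-factory sketch for an e-commerce catalogue.
abstract class Product {
    private final String sku;
    private final double price;

    Product(String sku, double price) {
        this.sku = sku;
        this.price = price;
    }

    String describe() {
        return getClass().getSimpleName() + " [" + sku + "] priced at " + price;
    }
}

class Sunglasses extends Product {
    Sunglasses(String sku, double price) { super(sku, price); }
}

class Sneakers extends Product {
    Sneakers(String sku, double price) { super(sku, price); }
}

class ProductFactory {
    // Adding a new product type means one new subclass plus one new case here;
    // the client code that calls create() never changes.
    static Product create(String type, String sku, double price) {
        switch (type) {
            case "sunglasses": return new Sunglasses(sku, price);
            case "sneakers":   return new Sneakers(sku, price);
            default: throw new IllegalArgumentException("Unknown product type: " + type);
        }
    }
}

A call such as ProductFactory.create("sunglasses", "SG-101", 49.99) can be made once per unit in stock, which a one-instance-per-class Singleton design could not express.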
Q5) Discuss how implementing design patterns can impact software maintainability. Provide
arguments both for and against the notion that relying too heavily on design patterns could
complicate code and hinder future modifications.
Benefits for maintainability
● Consistency and readability: Implementing a standard for code benefits the current
development team as well as any future maintenance teams. A software system’s
alignment with a design pattern makes it easier to scale and maintain as the
organisation’s needs change, reducing the time and cost spent familiarising new
developers with the software.
● Inherently scalable nature: Design patterns such as Factory and Strategy offer efficient
system scaling with minimal performance degradation. These qualities allow the system
to be maintained over a long period while catering to an increasing customer base.
● Flexibility: Design patterns offer flexibility for future improvements. Patterns such as
Factory, Observer, and Strategy use interfaces as templates for class implementations that
all follow a common contract. When policies change, the concrete classes can be brought
back into line by updating the root abstraction (a minimal Observer sketch is given after
this answer).
Drawbacks for maintainability
● Unsustainable in the long run: For software to remain maintainable, its resources and
design must be sustainable. Applying design patterns to simple or unsuitable projects can
increase complexity, making the code harder to manage as problems escalate with a
ripple effect.
● Overhead: Some design patterns introduce communication and indirection overhead,
which becomes difficult to manage and maintain as the user base grows.
● Choosing the right pattern: There are many design patterns, each with its own set of
advantages. If the wrong pattern is chosen for a project, the result can be highly
unmaintainable. For example, choosing Singleton for a dynamic, growing e-commerce
website would significantly increase its overhead and operational costs.
While design patterns offer clear merits to any software, applying them unmindfully can
complicate the code and hinder future modifications.
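To ground the flexibility point above, here is a minimal, hypothetical Observer sketch in Java; the subject and listener names are invented for illustration. New kinds of subscribers can be added by implementing one small interface, without modifying the subject.

import java.util.ArrayList;
import java.util.List;

// A hypothetical Observer sketch: subscribers implement one small interface,
// so new kinds of listeners can be added without touching the subject.
interface PriceListener {
    void onPriceChange(String item, double newPrice);
}

class PriceFeed {
    private final List<PriceListener> listeners = new ArrayList<>();

    void subscribe(PriceListener listener) {
        listeners.add(listener);
    }

    void updatePrice(String item, double newPrice) {
        for (PriceListener listener : listeners) {
            listener.onPriceChange(item, newPrice);
        }
    }
}

class DashboardWidget implements PriceListener {
    public void onPriceChange(String item, double newPrice) {
        System.out.println("Dashboard: " + item + " is now priced at " + newPrice);
    }
}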
Q6) Imagine you are in charge of a software development team that has just released a
customer relationship management (CRM) application. Post-deployment, you encounter
significant performance issues. Identify potential oversights in the software construction
planning phase that may have contributed to these performance issues.
● Poor construction model choice: Choosing a linear model like Waterfall increases the
likelihood of such a scenario post-deployment because revisiting past phases for
improvements is not easy in this model, so faults that were overlooked carried through to
deployment and were never fixed.
● Unmanaged modular dependency: The degree to which modules depend on one another
is decided in this phase. If strong coupling between modules was overlooked and never
managed, it could be the root cause of the performance issues: tightly coupled modules
perform unnecessary work, are hard to optimise in isolation, and consume extra
computation and energy, reducing performance.
● Inadequate coding standards and control structures: These are also decided in the
construction planning phase. Weak coding standards can lead to wasteful data handling
and higher memory use, and the choice of control structures (for example, long if-else
chains evaluated on every request where a lookup table would do) can slow down hot
paths (see the sketch after this list).
● Inadequate test planning: If extensive test cases were never planned for the system,
performance issues were bound to surface. If load testing was not planned, performance
degradation was never anticipated.
● No contingency planning: Performance degradation often occurs in risk scenarios, where
the failure of certain modules affects the whole system. Such scenarios should have been
covered by a contingency plan.
● Construction for validation: If the entire construction process was planned only ‘for
validation’, the system was not assessed for correctness and soundness during
development, ultimately producing a flawed system that was merely validated against
certain business requirements. Construction for verification, where the product is
checked for correctness as it is being built, would have been a more thorough approach.
● Inefficient resource management: Allocating development time, budget, or hardware to
the wrong activities (for example, to new features instead of performance testing) also
harms the project.
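As a hedged illustration of the control-structure point above (the plan names and discount rules are invented): a long if-else chain evaluated on every request can be replaced with a map lookup, which keeps dispatch roughly constant-time and the rules easy to extend. For a handful of branches the difference is negligible, but on a hot CRM request path it adds up and, just as importantly, keeps the logic maintainable.

import java.util.HashMap;
import java.util.Map;
import java.util.function.ToDoubleFunction;

// A hypothetical dispatch sketch: instead of a long if-else chain checking the
// plan name on every request, the pricing rule is looked up in a map.
class DiscountRules {
    private static final Map<String, ToDoubleFunction<Double>> RULES = new HashMap<>();

    static {
        RULES.put("basic",   total -> total);          // no discount
        RULES.put("premium", total -> total * 0.90);   // 10% off
        RULES.put("partner", total -> total * 0.80);   // 20% off
    }

    static double apply(String plan, double total) {
        // Unknown plans fall back to no discount.
        return RULES.getOrDefault(plan, t -> t).applyAsDouble(total);
    }
}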
Q7) You are tasked with leading a team to develop a weather forecasting application. Your
team proposes using the Observer design pattern for updating users on weather changes,
but you believe the Strategy pattern would be more suitable. What factors would you
consider to convince your team of the merits of your chosen pattern?
● Geographic differences: A weather forecasting application must display conditions for a
wide range of geographic locations. The Strategy pattern is the best fit for this, handling
different forecasting algorithms or weather data processing methods for various regions:
distinct strategies can be defined for fetching and processing weather data specific to
coastal areas, mountains, urban zones, and so on. The Observer pattern, while capable of
notifying observers about weather updates, focuses on broadcasting changes rather than
managing diverse algorithms for different regions, making it less flexible for handling
weather data across multiple locations.
● Adapting notifications to scenarios: The Observer pattern follows a general broadcasting
approach to update its observers. While this is handy for regular weather updates, the app
may need to send customised, more frequent updates to people in regions under flood
warnings. For such a feature, the Strategy pattern allows different notification severities
to be defined as distinct ‘strategies’ that can be switched dynamically for a user based on
the weather conditions (see the sketch after this list).
● Scalability: The Observer pattern can struggle to manage a very large number of clients,
or ‘observers’, attached to a central subject. The Strategy pattern decouples client
interaction from algorithm processing, making it easier to scale processing for a growing
customer base.
● Event-driven nature: The Observer pattern is inherently event-driven, notifying its
observers whenever a relevant event (such as a weather change) occurs. However, the
Strategy pattern provides greater flexibility in how the information is processed and
delivered to users. While the Observer pattern focuses on simply notifying subscribers,
the Strategy pattern excels in cases where different methods of notification or forecast
generation need to be applied depending on the context or user preferences.
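A minimal, hypothetical Java sketch of the Strategy argument above (the strategy and notifier names are assumptions for illustration only): the notification strategy attached to a user can be swapped at runtime, for example when a flood warning is issued for their region.

// A hypothetical Strategy sketch for the weather application.
interface NotificationStrategy {
    void send(String userId, String forecast);
}

class DailySummaryStrategy implements NotificationStrategy {
    public void send(String userId, String forecast) {
        System.out.println("Daily summary for " + userId + ": " + forecast);
    }
}

class FloodAlertStrategy implements NotificationStrategy {
    public void send(String userId, String forecast) {
        System.out.println("URGENT flood alert for " + userId + ": " + forecast);
    }
}

class WeatherNotifier {
    private NotificationStrategy strategy;

    WeatherNotifier(NotificationStrategy strategy) {
        this.strategy = strategy;
    }

    // The strategy can be switched at runtime, e.g. when severe weather is detected.
    void setStrategy(NotificationStrategy strategy) {
        this.strategy = strategy;
    }

    void notifyUser(String userId, String forecast) {
        strategy.send(userId, forecast);
    }
}

A single call such as weatherNotifier.setStrategy(new FloodAlertStrategy()) is all it takes to escalate a user’s notifications when severe weather is detected, without touching the notifier or the other strategies.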