CSIA 485 - Week 2 Technology Briefing

INTRODUCTION

Financial services firms must prepare for two emerging technologies that will profoundly
affect cybersecurity: quantum computing and artificial intelligence (AI). Quantum
computers use quantum bits (qubits) and phenomena such as superposition and
entanglement to solve certain problems exponentially faster than classical computers
can (NIST, 2025). In practical terms, a sufficiently large quantum computer could
factor large integers and compute discrete logarithms in polynomial time using Shor's
algorithm, which threatens current public-key cryptography (NIST, 2025). At the same
time, AI, particularly machine learning and deep learning, enables computers to learn
patterns in data and even generate human-like content (NASA, 2025). In finance, AI is
already used in fraud detection, risk modeling, algorithmic trading, and customer
service. This briefing explains what these technologies are, what security risks they
pose, and what mitigations companies can put in place to protect themselves.

ANALYSIS

Quantum Computing

A quantum computer uses qubits that can represent 0 and 1 simultaneously
(superposition) and become entangled so that operations act on many states at once
(NIST, 2025). For example, two entangled qubits encode four possible states at once,
three qubits eight states, and so on. This means that an algorithm like Shor's could
factor a 2048-bit integer, a task infeasible for classical machines, in practical time
(NIST, 2025). By contrast, today's lab quantum devices have few qubits and high
error rates; none can yet crack real encryption (NIST, 2025). Research continues, but
experts estimate that a "cryptographically relevant quantum computer" capable of
breaking RSA/ECC could arrive in the next 10–30 years (FS-ISAC, Inc, 2023). By
then, many encrypted assets, such as secure web transactions, interbank
communications, and blockchain keys, would be at risk. Symmetric ciphers like AES
remain safer in the short term because Grover's algorithm only halves their effective
key strength; adopting AES-256 is a recommended interim safeguard.
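The arithmetic behind these two claims can be sketched in a few lines; this is illustrative back-of-the-envelope math, not a quantum simulation:

```python
# Sketch: why qubit counts and Grover's algorithm matter for key sizes.
# Illustrative arithmetic only, not a quantum simulation.

def superposition_states(n_qubits: int) -> int:
    """n entangled qubits can encode 2^n basis states at once."""
    return 2 ** n_qubits

def grover_effective_bits(key_bits: int) -> int:
    """Grover's algorithm gives a quadratic speedup on brute-force
    search, roughly halving a symmetric cipher's effective key strength."""
    return key_bits // 2

print(superposition_states(2))      # 4 states for two entangled qubits
print(superposition_states(3))      # 8 states for three
print(grover_effective_bits(128))   # AES-128 -> ~64-bit effective strength
print(grover_effective_bits(256))   # AES-256 -> ~128-bit, still strong
```

This is why AES-256 is the recommended interim safeguard: even after Grover's quadratic speedup, roughly 128 bits of effective strength remain.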

Quantum Security Risks

The biggest concern is encryption breakage. A scalable quantum computer running
Shor's algorithm could decrypt any RSA- or ECC-encrypted data (NIST, 2025).
Financial data harvested today, such as past transaction logs or private keys, could be
decrypted later once quantum machines exist, a "harvest now, decrypt later" strategy
(FS-ISAC, Inc, 2023). Industry analysts warn that adversaries may already be
collecting encrypted traffic in anticipation. This threatens the confidentiality of
customer accounts, trade secrets, and credit histories (FS-ISAC, Inc, 2023). Quantum
also endangers digital signatures: adversaries could forge legal or financial
agreements by deriving a bank's private key. Even blockchain systems (Bitcoin, smart
contracts) rely on quantum-vulnerable public-key cryptography; researchers estimate
that a significant portion of crypto assets could be at risk once sufficient quantum
power arrives (Hosanagar & Werbach, 2018). In short, current asymmetric
cryptography (RSA, ECC, Diffie–Hellman) would become obsolete. Another risk is
randomness failure: many systems depend on random numbers, and if quantum-enabled
attacks undermine pseudo-random number generators, data integrity could be
compromised.

Artificial Intelligence (AI)

Artificial intelligence is software that performs tasks demanding human-like
intelligence, such as learning patterns, recognizing images and speech, and making
decisions (NASA, 2025). Most AI systems today are based on machine learning:
algorithms such as neural networks and decision trees are trained on labeled data
(such as customer transactions, market trends, and financial reports) to predict or
classify new inputs. An important recent development is deep learning and generative
AI (such as GPT models), which can produce lifelike text, audio, or images. In
finance, AI is employed in credit scoring, anti-money laundering, portfolio
optimization, and even customer service via automated chat programs. Examples
include an ML model that learns to identify fraudulent transactions from millions of
historical records, or a chatbot that answers customer questions. These systems
exploit big data and high-speed compute to derive insights, yet they introduce new
vulnerabilities.
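The fraud-detection idea above can be reduced to a minimal sketch: flag transactions that deviate sharply from a customer's history. Real systems train models over many features; the amounts, threshold, and statistics here are illustrative assumptions only.

```python
# Minimal sketch of ML-style fraud screening: flag transactions whose
# amount deviates sharply from a customer's historical pattern.
from statistics import mean, stdev

def fraud_flags(history, new_txns, z_cut=3.0):
    """Return True for each new transaction more than z_cut standard
    deviations above the customer's historical mean amount."""
    mu, sigma = mean(history), stdev(history)
    return [(amt - mu) / sigma > z_cut for amt in new_txns]

# Hypothetical card activity for one customer:
history = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0]
print(fraud_flags(history, [50.0, 900.0]))  # [False, True]
```

A production model would combine many such signals (merchant, geography, timing) and learn the decision boundary from labeled data rather than a fixed z-score.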

AI Security and Privacy Risks

AI both enables new attacks and creates new weaknesses of its own. First,
adversaries use AI to mount more powerful attacks. Regulators warn of “AI-enabled
social engineering”: attackers use AI to craft highly personalized phishing emails or
deepfake voices/videos that convincingly impersonate executives (Department of
Financial Services, 2024). For example, an attacker might feed a company directory
into a large language model to generate individualized phishing messages, or use
deepfake audio of a CEO’s voice to trick a bank employee into authorizing a
fraudulent fund transfer (Department of Financial Services, 2024). AI also accelerates
mundane hacking tasks: malware authors can use AI to scan networks for
vulnerabilities, or to mutate code such that traditional signature-based defenses fail.

Second, AI systems themselves can be attacked. Machine-learning models are
vulnerable to adversarial ML. For instance, an attacker could inject poisoned data into
a bank's training set, causing the AI fraud detector to misclassify future fraud (Beck,
2025). Skilled adversaries may also perform model extraction: by repeatedly querying
a trading model with chosen inputs, they can reconstruct a proprietary algorithm or its
model weights (Beck, 2025). Third, AI systems gather and process vast quantities of
sensitive data. Breaches of AI training data could leak customer PII, trading
strategies, or even biometric data. NYDFS notes that stolen biometrics, such as face
images used for authentication, could enable spoofing attacks and more convincing
deepfakes (Department of Financial Services, 2024).
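Data poisoning can be demonstrated on a toy detector. The sketch below uses an invented one-feature nearest-centroid classifier over transaction amounts; the data and labels are hypothetical, chosen only to show how mislabeled training points shift the decision boundary:

```python
# Sketch of training-data poisoning against a toy fraud detector:
# a nearest-centroid classifier on a single feature (transaction amount).
from statistics import mean

def train_centroids(data):
    """data: list of (amount, label); returns (legit_centroid, fraud_centroid)."""
    legit = [a for a, y in data if y == "legit"]
    fraud = [a for a, y in data if y == "fraud"]
    return mean(legit), mean(fraud)

def classify(amount, legit_c, fraud_c):
    """Assign the label whose centroid is closer to the amount."""
    return "fraud" if abs(amount - fraud_c) < abs(amount - legit_c) else "legit"

clean = [(20, "legit"), (40, "legit"), (60, "legit"),
         (900, "fraud"), (1000, "fraud"), (1100, "fraud")]
# Attacker slips mid-size amounts mislabeled as legitimate into training:
poisoned = clean + [(780, "legit"), (800, "legit"), (820, "legit")]

lc, fc = train_centroids(clean)
lc_p, fc_p = train_centroids(poisoned)
print(classify(700, lc, fc))      # fraud -> clean model flags it
print(classify(700, lc_p, fc_p))  # legit -> poisoned model waves it through
```

Three mislabeled points are enough to drag the "legitimate" centroid toward fraudulent amounts, which is why integrity controls on training data matter as much as controls on the model itself.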

Lastly, AI introduces problems of algorithmic bias and explainability. When training
data capture historical bias, such as discriminatory lending decisions, the AI can
reproduce those biases, exposing the bank to legal and reputational risk. In addition,
complex models are often opaque and treated as black boxes; regulators (FINRA
among them) emphasize the value of human-interpretable models in finance (FINRA,
2025). Overall, AI introduces new attack vectors (deepfakes, data poisoning) and new
areas of risk (privacy, bias) that traditional IT controls were not designed for.
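A first-pass bias check is straightforward to automate. The sketch below compares approval rates across two applicant groups and flags a large gap; the 80% threshold is borrowed from the US "four-fifths rule" for illustration, and the decision lists are invented:

```python
# Sketch of a simple fairness screen: compare approval rates across two
# applicant groups and flag a disparity. Threshold and data are illustrative.

def approval_rate(decisions):
    """decisions: list of 1 (approved) / 0 (denied)."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b, threshold=0.8):
    """True if the lower group's rate falls below threshold * higher rate."""
    ra, rb = approval_rate(group_a), approval_rate(group_b)
    return min(ra, rb) < threshold * max(ra, rb)

group_a = [1, 1, 1, 0, 1]  # 80% approved (hypothetical)
group_b = [1, 0, 0, 0, 1]  # 40% approved (hypothetical)
print(disparate_impact(group_a, group_b))  # True -> escalate for model review
```

A flag here does not prove discrimination, but it is the kind of automated, explainable check regulators expect a model-governance program to run routinely.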

MITIGATION AND RECOMMENDATIONS

Banks must address each threat systematically. For quantum risks, the key is crypto
agility. Companies should immediately inventory all cryptographic assets and data
needing long-term confidentiality (FS-ISAC, Inc, 2023). They should begin migrating
to post-quantum cryptography (PQC): NIST has standardized quantum-safe
algorithms to replace RSA/ECC. As FS-ISAC advises, remediation "will require
companies to migrate to PQC" and to plan phased rollouts of new ciphers (FS-ISAC,
Inc, 2023). Banks should increase key lengths and refresh critical keys frequently as
interim measures (for example, use AES-256 and shorten certificate lifetimes). It is also prudent to test
emerging technologies like quantum key distribution (QKD) for ultra-sensitive
channels (Whiting, 2025). Importantly, firms should work with vendors and standards
bodies: require cloud and crypto-service partners to support quantum-safe algorithms,
and participate in industry initiatives (NIST PQC, bank consortia) to share best
practices. Regularly update risk assessments to account for advancing quantum R&D,
for example, assume a window of 15–20 years to upgrade mission-critical systems.
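The inventory step recommended above can be sketched as a simple triage pass over recorded cryptographic assets. The asset names and data shape are invented; a real inventory would be fed by network scanners, certificate stores, and CMDB data:

```python
# Sketch of a cryptographic-asset inventory triage: classify each recorded
# key or protocol as quantum-vulnerable or not. Entries are hypothetical.

QUANTUM_VULNERABLE = {"RSA", "ECDSA", "ECDH", "DH", "DSA"}

def triage(assets):
    """assets: list of (name, algorithm). Returns names needing PQC migration."""
    return [name for name, alg in assets if alg in QUANTUM_VULNERABLE]

inventory = [
    ("web-tls-cert", "RSA"),      # hypothetical asset entries
    ("interbank-vpn", "ECDH"),
    ("data-at-rest", "AES-256"),  # symmetric: safer interim option
    ("code-signing", "ECDSA"),
]
print(triage(inventory))  # ['web-tls-cert', 'interbank-vpn', 'code-signing']
```

The output is the migration worklist: every asymmetric asset found becomes a candidate for a phased PQC rollout, while symmetric assets like AES-256 can wait.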

For AI risks, the strategy is governance and defense-in-depth. First, implement strong
model risk management (as FINRA recommends) for all AI/ML systems (FINRA,
2025). Maintain an inventory of models, record training-data sources, and keep
humans in the loop for reviewing high-impact AI decisions. Conduct comprehensive
testing: stress-test models under varied conditions and apply hardening strategies,
such as adversarial training or data sanitization, that can enhance resilience (Beck,
2025). Secure AI data pipelines: restrict training-data access to authorized personnel,
protect against tampering by hashing or watermarking training data, and encrypt
sensitive data at rest. Strengthen authentication and network protections around AI
systems: for example, implement multi-factor authentication (not only voice/SMS) to
prevent deepfake-enabled account takeover (Department of Financial Services, 2024).
Second, respond to AI-enabled attacks. Train staff to recognize AI-driven threats
(such as deepfake messages), and introduce confirmation procedures (callbacks or
out-of-band confirmation for large transactions). Deploy defensive AI as well: for
example, use AI-driven monitoring of transactions or network traffic to spot
suspicious activity.
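The out-of-band confirmation procedure can be expressed as a small routing rule: transfers above a threshold are held until confirmed on a second channel. The threshold and status strings below are illustrative assumptions, not a real payment API:

```python
# Sketch of an out-of-band confirmation policy for large transfers:
# requests above a threshold are held until confirmed on a second channel,
# blunting deepfake voice/email authorization fraud. Values are illustrative.

def route_transfer(amount: float, oob_confirmed: bool,
                   threshold: float = 10_000.0) -> str:
    """Approve small transfers; hold large ones pending callback."""
    if amount <= threshold:
        return "approved"
    return "approved" if oob_confirmed else "held-for-callback"

print(route_transfer(2_500.0, oob_confirmed=False))   # approved
print(route_transfer(50_000.0, oob_confirmed=False))  # held-for-callback
print(route_transfer(50_000.0, oob_confirmed=True))   # approved
```

The key design point is that the confirmation signal must arrive over a channel the attacker does not control, so a convincing deepfake call alone cannot release the funds.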

Third, manage organizational and regulatory risk. Bank policies must be revised to
address AI and quantum threats explicitly. Form an AI security committee and
educate executives on these issues (Department of Financial Services, 2024). For
example, include security-audit rights and performance benchmarks for vendor AI
models in any AI vendor contracts. Work with regulators: NYDFS, EBA, and
industry consortia have already published guidance on AI and post-quantum security.
Participating in cybersecurity information sharing and adhering to supervisory
requirements will keep institutions aligned with emerging best practices.

SUMMARY

Quantum computing and AI will transform finance and bring significant security
challenges with them. Quantum computers may eventually break current public-key
encryption, so banks must move toward crypto-agility and quantum-safe algorithms
(FS-ISAC, Inc, 2023). AI introduces new risks, including realistic deepfake fraud,
adversarial model attacks, and data privacy concerns, which demand new model
governance and data controls. A risk-based approach is essential: secure sensitive
data before it can be harvested, vet AI systems throughout their lifecycle, and
maintain executive-level oversight. By proactively assessing their cryptography and
AI policies and engaging regulators and industry participants, financial firms can
enjoy the advantages of these technologies without compromising security and trust.

References

Beck, R. (2025, February 27). ISACA NOW Blog 2025 Financial services under
threat by adversarial AI. ISACA. [Link]
trends/isaca-now-blog/2025/financial-services-under-threat-by-adversarial-ai
Department of Financial Services. (2024, October 16). Industry Letter - Cybersecurity
Risks Arising from Artificial Intelligence and Strategies to Combat Related
Risks. [Link]
cyber-risks-ai-and-strategies-combat-related-risks
FINRA. (2025). Key challenges and regulatory considerations.
[Link]
intelligence-in-the-securities-industry/key-challenges
FS-ISAC, Inc. (2022). Post-Quantum Cryptography (PQC) Working Group - Risk
Model Technical Paper. FS-ISAC, Inc.
[Link]
Hosanagar, K., & Werbach, K. (2018, November 16). How the Blockchain Will
Impact the Financial Sector. Knowledge at Wharton.
[Link]
financial-sector/#:~:text=The%20blockchain%2C%20a%20form%20of,of
%20operations%20and%20other%20benefits.
NASA. (2025). What is Artificial Intelligence? - NASA. [Link]
is-artificial-intelligence/
NIST. (2025, March 24). Quantum Computing explained | NIST.
[Link]
explained
Whiting, K. (2025, July 21). Quantum leaps: 3 ways banks can harness next-gen
technologies for financial services. World Economic Forum.
[Link]
detection-risk-forecasting-financial-services/