Project PRISM Architecture

High-level AWS-hosted microservices deployment for Project PRISM (Prioritized Resource & Intelligent Subtask Management). Clients (Associates, Scrum Masters, Resource Managers) use a web UI (e.g. a Flask front-end) to submit tasks and view dashboards. All client traffic hits an API Gateway, which handles OAuth2/JWT authentication, RBAC authorization, rate limiting, and request routing [1][2]. The gateway forwards calls to the underlying microservices (Application Layer) over HTTPS. Each microservice (Task Breakdown, Assignment, Timesheet, iEvolve Integration, Dashboard) runs in its own Docker container to ensure consistency and horizontal scalability [3][1].

Data flows through the system as follows: the Task Breakdown service calls an AI NLP engine to decompose high-level work into actionable subtasks (leveraging GPT-4 or an open LLM) [4]; the Assignment Engine then matches each subtask to an associate by querying a skill embeddings store (a vector database) for the closest skill–task semantic similarity [5]; the Timesheet service aggregates completed hours and pushes verified entries into the corporate timesheet system; and the iEvolve Integration service compares task skills with employee training records to recommend courses and fetch completion data. Meanwhile, the Dashboard service collates status (task progress, workload, skill matrices) and learning insights for managers.

Persisted data is split by use case: structured metadata (tasks, users, timesheets) lives in a relational store (e.g. MySQL), while high-dimensional skill and task embeddings live in a vector DB (e.g. Chroma or FAISS) for semantic search [5][6]. Raw logs and JSON payloads are saved in object storage (e.g. AWS S3).

Client Layer
The client tier is a web-based UI (for Associates, Scrum Masters, Resource Managers) built with a
lightweight framework (Flask, Django, etc.). End-users log in and interact via browsers or mobile front-ends.
The UI can be hosted on a CDN (e.g. AWS CloudFront) or load-balanced web servers for performance. All
client requests are authenticated (OAuth2/OIDC) and then flow into the API Gateway [1]. Typical UI features
include task submission forms, assignment listings, timesheet entry pages, and live dashboards (charts,
alerts).
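
For illustration, here is a minimal sketch of a task-submission endpoint in the kind of Flask front-end described above. The route, form fields, and gateway URL are assumptions, not part of the documented design:

```python
# Minimal Flask task-submission sketch; the route, form fields, and the
# gateway URL below are illustrative assumptions.
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
GATEWAY_URL = "https://api.prism.example.com"  # placeholder gateway address

@app.route("/tasks", methods=["POST"])
def submit_task():
    """Forward a high-level task from the UI to the API Gateway."""
    payload = {
        "title": request.form["title"],
        "description": request.form["description"],
    }
    # Pass the user's bearer token through so the gateway can authorize.
    token = request.headers.get("Authorization", "")
    resp = requests.post(f"{GATEWAY_URL}/tasks", json=payload,
                         headers={"Authorization": token}, timeout=10)
    return jsonify(resp.json()), resp.status_code

if __name__ == "__main__":
    app.run(debug=True)
```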

API Gateway
All client traffic goes through a centralized API Gateway, which is the entry point to the microservices ecosystem [1]. It validates OAuth 2.0/JWT tokens and enforces role-based access control (RBAC), ensuring that only authorized roles (e.g. RM, SM, Associate) can call certain endpoints [2][1]. It also provides rate limiting, TLS termination (HTTPS), and request logging. The gateway routes validated requests to the appropriate backend service (e.g. Task Breakdown or Assignment). Internally, it may use a service mesh or proxies (Envoy, Nginx) for east-west traffic. In summary, the API Gateway abstracts service complexity from clients and handles cross-cutting concerns (auth, logging, load balancing) [1].
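
As a hedged sketch of the gateway behavior described above, the fragment below validates a JWT with PyJWT, applies a role check, and resolves a backend route. SECRET, ROUTE_TABLE, and the claim names are hypothetical placeholders:

```python
# Minimal gateway-side JWT validation and routing sketch (illustrative only).
# Assumes PyJWT (pip install pyjwt); SECRET, ROUTE_TABLE, and the role/claim
# names are hypothetical placeholders, not part of the PRISM design.
import jwt

SECRET = "replace-with-gateway-signing-key"
ROUTE_TABLE = {
    "/tasks": "http://task-breakdown.internal",
    "/assignments": "http://assignment-engine.internal",
    "/timesheets": "http://timesheet.internal",
}

def authorize_and_route(token: str, path: str, required_roles: set[str]) -> str:
    """Validate the JWT, enforce RBAC, and return the backend URL for this path."""
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])  # raises on bad/expired token
    if claims.get("role") not in required_roles:
        raise PermissionError(f"role {claims.get('role')!r} may not call {path}")
    prefix = "/" + path.lstrip("/").split("/", 1)[0]
    return ROUTE_TABLE[prefix] + path

# e.g. authorize_and_route(token, "/tasks/123", {"ScrumMaster", "ResourceManager"})
```

In production this logic typically lives in the gateway product itself (or an Envoy/Nginx filter) rather than hand-rolled code; the sketch only makes the auth-then-route sequence concrete.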

Application Layer (Microservices)
Each core function is implemented as an independent microservice, containerized with Docker and deployed (e.g. on AWS ECS/EKS or Kubernetes) [3][7]. These services communicate via REST/gRPC and work within bounded domains. Key services include:

• Task Breakdown Service: Accepts high-level project tasks and decomposes them into detailed subtasks. It calls an AI NLP engine (GPT-4 or a similar LLM) to interpret the task and generate subtasks hierarchically [4]. For example, “Implement feature X” might be broken into design, development, and testing subtasks. This service also logs the decomposition output for auditing.

• Assignment Engine: Matches each subtask to the best-suited associate. It uses a Skill Matching Model based on ML embeddings: tasks and user skill profiles are encoded into vectors and compared semantically in a vector database [5]. The engine also factors in current workload, availability, and project priorities to distribute tasks fairly. (In essence, this is a constrained matching problem that could use optimization or ML; storing embeddings in a vector DB enables fast nearest-neighbor lookup by meaning [5].)

• Timesheet Service: Collects completed subtask hours from associates, verifies them, and pushes hours to TCS Timesheet via API calls. It supports partial-completion credit: if a task is partially done in a week, the service can split hours accordingly (see the hour-splitting sketch after this list). This service may queue entries (e.g. via SNS) and ensure eventual consistency with the corporate timesheet system.

• iEvolve Integration Service: Interfaces with TCS’s iEvolve learning platform. It identifies skill gaps
(task skills vs. associate skills) and recommends courses. It also fetches course completion data and
updates the user’s skill profile. (For example, if a developer lacks a required skill for a task, it will
trigger a recommendation in the dashboard.)

• Dashboard Service: Aggregates data across the system to provide visibility. It serves real-time
dashboards for managers: showing task progress, team workload, skill matrices, and learning
metrics. It may use a monitoring stack (Prometheus/Grafana or ELK) to visualize logs and
performance. It also implements optional features like gamification (points, leaderboards for timely
task completion) and predictive analytics (e.g. forecasting future workloads based on current trends).
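
To make the partial-completion credit in the Timesheet Service concrete (as referenced in that bullet), here is a minimal sketch of splitting a subtask's estimated hours by cumulative completion percentage; the field names are illustrative assumptions:

```python
# Hypothetical partial-completion hour splitting for the Timesheet service.
# The inputs (estimated_hours, cumulative percentages) are assumed fields.
def weekly_hours(estimated_hours: float, prev_pct: float, curr_pct: float) -> float:
    """Credit only the portion of work completed this week.

    prev_pct/curr_pct are cumulative completion percentages (0-100):
    a task 40% done last week and 70% done now earns 30% of the estimate.
    """
    if not 0 <= prev_pct <= curr_pct <= 100:
        raise ValueError("percentages must be cumulative and within 0-100")
    return estimated_hours * (curr_pct - prev_pct) / 100.0

# e.g. weekly_hours(20.0, prev_pct=40, curr_pct=70) -> 6.0 hours this week
```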

Each service is independently deployed and can be scaled horizontally [7]. They use lightweight internal APIs and may communicate through message queues for events (e.g. task-created or timesheet-approved events can be published/subscribed for loose coupling). The architecture supports polyglot persistence: each service can use the database type best suited to its needs [6].
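
A hedged sketch of the event publication just described, using boto3's SNS client; the topic ARN and event schema are assumptions:

```python
# Illustrative event publication via AWS SNS (boto3); the topic ARN and
# event schema are assumptions, not part of the documented design.
import json
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:ap-south-1:123456789012:prism-events"  # placeholder

def publish_event(event_type: str, payload: dict) -> None:
    """Publish a domain event (e.g. 'task-created') for loosely coupled consumers."""
    sns.publish(
        TopicArn=TOPIC_ARN,
        Message=json.dumps({"type": event_type, "data": payload}),
        MessageAttributes={
            "eventType": {"DataType": "String", "StringValue": event_type}
        },
    )

# e.g. publish_event("task-created", {"task_id": "T-42", "project": "PRISM"})
```

Subscribers (Dashboard, Timesheet) can then filter on the eventType attribute instead of polling each service.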

AI Services
A separate AI layer provides ML models that the microservices call as needed. Key components include:
- NLP Task Decomposition Engine: A generative AI service (e.g. running GPT-4 or an open-source LLM) that takes task descriptions and outputs subtasks. This implements a hierarchical task network (HTN) decomposition [4]. Using GPT-4 or a similar model ensures high-quality, context-aware decomposition; open models can be substituted for on-premises requirements.
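
A minimal sketch of the decomposition call, using the OpenAI Python SDK as one possible backend; the prompt wording and JSON output contract are assumptions, and an open model could be swapped in behind the same function:

```python
# Illustrative LLM-backed task decomposition; the prompt and output schema
# are assumptions, and any chat-capable model could be substituted.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def decompose_task(task_description: str) -> list[str]:
    """Ask the model for a flat list of actionable subtasks as JSON."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Break the task into 3-8 actionable subtasks. "
                        "Reply with a JSON array of strings only."},
            {"role": "user", "content": task_description},
        ],
    )
    return json.loads(response.choices[0].message.content)

# e.g. decompose_task("Implement feature X") ->
#   ["Design feature X", "Develop feature X", "Test feature X", ...]
```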

- Skill Matching Model: A machine learning model (or embedding lookup) that scores how well an associate’s skills match a given subtask. Skills and tasks are embedded into vectors (via an embedding model), and similarities are computed via a vector DB. The vector database (ChromaDB, FAISS, etc.) stores these embeddings for fast ANN search [5]. This allows semantic matching beyond exact keyword match.
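
A hedged sketch of that lookup with chromadb (FAISS would work analogously); the collection name, documents, and metadata are assumptions:

```python
# Illustrative semantic skill matching with ChromaDB; the collection and
# field names are assumptions. Chroma's default embedding model is used.
import chromadb

client = chromadb.Client()
skills = client.get_or_create_collection("associate_skills")

# Index each associate's skill profile (normally done at onboarding/update).
skills.add(
    ids=["emp-001", "emp-002"],
    documents=["Python, Flask, REST API design", "Java, Spring, SQL tuning"],
    metadatas=[{"name": "A. Rao"}, {"name": "S. Iyer"}],
)

def best_matches(subtask_description: str, k: int = 2) -> list[str]:
    """Return ids of the k associates whose skills are semantically closest."""
    result = skills.query(query_texts=[subtask_description], n_results=k)
    return result["ids"][0]

# e.g. best_matches("Build a Flask endpoint for task submission") -> ["emp-001", ...]
```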

AI model outputs (e.g. decomposed tasks, match scores) are exposed via APIs to the microservices. The
models are retrained periodically using historical data (completion rates, performance) to improve accuracy.
All model endpoints are also secured and logged.

Data Layer
Project PRISM uses polyglot persistence [6][5]:

• MySQL (Relational DB): Stores structured metadata such as user profiles, task records, project
assignments, and timesheet logs. Traditional relational schema is used for CRUD operations and
ACID compliance (e.g. updating remaining task hours).
• Vector Database (ChromaDB/FAISS): Stores high-dimensional embeddings of skills, tasks, and possibly knowledge base documents. This enables semantic search: when the Assignment Engine needs to match skills, it queries the vector DB for nearest neighbors to a given task embedding [5].
• Object Storage (AWS S3 or equivalent): Holds unstructured data such as raw logs, AI model inputs/outputs (JSON), and archived reports. For example, each task decomposition request and response can be logged to S3 for auditing or model retraining (a sketch follows this list). Large files (e.g. timesheet CSV uploads) are also stored here.
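
A sketch of the S3 audit logging referenced in the object storage bullet, using boto3; the bucket name and key layout are assumptions:

```python
# Illustrative S3 audit logging for AI requests/responses; the bucket name
# and key layout are assumptions, not part of the documented design.
import json
import datetime
import boto3

s3 = boto3.client("s3")
BUCKET = "prism-audit-logs"  # placeholder

def archive_decomposition(task_id: str, request: dict, response: dict) -> str:
    """Write one decomposition round-trip to S3 and return the object key."""
    ts = datetime.datetime.now(datetime.timezone.utc).strftime("%Y/%m/%d/%H%M%S")
    key = f"decompositions/{ts}-{task_id}.json"
    s3.put_object(
        Bucket=BUCKET,
        Key=key,
        Body=json.dumps({"request": request, "response": response}),
        ContentType="application/json",
    )
    return key
```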

This separation ensures each data store is optimized for its workload. As Azure guidance notes, each microservice owning its data improves scalability and autonomy [6]. Regular backups and read replicas can be configured for high availability.

Integration Layer
Project PRISM connects to several external systems via APIs:
- TCS Timesheet: An API endpoint (SOAP/REST) that allows pushing and retrieving timesheet entries. The
Timesheet Service securely authenticates to this API and updates hours.
- SWON (System Work Order Number): Integration with TCS’s project/allocation system to fetch project
codes, effort allocations, and billable information. This ensures tasks align with active projects.
- iEvolve: APIs to query available learning courses and report completions. Used by the iEvolve service to
recommend training.
- Optional Endpoints: Hooks for tools like Jira (for IT ticket integration), SAP (for enterprise resource planning), and ServiceNow (for incident tracking) can be added via RESTful webhooks. For example, high-level tasks could optionally be mirrored as Jira issues.

Each integration is encapsulated in its own service module to decouple external dependencies. API keys and
credentials for these systems are managed securely (e.g. AWS Secrets Manager). Integration errors and
retries are centrally logged and monitored.
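
A minimal sketch of the credential handling just described, using boto3's Secrets Manager client; the secret name and JSON layout are placeholders:

```python
# Illustrative credential retrieval from AWS Secrets Manager; the secret
# name and JSON layout are placeholders, not part of the documented design.
import json
import boto3

secrets = boto3.client("secretsmanager")

def timesheet_api_credentials() -> dict:
    """Fetch the TCS Timesheet API credentials at call time, never hard-coded."""
    value = secrets.get_secret_value(SecretId="prism/timesheet-api")
    return json.loads(value["SecretString"])
```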

Security Layer
Security is enforced at multiple layers [1][2]:

• Authentication & RBAC: OAuth 2.0 (with OpenID Connect) is used for user login. Every request carries a JWT token. The API Gateway and each service enforce RBAC, ensuring that only users with the correct role (Associate, ScrumMaster, ResourceManager) can call certain APIs [2][1]; a role-check sketch follows this list. For example, only an RM can reassign tasks or override matches.
• Encrypted APIs: All traffic (client↔gateway and service↔service) uses TLS. Intra-service calls within
the VPC are also encrypted. Sensitive fields (e.g. personal data) are masked or encrypted at rest to
comply with GDPR/DPDP.
• Audit Logging: Every change (task creation, assignment, hours entry) is logged in an audit trail
(immutable log store). The system logs user actions and system events to a centralized logging
service for compliance and forensic analysis.
• Network Security: Services run in private subnets with security groups limiting access. The API
Gateway is the only public endpoint. Optionally a web application firewall (WAF) can protect against
common web exploits.
• Data Governance: Personally identifiable information (PII) is redacted where possible. Data access is
logged to satisfy data privacy regulations.
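
The role-check sketch referenced in the RBAC bullet above: a hypothetical service-side decorator, assuming the gateway has already validated the JWT and forwarded its claims (decorator and claim names are illustrative):

```python
# Hypothetical service-side RBAC decorator; assumes the gateway has already
# validated the JWT and forwarded its claims. All names are illustrative.
from functools import wraps

def require_roles(*allowed: str):
    """Reject the call unless the requester's role is in the allowed set."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(claims: dict, *args, **kwargs):
            if claims.get("role") not in allowed:
                raise PermissionError(f"role {claims.get('role')!r} not permitted")
            return fn(claims, *args, **kwargs)
        return wrapper
    return decorator

@require_roles("ResourceManager")
def reassign_task(claims: dict, task_id: str, new_assignee: str) -> None:
    ...  # only an RM reaches this point
```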

Together, these measures create a defense-in-depth architecture that secures both user data and service
operations.

Deployment and Scalability
All services are containerized (e.g. using Docker) and orchestrated in a cloud environment (AWS is shown here, but a hybrid cloud or on-prem Kubernetes cluster could be used) [3][7]. For example:
- Compute: Containers run on AWS ECS/EKS or similar, behind an Application Load Balancer (ALB). Auto-scaling groups ensure services scale based on CPU or queue length.
- Continuous Deployment: Services are built into Docker images and deployed via a CI/CD pipeline (e.g. CodePipeline or Jenkins), enabling rapid updates.
- Infrastructure: Relational databases use AWS RDS (with read replicas), the vector DB could be a managed FAISS (or self-hosted on EC2), and S3 is used for storage.
- Observability: Central monitoring (CloudWatch or Prometheus/Grafana) collects metrics from each container. Alerts are configured for error rates or latency spikes.

This containerized setup allows each microservice to be scaled independently for peak load [7]. It also supports blue/green deployments for zero-downtime updates.
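
A hedged sketch of wiring auto-scaling for one ECS service through boto3's Application Auto Scaling API; the cluster/service names, capacities, and CPU target are placeholder assumptions:

```python
# Illustrative ECS service auto-scaling via boto3 Application Auto Scaling;
# resource names, capacities, and the CPU target are placeholder assumptions.
import boto3

autoscaling = boto3.client("application-autoscaling")
RESOURCE_ID = "service/prism-cluster/assignment-engine"  # placeholder

autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=RESOURCE_ID,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=10,
)

autoscaling.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=RESOURCE_ID,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,  # keep average CPU near 60%
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)
```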

Optional Features
Beyond the core flows, PRISM can include:
- Live Dashboards & Analytics: Real-time charts of throughput, timesheet compliance, and predicted workload. Could leverage a BI tool (e.g. QuickSight, Grafana) updated from the data layer.
- Gamification Engine: Award points or badges for on-time task completion or training. Leaderboards and rewards (e.g. team recognition) can boost engagement.

- Predictive Analytics: Use historical data to forecast resource shortfalls or task delays. For example, a
model could predict which projects will run late, enabling proactive resource shifts.
- Supervisor Override Panel: A UI for Scrum Masters/Managers to manually reassign tasks, split hours, or
override AI decisions when needed. All overrides are logged with justification.

These features plug into the existing architecture (e.g. dashboards read from the same MySQL database or a data warehouse; gamification uses the data layer and a logic service) and are optional enhancements that improve the user experience.

Sources: The above design follows microservices best practices: splitting by business capability [7], using an API gateway for auth/routing [1][2], containerizing services with Docker [3], and leveraging AI/ML (LLMs, embeddings) for intelligent task processing [4][5]. Data storage is chosen per service (SQL, vector DB, object store) [6], and all external integrations are clearly defined by APIs. This modular, cloud-native architecture ensures PRISM is scalable, secure, and maintainable.

[1][6] Microservices Architecture Style - Azure Architecture Center | Microsoft Learn
https://learn.microsoft.com/en-us/azure/architecture/guide/architecture-styles/microservices

[2] ArchView: API Gateway with Microservices | Blueprints Diagram
https://www.swiftorial.com/archview/blueprints/api-gateway-microservices/

[3] Utilizing Docker for Microservices Architecture | AppMaster
https://appmaster.io/blog/docker-microservices-architecture

[4] GitHub - marawan1805/LLM-Task-Decomposition: HTN Planner using GPT-4
https://github.com/marawan1805/LLM-Task-Decomposition

[5] Understanding Vector Databases and Embedding Models for AI-Powered Search
https://www.brownmind.com/post/vector-database-and-embedding-models/

[7] Implementing Microservices on AWS
https://docs.aws.amazon.com/whitepapers/latest/microservices-on-aws/microservices-on-aws.html
