Session 16 Building Application Using Gen AI - Case Studies

The document presents various case studies showcasing applications of Generative AI, including Sahayak AI for legal document translation, AI-Powered Charting for healthcare documentation, and a chatbot for knowledge management at Morgan Stanley. It also discusses the development of a chatbot for PDF documents using LangChain and Pinecone, as well as automating AI workflows with AutoGPT and LangChain. Each case study highlights the innovative use of AI technologies to address specific industry challenges.


Building Application using Gen AI - Case Studies
Ram N Sangwan

• Practical Case Studies using Cohere
• Build a Chain for Documents with RAG Retriever
• Domain Specific ChatBOT
• Automate AI Workflows with AutoGPT & LangChain
Practical Case Studies - Apps using Generative AI
Case Study 1
Sahayak AI

• SahayakAI is an AI-driven software tool designed to transform the way legal documents are translated and understood in India.
• Legal documents in India are predominantly in English, which presents a formidable challenge in legal understanding and literacy for the vast majority of the population.
Case Study 1
Sahayak AI

• The existing solutions, such as human translators and generic translation software, have notable limitations, ranging from being too expensive to lacking an understanding of legal terminology.
• To solve this, the AI solution translates English 'legalese' into Hindi.
• It uses Aya, Cohere's multilingual AI model, to achieve this (a minimal sketch follows).
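A minimal Python sketch of the translation idea, using the Cohere chat endpoint. The Aya model id and the sample clause below are assumptions for illustration, not taken from the SahayakAI project.

import cohere

co = cohere.Client("YOUR_COHERE_API_KEY")

legal_text = "The lessee shall indemnify the lessor against all claims arising from the premises."

# Ask a multilingual Cohere model (an Aya variant here, model id assumed)
# to render English 'legalese' in plain Hindi.
response = co.chat(
    model="c4ai-aya-expanse-8b",  # assumed model id; use whichever Aya model your account exposes
    message="Translate the following legal clause into plain Hindi, preserving its legal meaning:\n\n"
            + legal_text,
)
print(response.text)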
Case Study 2
AI-Powered Charting

• Healthcare professionals spend 16 hours per week on paperwork and administrative tasks.
• One out of every five days is dedicated to post-consultation paperwork and referencing HCPCS codes for insurance companies.
• In the US, inefficient administration and overhead in healthcare squander $600 billion annually, accounting for 30% of total healthcare expenditure.
• Leveraging Cohere's Summarize API and Coral Chat API, along with OpenAI's Whisper model, Charting enables physicians and patients to record conversations during clinical visits or video-call sessions.
Case Study 2
AI-Powered Charting

• The Whisper model generates transcriptions by accessing the device's microphone.
• Cohere's Summarize API then produces insightful bullet-point summaries of the full transcriptions.
• Another key feature of Charting is its Search function: using Cohere's Chat API, physicians and patients can search the internet and obtain ad-free answers.
• To build trustworthiness, the top 5 sources of these answers, along with their original URLs, are displayed for further research (a minimal sketch of the flow follows).
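A minimal Python sketch of this flow, assuming the open-source openai-whisper package for transcription, Cohere's (now legacy) Summarize endpoint for the bullet-point summary, and the Chat API's web-search connector for the Search function. The audio file name and the question are illustrative.

import whisper
import cohere

co = cohere.Client("YOUR_COHERE_API_KEY")

# 1. Transcribe the recorded visit with Whisper.
stt = whisper.load_model("base")
transcript = stt.transcribe("clinic_visit.wav")["text"]

# 2. Produce a bullet-point summary of the full transcription.
summary = co.summarize(text=transcript, format="bullets", length="medium")
print(summary.summary)

# 3. Search: grounded, ad-free answers; the cited documents carry their source URLs,
#    so the top sources can be shown alongside the answer.
answer = co.chat(
    message="Which HCPCS codes apply to a routine follow-up consultation?",
    connectors=[{"id": "web-search"}],
)
print(answer.text)
for doc in (answer.documents or [])[:5]:
    print(doc)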
Case Study 3
Chatbot for Knowledge Management at Morgan Stanley Wealth Management

• With the help of an LLM, Morgan Stanley is changing how its wealth management personnel locate relevant information.
• The LLM powers an internal-facing chatbot that performs a comprehensive search of wealth
management content and effectively unlocks the cumulative knowledge of Morgan Stanley
Wealth Management.
• The LLM’s extraordinary capability to access, process, and synthesize content almost instantaneously is leveraged by Morgan Stanley’s internal-facing chatbot.
• The chatbot is trained on the company’s vast content repository, which covers insights on
capital markets, asset classes, industry analysis, and economic regions around the globe.
Case Study 4
RAG Fusion with Cohere and Weaviate

• RAG Fusion generates variations of the user's question under the hood.
• It retrieves matching documents for each variation and fuses the result lists with re-ranking (a minimal fusion sketch follows).
• A variation may match the contents of a small database better than the original question does.
• First, a data-enrichment step is applied: the question-answer pairs prepared for Cohere Command fine-tuning are processed further with extra scripts and ingested into the Weaviate platform.
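A minimal sketch of the fusion step in Python: generate variations of the question with Cohere, retrieve ranked results for each variation, and merge the lists with reciprocal rank fusion. The retrieve() function is a hypothetical stand-in for the Weaviate query used in the case study.

from collections import defaultdict
import cohere

co = cohere.Client("YOUR_COHERE_API_KEY")

def generate_variations(question, n=3):
    # Ask the model for n rephrasings and keep the original question as well.
    resp = co.chat(message=f"Rewrite the following question in {n} different ways, one per line:\n{question}")
    variants = [line.strip() for line in resp.text.splitlines() if line.strip()]
    return [question] + variants[:n]

def retrieve(query):
    """Hypothetical vector-store lookup returning a ranked list of document ids
    (in the case study this would be a Weaviate search)."""
    raise NotImplementedError

def rag_fusion(question, k=60):
    # Reciprocal rank fusion: documents that rank well across several
    # query variations accumulate the highest scores.
    scores = defaultdict(float)
    for variant in generate_variations(question):
        for rank, doc_id in enumerate(retrieve(variant)):
            scores[doc_id] += 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)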

Check More Use cases at https://lablab.ai/apps/tech/cohere/cohere


Case Study 5
Market AI
• Managing a hedge fund portfolio demands swift and informed trading decisions, a challenge
compounded by the vast data influencing stock prices.
• Market AI leverages Cohere, LangChain, and Weaviate.
• This solution is tailored for investors seeking to efficiently conduct research on publicly traded
companies.
• It offers seamless access to critical information drawn from company filings and provides
interactive capabilities to monitor current portfolio status.
Case Study 6
PyLibrarian

• When working with a new library, SDK, or API, software developers often waste hours poring over a sea of scattered documentation pages to find the one syntax example they need.
• A traditional LLM's knowledge is limited to its training data, so when asked about newer tech it may be rendered useless or, even worse, hallucinate and spew nonsense.
• PyLibrarian grants an LLM access to the complete documentation for Python's most popular libraries using a RAG architecture.
https://lablab.ai/event/cohere-coral-hackathon/new-grads/pylibrarian
Case Study 6
PyLibrarian

• PyLibrarian was built by processing, embedding (using cohere.embed), and storing documentation pages in Weaviate's vector database.
• Upon a user query, it can search for the documentation pages most relevant to that query.
• Using the document mode of Cohere's chat endpoint, the chatbot synthesizes a response that cites the documents, leading to far more consistent, grounded responses (a minimal sketch follows the reference links).

https://github.com/bert-luo/docbot
https://github.com/NityaSG/Stock-AI
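A minimal Python sketch of the PyLibrarian-style flow above: embed documentation snippets with cohere.embed, pick the closest ones for a query, and answer with Cohere chat in document mode. A plain in-memory cosine search stands in for Weaviate here, and the documentation snippets are made up for illustration.

import numpy as np
import cohere

co = cohere.Client("YOUR_COHERE_API_KEY")

pages = [
    {"title": "requests.get", "snippet": "requests.get(url, params=None, **kwargs) sends a GET request."},
    {"title": "requests.Session", "snippet": "A Session object persists cookies and reuses connections."},
]

# Embed the documentation snippets once and keep the vectors alongside them.
doc_vecs = co.embed(
    texts=[p["snippet"] for p in pages],
    model="embed-english-v3.0",
    input_type="search_document",
).embeddings

query = "How do I send a GET request with query parameters?"
q_vec = co.embed(texts=[query], model="embed-english-v3.0", input_type="search_query").embeddings[0]

def cosine(a, b):
    a, b = np.array(a), np.array(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank pages by similarity to the query (Weaviate would do this server-side).
ranked = sorted(zip(pages, doc_vecs), key=lambda pv: cosine(q_vec, pv[1]), reverse=True)
top_docs = [p for p, _ in ranked[:2]]

# Document mode: the answer is grounded in, and cites, the supplied documents.
answer = co.chat(message=query, documents=top_docs)
print(answer.text)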
Building a ChatBot for a set of PDF documents
Our Use Case - The Reference Project For Study
[Architecture diagram] A new question ("How do I create an account?") plus the chat history are condensed into a stand-alone question; the stand-alone question is embedded with Cohere Embedding and checked against the documents already embedded in the vector store; the relevant docs plus the prompt are passed to the LLM, which returns the answer ("To create an account ...."). On the ingestion side, the PDF docs are converted to text, the text is split into chunks, embeddings are created for each chunk, and the embeddings are stored in the vector store.
Creating a Chatbot for PDF Files

• LangChain is a framework that makes it easier to build scalable AI/LLM apps and chatbots.
• Pinecone is a vector store for holding the embeddings of your PDF text so that similar docs can be retrieved later.
• Study a project that creates a chatbot based on an existing set of PDF files.
• Fork the repo https://github.com/Sangwan70/cohere-chatbot-pdf-langchain.git
• Now clone or download the ZIP in the VM provided:
git clone https://github.com/Sangwan70/cohere-chatbot-pdf-langchain.git
Create a Chatbot for PDF Files
• Install packages. First run
npm install yarn -g
to install yarn globally (if you haven't already). Then run:
yarn install
• After installation, you should now see a node_modules folder.
Set up your .env file
• Copy .env.example into .env. Your .env file should look like this:
COHERE_API_KEY=
PINECONE_API_KEY=
PINECONE_ENVIRONMENT=
PINECONE_INDEX_NAME=

• Visit Cohere to retrieve your API key and insert it into your .env file.
• Visit Pinecone to create and retrieve your API key, and also retrieve your environment and index name from the dashboard.
Create a Chatbot for PDF Files
• In the config folder, replace the PINECONE_NAME_SPACE with a namespace where you'd like
to store your embeddings on Pinecone when you run npm run ingest.
• This namespace will later be used for queries and retrieval.
• In utils/makechain.ts, change the QA_PROMPT for your own use case.
• Verify (outside this repo) that you have access to the Cohere Command LLM; otherwise the application will not work.
Convert your PDF files to embeddings
• Inside the docs folder (create it if needed), add your PDF files or folders that contain PDF files.
• Run
yarn run ingest
to ingest and embed your docs (a Python sketch of the same flow follows).
• Check the Pinecone dashboard to verify that your namespace and vectors have been added.
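The repo's ingest script is TypeScript (LangChain + Pinecone); the following is a hedged Python sketch of the same ingest flow, assuming the pypdf, cohere, and pinecone Python clients. The index name, namespace, file name, and chunk sizes are illustrative.

from pypdf import PdfReader
import cohere
from pinecone import Pinecone

co = cohere.Client("YOUR_COHERE_API_KEY")
pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")
index = pc.Index("pdf-chatbot")  # assumed PINECONE_INDEX_NAME

# 1. Convert PDF to text.
reader = PdfReader("docs/guide.pdf")
text = "\n".join(page.extract_text() or "" for page in reader.pages)

# 2. Split the text into overlapping chunks.
chunk_size, overlap = 1000, 200
chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size - overlap)]

# 3. Create embeddings for each chunk.
vectors = co.embed(texts=chunks, model="embed-english-v3.0", input_type="search_document").embeddings

# 4. Store your embeddings (with the chunk text as metadata) in the vector store.
index.upsert(
    vectors=[
        {"id": f"chunk-{i}", "values": vec, "metadata": {"text": chunk}}
        for i, (chunk, vec) in enumerate(zip(chunks, vectors))
    ],
    namespace="pdf-docs",  # assumed PINECONE_NAME_SPACE
)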
Run the app
• Once you've verified that the embeddings and content have been successfully added to Pinecone, run the app with
npm run dev
to launch the local dev environment, then type a question in the chat interface (a sketch of the query-time flow follows).
• Launch the browser inside the VM and open http://localhost:3000
• Or use the public IP of the instance with port 3000.
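A matching Python sketch of the query-time flow (the repo implements this as a LangChain TypeScript chain): embed the question, fetch the closest chunks from Pinecone, and let Cohere answer grounded on those chunks via document mode. The index name and namespace follow the assumptions in the ingest sketch above.

import cohere
from pinecone import Pinecone

co = cohere.Client("YOUR_COHERE_API_KEY")
index = Pinecone(api_key="YOUR_PINECONE_API_KEY").Index("pdf-chatbot")  # assumed index name

question = "How do I create an account?"

# Embed the question with the same model used at ingest time.
q_vec = co.embed(texts=[question], model="embed-english-v3.0", input_type="search_query").embeddings[0]

# Retrieve the most similar chunks from the namespace populated by the ingest step.
res = index.query(vector=q_vec, top_k=4, include_metadata=True, namespace="pdf-docs")
docs = [{"title": m.id, "snippet": m.metadata["text"]} for m in res.matches]

# Answer grounded on the retrieved chunks (document mode cites its sources).
reply = co.chat(message=question, documents=docs)
print(reply.text)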
Automate AI Workflows with AutoGPT & LangChain
Automating AI workflows with AutoGPT and LangChain can significantly enhance the efficiency and scalability of your ML operations.
Here's a high-level approach.

Define Automation Goals:

• Identify the specific processes you want to automate within your AI workflow, and the steps that are repetitive or require manual intervention.
Automate AI Workflows with AutoGPT & LangChain

Integrate with Automation Tools:

• Utilize automation tools compatible with your MLOps stack. For instance, consider using Oracle's AutoML and LangChain.
• Explore other tools like Terraform, Ansible, or Oracle Cloud Infrastructure's built-in automation features to manage infrastructure provisioning and configuration.
Automate AI Workflows with AutoGPT & LangChain
Data Management Automation:
• Automate the ingestion and pre-processing of data from various sources using tools such as OCI Data Flow. Configure these tools to integrate with your data sources, perform data transformations, and load the data into your target data warehouse or database.
Model Training and Tuning:
• Design a workflow that includes your chosen machine learning libraries (e.g.,
TensorFlow, PyTorch) and integrates with AutoML. AutoML can automatically train
and tune models based on your dataset and specified hyperparameters.
Automate AI Workflows with AutoGPT & LangChain
LangChain Integration:
• Configure LangChain to manage the selection and versioning of your models, prompts, and pipeline components.
• Automate the deployment of models and pipelines using LangChain's automation features, ensuring seamless integration within your AI workflow (a minimal sketch follows).
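As a hedged illustration of plugging an LLM step into such a pipeline with LangChain, the sketch below composes a prompt, a Cohere chat model, and an output parser into one reusable chain using the langchain-cohere integration; the model id, prompt, and log text are assumptions.

from langchain_cohere import ChatCohere
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

llm = ChatCohere(model="command-r")  # assumed model id; expects COHERE_API_KEY in the environment

prompt = ChatPromptTemplate.from_template(
    "Summarize the following pipeline run log and flag any failed steps:\n\n{log}"
)

# Compose prompt -> model -> plain-string output into one reusable chain
# that a workflow manager can call as a single automated step.
chain = prompt | llm | StrOutputParser()

report = chain.invoke({"log": "step=train status=ok\nstep=deploy status=failed"})
print(report)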
Orchestrate with Workflow Managers:
• Implement workflow managers like Oracle's Cloud Automation Platform to define
and orchestrate your end-to-end AI workflow.
Automate AI Workflows with AutoGPT & LangChain
Monitoring and Feedback Loop:
• Use tools like Prometheus, Grafana, or Oracle Monitoring to collect and analyse
key metrics, logs, and events.
Version Control and Collaboration:
• Implement version control systems like Git to manage versioned datasets, code,
and configuration files.
Testing and Validation:
• Ensure thorough testing and validation of your automated workflows.
Thank You
