Machine Learning and LLM Engineer:

Job Description:
We are seeking a highly skilled Machine Learning and LLM Engineer with expertise in fine-tuning,
retrieval-augmented generation (RAG), and hyperparameter optimization to join our AI
team. You will play a crucial role in developing cloud-native, enterprise-grade AI systems that
integrate state-of-the-art language models, knowledge retrieval mechanisms, and automation
frameworks.

This role requires deep expertise in LLMs, machine learning, LangChain, LLaMA, vector
databases, and multi-agent AI systems, along with hands-on experience deploying and
optimizing AI/ML pipelines in production environments.

Key Responsibilities:

●​ LLM Development & Optimization: Fine-tune and optimize Large Language Models
(GPT, BERT, LLaMA, etc.) for domain-specific use cases such as document
intelligence, knowledge retrieval, and operational efficiency.
●​ Retrieval-Augmented Generation (RAG): Implement vector-based retrieval
techniques (e.g., FAISS, Milvus, ChromaDB, Weaviate, AWS OpenSearch) and
advanced semantic search methods (a minimal retrieval sketch follows this list).
●​ Knowledge Graphs & Structured Retrieval: Design and deploy knowledge graphs
and structured retrieval techniques for improving LLM contextual understanding and
response accuracy.
●​ Multi-Agent AI Frameworks: Work with multi-agent LLM architectures to build
systems capable of complex, multi-step problem-solving.
●​ LLMOps & Model Deployment: Develop scalable AI pipelines for training, evaluation,
and deployment of LLMs in cloud-native environments (AWS, Azure, GCP).
●​ Conversational AI & UX: Build conversational agents, AI-powered chatbots, and
generative AI-based UX interfaces using LangChain and related frameworks.
●​ Security & Guardrails: Implement LLM security best practices, including adversarial
testing, bias mitigation, and ethical AI considerations.
●​ MLOps & DevOps Practices: Establish robust CI/CD pipelines for AI model
deployment, monitoring, and performance evaluation.
●​ Collaboration & Mentorship: Work with cross-functional teams, mentor junior
engineers, and drive AI adoption across the organization.
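
To make the Retrieval-Augmented Generation responsibility concrete, here is a minimal
embedding-based retrieval sketch. It assumes a sentence-transformers encoder and a FAISS
flat index (one of the stacks named above); the encoder name, corpus, and query are
invented for illustration, and a production pipeline would add document chunking, a managed
vector database, and prompt assembly for the LLM.

# Minimal embedding-based retrieval sketch (illustrative only).
# Assumptions: sentence-transformers and FAISS are installed; the encoder
# name, corpus, and query are invented for the example.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed small embedding model

corpus = [
    "Invoices must be approved within five business days.",
    "The policy knowledge base is refreshed every night.",
    "Escalations outside business hours go to the on-call team.",
]

# Encode the corpus; normalized vectors make inner product equal cosine similarity.
doc_vecs = encoder.encode(corpus, normalize_embeddings=True)
index = faiss.IndexFlatIP(doc_vecs.shape[1])       # exact (flat) inner-product index
index.add(np.asarray(doc_vecs, dtype="float32"))

# Retrieve the top-k passages for a question; in a full RAG pipeline these
# passages would be inserted into the LLM prompt before generation.
query_vec = encoder.encode(
    ["How often is the knowledge base updated?"], normalize_embeddings=True
)
scores, ids = index.search(np.asarray(query_vec, dtype="float32"), 2)
for score, idx in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {corpus[idx]}")

The retrieved passages would then be concatenated into the LLM prompt so the model's
answer is grounded in indexed content rather than its parametric memory alone.
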
Required Skills & Qualifications:

Educational Background:

●​ Bachelor's or Master's degree in Computer Science, Machine Learning, AI, or a
related field.
●​ 3+ years of direct industry experience in AI/ML, specifically in LLM development,
NLP, or generative AI.

Technical Expertise:

●​ Deep understanding of LLMs, fine-tuning, and retrieval-augmented generation
(RAG) workflows (a fine-tuning sketch follows this list).
●​ Strong proficiency in Python, with experience in Hugging Face, PyTorch,
TensorFlow, or JAX.
●​ Experience working with LLM APIs (OpenAI, Azure OpenAI, AWS Bedrock, Google
Gemini, Anthropic Claude, Mistral, etc.).
●​ Hands-on experience with LangChain, LlamaIndex, and embedding-based search.
●​ Strong understanding of embedding spaces and their role in semantic search &
information retrieval.
●​ Proficiency in working with vector databases and implementing advanced retrieval
patterns.
●​ Experience in building enterprise-scale, secure data ingestion pipelines for
unstructured data.
●​ Knowledge of machine learning model optimization techniques (e.g.,
hyperparameter tuning, model distillation, quantization, pruning).
●​ Proficiency in building knowledge graphs for production use cases.
●​ Experience with security frameworks for LLM integration, including AI governance,
guardrails, and red teaming.
●​ Cloud-native AI development: AWS, Azure, or GCP cloud platforms.
●​ Containerization & Orchestration: Docker, Kubernetes.
●​ Web Development: TypeScript, Node.js, React.js (for AI-powered UX solutions).
●​ Version Control & CI/CD: GitHub, GitLab, Jenkins.
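
As one illustration of the fine-tuning and hyperparameter-tuning skills listed above, here is a
minimal supervised fine-tuning sketch using the Hugging Face Trainer. The base model
(distilbert-base-uncased), the dataset (IMDB sentiment), and the hyperparameter values are
placeholders chosen only to keep the example small; a real domain-specific run would swap in
the target model, corpus, and a proper search over these knobs.

# Minimal fine-tuning sketch with the Hugging Face Trainer (illustrative only).
# Assumptions: transformers and datasets are installed; the base model, the
# IMDB dataset, and the hyperparameter values are placeholders, not an
# actual project configuration.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"             # assumed small base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("imdb")                     # stand-in for a domain corpus

def tokenize(batch):
    # Truncate/pad so every example fits the assumed context length.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

# Learning rate, batch size, and epochs are typical targets of hyperparameter search.
args = TrainingArguments(
    output_dir="./llm-ft-out",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    num_train_epochs=1,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=tokenized["test"].select(range(500)),
)
trainer.train()

In practice the learning rate, batch size, and epoch count in TrainingArguments are exactly
the knobs a hyperparameter search (grid, random, or Bayesian) would optimize.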

Nice-to-Have Skills (Preferred but Not Required):

●​ Experience with LLM safety & adversarial robustness techniques.
●​ Familiarity with multi-modal LLMs, combining NLP with computer vision and speech
recognition.
●​ Knowledge of distributed training techniques for large-scale LLMs.
●​ Experience in LLM evaluation metrics and benchmarking methodologies.
Benefits & Perks:


How to Apply:

Interested candidates are encouraged to submit their resume, portfolio, and a cover letter
detailing their experience in LLM development. If you have GitHub projects, research papers,
or Kaggle contributions, please include them in your application.
