- 🔍 Exploring Large Language Models (LLMs), Retrieval-Augmented Generation (RAG), and Knowledge Grounding
- 💡 Interested in faithful reasoning, document understanding, and multi-modal information retrieval
- 🧩 Building efficient on-premise AI pipelines with vLLM, LangChain, and Hugging Face (see the retrieval sketch after this list)
- ⚙️ Focused on LLM optimization, quantization, and multi-GPU inference for high-performance local systems
- 📊 Exploring methods to evaluate and visualize RAG quality using metrics like Faithfulness and Answer Relevance
- 🧠 RAG System Development: multi-source knowledge retrieval and contextual response generation
- 📄 Document Intelligence: structured parsing and evidence-based question answering
- 🧮 LLM Optimization: efficient model serving using quantized inference and multi-GPU pipelines
- 📈 RAG Evaluation: automated, metric-based validation of generated responses (see the evaluation sketch after this list)
- 🧰 AI System Integration: combining NLP, data engineering, and web backends for end-to-end AI applications
- 💭 “Connecting Human Knowledge, Data, and AI Reasoning.”
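
A minimal retrieve-then-generate sketch of the kind of RAG pipeline described above. TF-IDF similarity stands in for an embedding-based vector store, and the assembled prompt would be handed to any local backend such as vLLM or a Hugging Face model; `DOCUMENTS`, `retrieve`, and `build_prompt` are illustrative names, not part of an existing codebase.

```python
# Sketch only: TF-IDF retrieval stands in for a vector store, and the
# resulting prompt is what would be sent to an LLM backend (e.g. vLLM).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical knowledge snippets playing the role of an indexed corpus.
DOCUMENTS = [
    "vLLM serves large language models with high-throughput batched inference.",
    "Retrieval-Augmented Generation grounds answers in retrieved documents.",
    "Quantization reduces model memory footprint for local multi-GPU inference.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query by TF-IDF cosine similarity."""
    tfidf = TfidfVectorizer().fit_transform(docs + [query])
    query_vec, doc_vecs = tfidf[len(docs)], tfidf[: len(docs)]
    scores = cosine_similarity(query_vec, doc_vecs).ravel()
    return [docs[i] for i in scores.argsort()[::-1][:k]]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble a grounded prompt: instructions, retrieved evidence, then the question."""
    evidence = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using only the evidence below.\n\n"
        f"Evidence:\n{evidence}\n\n"
        f"Question: {query}\nAnswer:"
    )

if __name__ == "__main__":
    question = "How does RAG keep answers grounded?"
    context = retrieve(question, DOCUMENTS)
    print(build_prompt(question, context))  # feed this prompt to any LLM backend
```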
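A rough sketch of metric-based RAG validation. The `faithfulness_proxy` below is a simple lexical-overlap stand-in for a proper Faithfulness metric (which would normally rely on an LLM judge or an evaluation library); all names are illustrative assumptions.

```python
# Crude faithfulness proxy: the share of the answer's content words that are
# grounded in the retrieved context. Not a library metric; illustration only.
import re

def content_words(text: str) -> set[str]:
    """Lowercased alphabetic tokens longer than three characters."""
    return {w for w in re.findall(r"[a-zA-Z]+", text.lower()) if len(w) > 3}

def faithfulness_proxy(answer: str, context: str) -> float:
    """Fraction of the answer's content words that also appear in the context."""
    answer_words = content_words(answer)
    if not answer_words:
        return 0.0
    return len(answer_words & content_words(context)) / len(answer_words)

if __name__ == "__main__":
    ctx = "Retrieval-Augmented Generation grounds answers in retrieved documents."
    ans = "RAG grounds its answers in the retrieved documents."
    print(f"faithfulness proxy: {faithfulness_proxy(ans, ctx):.2f}")
```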


