Explore configuring and comparing LLM backends with LangChain. Learn how temperature, max tokens, and model choice impact latency, cost, and quality. Run the examples and exercises, swap providers, benchmark responses, and record results to make informed trade-offs for production-ready model selection.
- Notebooks:
  - `ex1.ipynb`: prompt + chat model basics (a minimal sketch follows this list)
  - `ex2.ipynb`: parameter sweeps and comparison utilities
  - `ex3.ipynb`: provider swap and lightweight benchmarking
- Exports:
  - `models.html`, `models.pdf`
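`ex1.ipynb` starts from the basic prompt-to-chat-model flow. A minimal sketch of that flow, assuming `GOOGLE_API_KEY` is already set (see the environment section below) and using the recommended model, might look like:

```python
# Minimal sketch: instantiate a Gemini chat model and send a single prompt,
# adjusting the two parameters this module focuses on: temperature and max tokens.
from dotenv import load_dotenv
from langchain_google_genai import ChatGoogleGenerativeAI

load_dotenv()  # picks up GOOGLE_API_KEY from a local .env file, if present

llm = ChatGoogleGenerativeAI(
    model="gemini-2.5-flash",   # recommended model in this README
    temperature=0.2,            # lower = more deterministic output
    max_output_tokens=256,      # caps response length (and cost)
)

response = llm.invoke("Explain temperature in LLM sampling in one sentence.")
print(response.content)
```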
- Open the notebooks and run the cells top to bottom.
- Modify parameters (e.g., temperature) and model names to observe changes.
- Record latency and response quality to compare providers and settings (a timing sketch follows this list).
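One hedged way to record those observations in code; the prompt, temperature grid, and CSV filename below are illustrative choices, not fixed by the notebooks:

```python
# Sweep temperature values, time each call, and save the results for comparison.
import csv
import time

from langchain_google_genai import ChatGoogleGenerativeAI

PROMPT = "Summarize the trade-off between temperature and output consistency."

rows = []
for temp in (0.0, 0.5, 1.0):
    llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash", temperature=temp)
    start = time.perf_counter()
    reply = llm.invoke(PROMPT)
    latency = time.perf_counter() - start
    rows.append({"temperature": temp,
                 "latency_s": round(latency, 2),
                 "preview": reply.content[:80]})

# Persist the observations so runs can be compared across providers and settings.
with open("benchmark_results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["temperature", "latency_s", "preview"])
    writer.writeheader()
    writer.writerows(rows)
```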
- Environment variables (examples):
  - `GOOGLE_API_KEY` for Gemini models
  - Optionally `OPENAI_API_KEY` or others if you test multiple providers (a `.env` loading sketch follows this list)
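A common way to supply these keys is a local `.env` file loaded with `python-dotenv`; the variable names follow the examples above, and the OpenAI line is only needed if you test that provider:

```python
# .env (keep out of version control):
#   GOOGLE_API_KEY=your-gemini-key
#   OPENAI_API_KEY=your-openai-key   # only if testing OpenAI models

# Loading the keys in a notebook or script:
import os

from dotenv import load_dotenv

load_dotenv()  # reads the .env file in the current working directory
assert os.getenv("GOOGLE_API_KEY"), "GOOGLE_API_KEY is not set"
```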
- Install:
  - `pip install langchain langchain-google-genai python-dotenv`
  - Add extras as needed for your experiments (an import sanity check follows below).
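A quick, optional sanity check that the installed packages resolve (package names exactly as installed above):

```python
# Confirm the core dependencies are installed and print their versions.
from importlib.metadata import version

for pkg in ("langchain", "langchain-google-genai", "python-dotenv"):
    print(pkg, version(pkg))
```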
- Recommended:
  - `gemini-2.5-flash`
- Fallbacks (if not enabled):
  - `gemini-1.5-pro`, `gemini-1.5-flash` (a fallback sketch follows this list)
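If the recommended model is not enabled for your key, the fallbacks can be wired in with LangChain's `with_fallbacks` runnable helper; the specific combination below is an assumption for illustration:

```python
# Prefer the recommended model; fall back to older Gemini models if the primary
# is unavailable for your API key.
from langchain_google_genai import ChatGoogleGenerativeAI

primary = ChatGoogleGenerativeAI(model="gemini-2.5-flash")
fallbacks = [
    ChatGoogleGenerativeAI(model="gemini-1.5-pro"),
    ChatGoogleGenerativeAI(model="gemini-1.5-flash"),
]

llm = primary.with_fallbacks(fallbacks)
print(llm.invoke("Which model answered this?").content)
```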
- HTML:
  - `jupyter nbconvert --to html models.ipynb`
- PDF (Chromium):
  - `jupyter nbconvert --to webpdf models.ipynb --WebPDFExporter.allow_chromium_download=True`
- Dr. Partha Majumder