I architect machine learning pipelines and cross-platform applications. My work spans LSTM modeling, transfer learning, and deploying algorithms as FastAPI microservices.
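A minimal sketch of that deployment pattern, assuming a placeholder `predict` function and a Pydantic request model (all names here are illustrative, not from a specific project):

```python
# Sketch: FastAPI inference microservice with Pydantic request/response models.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="inference-service")

class PredictRequest(BaseModel):
    features: list[float]  # flat feature vector for the model

class PredictResponse(BaseModel):
    score: float

def predict(features: list[float]) -> float:
    # Placeholder for a real model call (e.g. a loaded PyTorch or ONNX model).
    return sum(features) / max(len(features), 1)

@app.post("/predict", response_model=PredictResponse)
def predict_endpoint(req: PredictRequest) -> PredictResponse:
    return PredictResponse(score=predict(req.features))

# Run locally with: uvicorn main:app --reload
```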
Current technical focus:
- Training convolutional architectures for computer vision tasks
- Implementing transformer-based models for natural language understanding
- Building LLM applications with LangChain and LlamaIndex
- Fine-tuning pre-trained models from Hugging Face (see the sketch after this list)
- High-performance analytics with DuckDB and Polars
- Production ML systems with proper state management
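A hedged sketch of that fine-tuning workflow using the Hugging Face `Trainer` API; the model and dataset names (`distilbert-base-uncased`, `imdb`) are stand-in examples rather than a specific project:

```python
# Sketch: fine-tuning a pre-trained Transformer for text classification.
# Model and dataset names are placeholders; swap in the task at hand.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("imdb")  # any labeled text dataset with "text"/"label" columns

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="./checkpoints",
    num_train_epochs=1,
    per_device_train_batch_size=16,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),  # small subset for a quick run
    eval_dataset=tokenized["test"].select(range(500)),
)
trainer.train()
```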
Frameworks & Libraries:
- Deep Learning: PyTorch, TensorFlow/Keras, JAX
- Classical ML: Scikit-learn, XGBoost
- Computer Vision: OpenCV
- NLP: Hugging Face Transformers, spaCy, LangChain, LlamaIndex
- MLOps: MLflow, ONNX
- Cloud ML: SageMaker, Azure ML
Mobile & Desktop: Flutter/Dart, React Native, Electron, Capacitor
Frameworks: FastAPI, Flask, Django, Gunicorn, Uvicorn
Async Processing: Celery/Redis
Databases: MongoDB, PostgreSQL, MySQL, SQLite, Supabase
Analytics: DuckDB, Polars, Plotly
Distributed Computing: Ray
Type Validation: Pydantic
Production ML: mem0 (persistent agent memory), marimo (reproducible notebooks)
Cloud Platforms: AWS (SageMaker), Azure ML
Version Control: GitHub, GitHub Codespaces
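As an illustration of the DuckDB/Polars analytics stack listed above, a minimal sketch; the Parquet file and its columns are hypothetical:

```python
# Sketch: query a Parquet file directly with DuckDB SQL, then refine in Polars.
# "events.parquet" and its columns are hypothetical placeholders.
import duckdb
import polars as pl

rel = duckdb.sql(
    """
    SELECT user_id, COUNT(*) AS n_events
    FROM 'events.parquet'
    GROUP BY user_id
    ORDER BY n_events DESC
    """
)

df = rel.pl()  # materialize the result as a Polars DataFrame
top_users = df.filter(pl.col("n_events") > 100)
print(top_users.head())
```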