Engineering project
AtlasML
ML infrastructure platform for model registry, inference, evaluation tracking, async jobs, and benchmarking.
FastAPI · PostgreSQL · SQLAlchemy · Redis/RQ · Pydantic · Docker · ML Infra
Problem
ML projects often stop at notebooks, while production workflows need reliable model metadata, reproducible evaluation, serving boundaries, and operational visibility.
Current status
Initial backend architecture, published here as a portfolio version.
What I built
- Designed a FastAPI service layer for model registration, inference requests, evaluation runs, and benchmark jobs (see the endpoint sketch after this list).
- Modeled model versions, artifacts, metrics, and job state in PostgreSQL with SQLAlchemy, keeping Pydantic schemas separate at the API boundary (see the model/schema sketch below).
- Prepared Redis/RQ workers so evaluation and benchmarking tasks run asynchronously rather than blocking API requests (see the enqueue sketch below).
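A minimal sketch of the endpoint shape this service layer implies. The names here (ModelVersionCreate, register_model, the /models path) are illustrative assumptions rather than AtlasML's actual identifiers, and persistence is stubbed out so the sketch stays runnable:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ModelVersionCreate(BaseModel):
    # Hypothetical request fields for registering a model version.
    name: str
    version: str
    artifact_uri: str

class ModelVersionRead(ModelVersionCreate):
    id: int

@app.post("/models", response_model=ModelVersionRead, status_code=201)
def register_model(payload: ModelVersionCreate) -> ModelVersionRead:
    # Stub persistence: the real route would hand the payload to a
    # registry service backed by PostgreSQL and return the stored row.
    return ModelVersionRead(
        id=1,
        name=payload.name,
        version=payload.version,
        artifact_uri=payload.artifact_uri,
    )
```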
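A sketch of the schema/persistence split under the same assumptions: the SQLAlchemy table and the Pydantic response schema are deliberately separate classes, bridged by attribute access. Table, column, and field names are hypothetical:

```python
from pydantic import BaseModel
from sqlalchemy import Column, DateTime, Integer, String, func
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class ModelVersion(Base):
    """Persistence model: one row per registered model version."""
    __tablename__ = "model_versions"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    version = Column(String, nullable=False)
    artifact_uri = Column(String, nullable=False)
    created_at = Column(DateTime, server_default=func.now())

class ModelVersionRead(BaseModel):
    """API schema: what the endpoint returns, decoupled from the table."""
    id: int
    name: str
    version: str
    artifact_uri: str

    class Config:
        from_attributes = True  # Pydantic v2; use orm_mode = True on v1
```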
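And a sketch of what the RQ enqueue path could look like, assuming a local Redis and an illustrative run_evaluation task; the real job function, arguments, and queue name may differ:

```python
from redis import Redis
from rq import Queue

def run_evaluation(model_version_id: int, dataset: str) -> dict:
    # Placeholder task body: the real worker would load the model,
    # score the dataset, and persist an evaluation row to PostgreSQL.
    # (RQ needs the function importable from a module, not __main__.)
    return {"model_version_id": model_version_id, "dataset": dataset}

queue = Queue("evaluation", connection=Redis())

# The API enqueues and returns immediately; a separate
# `rq worker evaluation` process picks the job up out of band.
job = queue.enqueue(run_evaluation, 42, "holdout-v1", job_timeout="30m")
print(job.id)
```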
Architecture / system design
- 01 Client
- 02 FastAPI API
- 03 Model Registry / Inference Service / Evaluation Service
- 04 PostgreSQL
- 05 Redis/RQ Worker
- 06 Benchmark Scripts
Technical highlights
- Clear separation between API schemas, persistence models, and service logic.
- Evaluation and benchmark records are first-class entities instead of loose files (one possible table shape is sketched after this list).
- The design supports local development while leaving room for containerized deployment.
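One way such a first-class record could be modeled; the table and column names are assumptions for illustration, not the project's actual schema:

```python
from sqlalchemy import (Column, DateTime, Float, ForeignKey, Integer,
                        String, func)
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class EvaluationRun(Base):
    # One queryable row per evaluation, keyed to the exact model version
    # it scored, rather than a metrics file in a notebook directory.
    __tablename__ = "evaluation_runs"
    id = Column(Integer, primary_key=True)
    model_version_id = Column(
        Integer, ForeignKey("model_versions.id"), nullable=False
    )
    dataset = Column(String, nullable=False)
    metric_name = Column(String, nullable=False)
    metric_value = Column(Float, nullable=True)  # null until the job finishes
    status = Column(String, nullable=False, default="queued")
    created_at = Column(DateTime, server_default=func.now())
```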
Future work
- Add authenticated artifact storage when the project moves beyond a public skeleton.
- Expand evaluation dashboards and regression checks.
- Add CI fixtures for common model-serving failure modes (a hypothetical test shape follows this list).
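Since this is future work, the following is purely a hypothetical shape for one such check: a pytest case asserting that an unknown model id surfaces as a clean 404 rather than an unhandled error. The route and its behavior are assumed, not implemented:

```python
from fastapi import FastAPI, HTTPException
from fastapi.testclient import TestClient

app = FastAPI()

@app.post("/models/{model_id}/predict")
def predict(model_id: int):
    # Stand-in for the failure mode under test: an unknown model id
    # should return 404, not a 500 from a failed artifact load.
    raise HTTPException(status_code=404, detail="model not found")

def test_unknown_model_returns_404():
    client = TestClient(app)
    response = client.post("/models/999/predict")
    assert response.status_code == 404
    assert response.json()["detail"] == "model not found"
```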
Tech stack
FastAPI · PostgreSQL · SQLAlchemy · Redis/RQ · Pydantic · Docker · pytest
Demo / screenshots
Screenshots and API examples will be added after the first public repository pass.
Resume bullet draft
- Built a production-style ML infrastructure backend for model registry, inference, evaluation tracking, async jobs, and benchmarking.
- Designed service boundaries and relational schemas for model versions, metrics, artifacts, and job orchestration.