FastAPI backend · React+Vite frontend · training pipeline · agentic harness · MIT
Stage 1 · continued pretraining for therapeutics · LoRA r=64 α=256
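A minimal sketch of that adapter configuration with Hugging Face PEFT. Only `r=64` and `α=256` come from the line above; the target modules and dropout are illustrative assumptions, not the project's confirmed settings:

```python
from peft import LoraConfig

# Stage 1 LoRA hyperparameters from above. target_modules and
# lora_dropout are assumptions for illustration only.
stage1_lora = LoraConfig(
    r=64,                # adapter rank
    lora_alpha=256,      # scaling factor (alpha / r = 4x)
    lora_dropout=0.05,   # assumed regularization
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```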
222,606 AMR records · 8 priority pathogens · ChEMBL + literature + curated negatives
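The exact record layout isn't shown here; a hypothetical row sketching the kind of fields an AMR dataset blending ChEMBL actives with curated negatives might carry (every field name and value below is an assumption, and the molecule is just a placeholder):

```python
# Hypothetical AMR record shape -- field names are illustrative only,
# not the project's actual schema.
record = {
    "smiles": "CC(=O)Oc1ccccc1C(=O)O",  # placeholder molecule (aspirin)
    "pathogen": "Escherichia coli",     # e.g. one of the 8 priority pathogens
    "source": "ChEMBL",                 # or "literature" / "curated_negative"
    "label": 1,                         # 1 = active, 0 = negative
}
```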
Merged 9:08 walkthrough + 3 individual scenes (agentic flow, system tour, full)
Multi-agent debate · live workspace · real-time scoring · resistance escape map · Pareto frontier
▶ Play (41 MB)

Every stage trains a LoRA adapter on top of google/gemma-4-31B-it. All adapters are public on Hugging Face.
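Loading one of the published adapters follows the standard PEFT pattern. The adapter repo id below is a placeholder, not the actual Hugging Face path:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/gemma-4-31B-it"
adapter_id = "your-org/lysos-stage1-lora"  # placeholder; use the real HF repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # matches the bf16 deployment described below
    device_map="auto",           # requires the accelerate package
)
model = PeftModel.from_pretrained(base, adapter_id)  # LoRA weights stay separate
```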
When you run /wf design_with_debate, four agent roles take turns; each turn is a separate LLM call (a minimal sketch of the loop follows below).
Strategist picks winner + runner-up + next action.
Winner SMILES auto-loads to 2D builder + 3D pocket viewer + 12-axis reward radar + resistance escape map.
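A minimal sketch of that turn-taking loop. Only the Strategist role is named above; the other role names and the `call_llm` helper are assumptions about the harness, not the project's actual agent code:

```python
# Illustrative debate loop: four roles, one LLM call per turn.
# Role names other than "Strategist" are assumptions.
ROLES = ["Medicinal Chemist", "Microbiologist", "Safety Critic", "Strategist"]

def call_llm(role: str, transcript: list[str]) -> str:
    # Placeholder: swap in a real chat-completion call with a
    # role-specific system prompt; each invocation is one LLM round trip.
    return f"<{role} reasoning over {len(transcript)} prior turns>"

def design_with_debate(task: str) -> str:
    transcript = [f"Task: {task}"]
    for role in ROLES:
        turn = call_llm(role, transcript)  # separate LLM call per role
        transcript.append(f"{role}: {turn}")
    # The Strategist's final turn names the winner, runner-up, and next
    # action; the winner's SMILES is what the workspace then auto-loads.
    return transcript[-1]
```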
192 GB of HBM3 fits the Gemma 4 31B base model in bf16, plus the LoRA adapter, KV cache, and agent context, co-resident on one GPU. The same GPU trains and serves: no tensor parallelism, no model sharding, no migration step. Total wall-clock for the three-stage fine-tune: ~6 hours on one GPU.
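Rough arithmetic behind that claim (decimal GB, ignoring activation memory and the adapter's optimizer state, both of which come out of the headroom):

```python
# Back-of-envelope memory budget for one 192 GB GPU; all figures approximate.
hbm_gb     = 192
params     = 31e9
weights_gb = params * 2 / 1e9      # bf16 = 2 bytes/param -> ~62 GB
headroom   = hbm_gb - weights_gb   # ~130 GB left for LoRA weights, optimizer
                                   # state, KV cache, and agent context
print(f"{weights_gb:.0f} GB weights, {headroom:.0f} GB headroom")
```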
```bash
git clone https://github.com/Rahul-Rajpurohitk/lysos.git
cd lysos
python3 -m venv .venv && source .venv/bin/activate
pip install -e .
uvicorn workspace.api.server:app --host 0.0.0.0 --port 7860 &
cd workspace/web && npm install && npm run dev   # open http://localhost:5173
```
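Once both servers are up, the debate workflow can be triggered from the UI. If you'd rather hit the backend directly, something like the call below should work, assuming the workflow is exposed as a POST route; the exact path and payload shape are assumptions, so check workspace/api/server.py for the real endpoint:

```python
import requests

# Hypothetical direct API call -- route and payload are assumptions,
# not the documented interface.
resp = requests.post(
    "http://localhost:7860/wf/design_with_debate",
    json={"task": "design an inhibitor for a priority-pathogen target"},
    timeout=600,  # the four-role debate makes several LLM calls
)
print(resp.json())
```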