Lysos

Open-source generative antibiotic designer for the antimicrobial resistance (AMR) pandemic.
Three-stage fine-tune of Gemma 4 31B-it on AMD MI300X. Multi-agent debate engine. End-to-end live agentic workspace.
🏆 AMD Developer Hackathon 2026 · Track 2: Fine-Tuning on AMD GPUs · MIT License

Live links

Source · GitHub repo →
FastAPI backend · React+Vite frontend · training pipeline · agentic harness · MIT

Production model · rahul24raj/lysos-base-dpo →
Stage 2.5 DPO · LoRA r=32 · adapter on Gemma 4 31B-it

SFT model · rahul24raj/lysos-base →
Stage 2 SFT · LoRA r=64 · 222,606 AMR examples

Pretraining model · rahul24raj/txgemma-4-31b →
Stage 1 · continued pretraining for therapeutics · LoRA r=64, α=256

Training dataset · lysos-amr-stage2 →
222,606 AMR records · 8 priority pathogens · ChEMBL + literature + curated negatives

Demo videos · v1.0 release →
Merged 9:08 walkthrough + 3 individual scenes (agentic flow, system tour, full)

9-minute full walkthrough · ▶ Play (41 MB)

Multi-agent debate · live workspace · real-time scoring · resistance escape map · Pareto frontier

The three-stage fine-tune

Every stage trains a LoRA adapter on top of google/gemma-4-31B-it. All adapters are public on Hugging Face.

Stage 1 · TxGemma-4 31B · LoRA r=64, α=256 · ~2 hr on 1× MI300X · continued pretraining for therapeutics
Stage 2 · lysos-base · LoRA r=64, α=128 · ~3 hr on 1× MI300X · SFT on 222,606 AMR examples
Stage 2.5 · lysos-base-dpo · LoRA r=32, α=64, β=0.1 · ~45 min on 1× MI300X · DPO on 10K hard-negative Pareto pairs
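
For reference, here are the three adapter configurations expressed with Hugging Face PEFT. This is a minimal sketch: the target modules and task type are assumptions, and the repo's own training scripts are authoritative.

from peft import LoraConfig

ATTN_PROJ = ["q_proj", "k_proj", "v_proj", "o_proj"]  # assumed target modules

# Stage 1: continued pretraining for therapeutics
stage1 = LoraConfig(r=64, lora_alpha=256, target_modules=ATTN_PROJ, task_type="CAUSAL_LM")
# Stage 2: SFT on the 222,606-example AMR set
stage2 = LoraConfig(r=64, lora_alpha=128, target_modules=ATTN_PROJ, task_type="CAUSAL_LM")
# Stage 2.5: DPO on 10K hard-negative Pareto pairs (β=0.1 is the DPO loss
# temperature, set on the trainer rather than on the adapter)
stage25 = LoraConfig(r=32, lora_alpha=64, target_modules=ATTN_PROJ, task_type="CAUSAL_LM")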

The multi-agent debate

When you fire /wf design_with_debate, four agent roles take turns; each turn is a separate LLM call.

DESIGNER ─ proposes 3 ──▶ CRITIC ── critiques ──▶ EDITOR ── refines ──▶ STRATEGIST

The Strategist picks a winner, a runner-up, and the next action.
The winning SMILES auto-loads into the 2D builder, 3D pocket viewer, 12-axis reward radar, and resistance escape map.
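
A minimal sketch of one debate turn, assuming a role-scoped complete(role, prompt) helper; the real prompts and orchestration live in the repo's agentic harness.

def debate_round(target, complete):
    """One /wf design_with_debate turn: four sequential role-scoped LLM calls."""
    proposals = complete("DESIGNER", f"Propose 3 candidate SMILES against {target}.")
    critique = complete("CRITIC", "Critique each candidate:\n" + proposals)
    refined = complete("EDITOR", "Refine the candidates per this critique:\n" + critique)
    return complete("STRATEGIST", "Pick winner + runner-up + next action:\n" + refined)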

Why MI300X is load-bearing

192 GB of HBM3 lets us keep the Gemma 4 31B base weights in bf16, the LoRA adapter, the KV cache, and the agent context co-resident on one GPU. The same GPU trains and serves: no tensor parallelism, no model sharding, no migration step. Total wall clock for the three-stage fine-tune: ~6 hours on one GPU.
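
The arithmetic behind that claim, as a quick sanity check (bf16 byte counts only; KV-cache and activation sizes depend on context length and are left inside the headroom):

HBM_GB = 192                 # MI300X HBM3 capacity
weights_gb = 31e9 * 2 / 1e9  # frozen bf16 base: 2 bytes/param ≈ 62 GB
print(f"weights ≈ {weights_gb:.0f} GB, headroom ≈ {HBM_GB - weights_gb:.0f} GB")
# ~130 GB left for the LoRA adapter + optimizer state, KV cache, and agent context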

Run it locally

git clone https://github.com/Rahul-Rajpurohitk/lysos.git
cd lysos
python3 -m venv .venv && source .venv/bin/activate
pip install -e .

# backend: FastAPI on :7860 (runs in the background)
uvicorn workspace.api.server:app --host 0.0.0.0 --port 7860 &

# frontend: Vite dev server
cd workspace/web && npm install && npm run dev
# open http://localhost:5173
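
A quick smoke test once both processes are up; this relies only on FastAPI's auto-generated /docs route, assumed not to be disabled in this app.

import requests  # pip install requests

r = requests.get("http://localhost:7860/docs")  # FastAPI's default Swagger UI route
print("backend up" if r.ok else f"backend returned {r.status_code}")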