Quickstart
Welcome to OpenMedLLM, the open-source platform for medical AI. This guide will get you up and running in under 5 minutes.
New: OpenMedLLM-70B achieves 78.9% on the MedQA benchmark, surpassing GPT-4 on medical reasoning tasks. See models →
- Installation: Install the Python SDK and get set up in minutes.
- REST API: Use the HTTP API from any language or tool.
- VCF Interpretation: Analyze VCF files and generate clinical reports.
- Fine-tuning: Adapt models to your institution's data and needs.
Installation
Install the OpenMedLLM Python SDK using pip:
```bash
pip install openmedllm

# Or install from source for the latest features
git clone https://github.com/deepcog-ai/openmedllm
cd openmedllm && pip install -e .
```
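To confirm the SDK installed correctly, a quick import check works from any Python session. This is a minimal sketch using only the standard library; `openmedllm` is the package name used by the pip command above:

```python
import importlib.util

def is_installed(package: str) -> bool:
    """Return True if `package` is importable in the current environment."""
    return importlib.util.find_spec(package) is not None

# After `pip install openmedllm`, this should print True
print(is_installed("openmedllm"))
```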
Python SDK: Basic Usage
Load and run the OpenMedLLM-70B model for clinical decision support:
```python
from openmedllm import MedicalLLM

# Initialize the model
model = MedicalLLM("deepcog-ai/OpenMedLLM-70B")

# Run a diagnostic query
result = model.diagnose(
    symptoms="chest pain, dyspnea, diaphoresis",
    patient_age=58,
)

print(result.differential)    # Ranked differential diagnoses
print(result.urgency)         # Triage urgency level
print(result.treatment_plan)  # Full narrative report
```
Clinical Note Processing
Process an entire clinical note and generate a comprehensive report:

```python
from openmedllm import MedicalLLM, ClinicalNoteProcessor

model = MedicalLLM("deepcog-ai/OpenMedLLM-70B")
processor = ClinicalNoteProcessor()

# Load and process the clinical note
findings = processor.load("patient_note_001.txt")

report = model.generate_report(
    findings=findings,
    patient_info={"age": 45, "sex": "F", "ethnicity": "South Asian"},
    format="pdf",
)
report.save("clinical_report_001.pdf")
```
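If you want to pre-segment a raw note yourself before handing it to the model, a simple splitter along these lines can help. This is a hypothetical stdlib-only helper for illustration, not part of the OpenMedLLM API:

```python
import re

def split_sections(note: str) -> dict[str, str]:
    """Split a free-text clinical note into labeled sections
    (e.g. 'HPI:', 'Assessment:'), keyed by lowercased section name."""
    sections: dict[str, str] = {"preamble": ""}
    current = "preamble"
    for line in note.splitlines():
        m = re.match(r"^([A-Z][A-Za-z ]+):\s*(.*)$", line)
        if m:
            current = m.group(1).lower()
            sections[current] = m.group(2)
        else:
            sections[current] += ("\n" if sections[current] else "") + line
    return {k: v.strip() for k, v in sections.items()}
```

Section headers are detected by the `Name:` pattern at the start of a line; everything until the next header is attached to the current section.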
REST API
Use the OpenMedLLM REST API from any programming language. All endpoints require an API key from your account dashboard. The gene and variant below are placeholder values; substitute your own.

```bash
curl -X POST https://api.openmedllm.org/v1/interpret \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openmedllm-70b",
    "gene": "BRCA2",
    "variant": "c.5946delT",
    "population": "South Asian"
  }'
```
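The same request can be issued from Python with only the standard library. This is a sketch assuming the endpoint and JSON fields shown in the curl example; the gene and variant values are placeholders:

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"

payload = {
    "model": "openmedllm-70b",
    "gene": "BRCA2",            # placeholder values for illustration
    "variant": "c.5946delT",
    "population": "South Asian",
}

req = urllib.request.Request(
    "https://api.openmedllm.org/v1/interpret",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# response = urllib.request.urlopen(req)  # uncomment to actually send it
print(req.get_method(), req.full_url)
```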
Docker Deployment
Deploy OpenMedLLM-70B in your own infrastructure using our official Docker image:
```bash
# Pull and run with GPU support
docker pull deepcogai/openmedllm:70b-latest
docker run --gpus all -p 8000:8000 \
  -e MODEL=openmedllm-70b \
  -e QUANTIZATION=4bit \
  deepcogai/openmedllm:70b-latest

# API will be available at http://localhost:8000
```
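For longer-running deployments, the same flags can be expressed as a Docker Compose service. This is a minimal sketch mirroring the `docker run` command above (image name and environment variables as shown there); adjust the GPU reservation to your hardware:

```yaml
# docker-compose.yml (sketch; mirrors the docker run flags above)
services:
  openmedllm:
    image: deepcogai/openmedllm:70b-latest
    ports:
      - "8000:8000"
    environment:
      MODEL: openmedllm-70b
      QUANTIZATION: 4bit
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

Start it with `docker compose up -d`; the API is then served on port 8000 as in the `docker run` example.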