RAG-based, evidence-grounded AI follow-up system for depression treatment adherence and progress tracking.
Project Title: Adaptive RAG for Evidence-Grounded Depression Care
Core Idea
Build a Retrieval-Augmented Generation (RAG) system that:
Grounds responses in verified clinical sources
Tracks patient progress over time
Monitors treatment adherence
Detects risk signals
Adapts engagement strategy
Escalates safely when required
This is not a therapist. This is a clinical follow-up assistant.
The Problem
Patients with depression often:
Stop medication early
Miss therapy sessions
Drop out of treatment
Fail to report worsening symptoms
Existing chatbots:
Are mostly scripted
Are not evidence-grounded
Do not track longitudinal risk
Do not enforce safety citations
Your system addresses these gaps.
Objectives
Ensure responses are evidence-based (RAG)
Maintain longitudinal patient state
Predict dropout risk
Detect self-harm risk
Personalize engagement
Provide escalation pathways
Maintain audit traceability
System Architecture
Layer 1: User Interface
Mobile/web chat interface
Mood check-ins
Medication reminders
Progress dashboard
Layer 2: Risk & State Engine
Tracks:
PHQ-9 scores
Mood trend
Medication adherence
Engagement frequency
Sentiment change
Risk score
This becomes your longitudinal state model.
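The tracked quantities above can be sketched as a per-patient state record. This is a minimal illustration; the field names (phq9_history, mood_scores, doses_taken, and so on) are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the Layer 2 longitudinal state record.
# Field names and value ranges are illustrative assumptions.
@dataclass
class PatientState:
    patient_id: str
    phq9_history: list = field(default_factory=list)  # (week, score) tuples, score 0-27
    mood_scores: list = field(default_factory=list)   # daily check-ins in [-1.0, 1.0]
    doses_taken: int = 0
    doses_scheduled: int = 0
    checkins_last_week: int = 0

    def adherence_rate(self) -> float:
        """Fraction of scheduled medication doses actually taken."""
        if self.doses_scheduled == 0:
            return 1.0
        return self.doses_taken / self.doses_scheduled

    def mood_trend(self, window: int = 7) -> float:
        """Mean change between consecutive mood check-ins; negative = worsening."""
        recent = self.mood_scores[-window:]
        if len(recent) < 2:
            return 0.0
        deltas = [b - a for a, b in zip(recent, recent[1:])]
        return sum(deltas) / len(deltas)
```

In practice one record per patient would be persisted (e.g. in the PostgreSQL database named later) and updated after every check-in or medication log.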
Layer 3: RAG Engine
Pipeline:
User message
↓
Risk classifier
↓
Retriever (vector database)
↓
Evidence selection
↓
LLM generation
↓
Safety filter
↓
Response with citation
All medical claims must be backed by retrieved evidence.
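The pipeline above can be sketched as a chain of small functions. Every component here is a stub standing in for the real thing: the classifier is a keyword heuristic, the retriever returns a canned chunk instead of querying FAISS/Chroma, and the generator echoes evidence instead of calling an LLM.

```python
# Minimal sketch of the Layer 3 pipeline with stubbed components.

EMERGENCY_LINE = "If you are in crisis, contact local emergency services now."

def classify_risk(message: str) -> str:
    """Stub risk classifier: keyword heuristic only, not a real model."""
    return "high" if "hopeless" in message.lower() else "low"

def retrieve(message: str, k: int = 3) -> list:
    """Stub retriever: would query the vector DB; returns (chunk, source) pairs."""
    return [("Continue medication as prescribed.", "WHO depression guidelines")]

def generate(message: str, evidence: list) -> str:
    """Stub LLM call: echoes the top evidence chunk with its citation."""
    chunk, source = evidence[0]
    return f"{chunk} [Source: {source}]"

def safety_filter(response: str) -> str:
    """Block any draft that carries no citation."""
    if "[Source:" not in response:
        raise ValueError("Uncited medical claim blocked")
    return response

def answer(message: str) -> str:
    if classify_risk(message) == "high":
        return EMERGENCY_LINE  # bypass generation, escalate immediately
    draft = generate(message, retrieve(message))
    return safety_filter(draft)
```

Note the ordering: the risk classifier runs before retrieval so that a high-risk message never waits on generation.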
Layer 4: Adaptive Engagement Model
Based on user state:
Low engagement → gentle reminder
Stable mood → maintenance check-in
Worsening trend → increased follow-up
High risk → crisis protocol
This adaptive policy is where the project's novelty sits.
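The state-to-action mapping above can be written as a simple policy function. The thresholds below are illustrative assumptions, not clinically validated values.

```python
# Sketch of the Layer 4 engagement policy. Threshold values are placeholders.

def engagement_action(risk: float, mood_trend: float, checkins_last_week: int) -> str:
    """Map the current patient state to one of the four engagement strategies."""
    if risk >= 0.7:
        return "crisis_protocol"       # high risk -> escalate
    if mood_trend < -0.1:
        return "increase_followup"     # worsening trend
    if checkins_last_week < 2:
        return "gentle_reminder"       # low engagement
    return "maintenance_checkin"       # stable mood
```

Ordering matters here: risk is checked first so a high-risk patient is escalated even if they are engaging regularly.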
Sources to index:
WHO depression guidelines
National clinical guidelines
CBT manuals
Patient education materials
Crisis management protocols
Documents are chunked and embedded.
Tools:
SentenceTransformers
FAISS / Chroma
Each chunk stores:
Source
Section
Trust score
Date
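One plausible shape for that per-chunk metadata is a small frozen record stored alongside each embedding in the vector store. Field names, the trust score scale, and the example values are all assumptions for illustration.

```python
from dataclasses import dataclass

# Hypothetical per-chunk metadata record; values below are illustrative only.
@dataclass(frozen=True)
class ChunkMetadata:
    source: str        # e.g. "WHO depression guidelines"
    section: str       # heading or section identifier within the document
    trust_score: float # 0.0-1.0, manually assigned per source tier
    date: str          # publication/revision date, ISO 8601

meta = ChunkMetadata(
    source="WHO depression guidelines",
    section="Pharmacological treatment",
    trust_score=0.95,
    date="2023-01-01",
)
```

At retrieval time the trust score and date let the evidence-selection step prefer newer, higher-tier sources over older or lower-tier ones.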
Risk Score = weighted function of:
PHQ-9
Negative sentiment
Sudden language shift
Missed medication logs
Drop in engagement
Example:
Risk = (0.4 × PHQ) + (0.3 × sentiment) + (0.2 × engagement drop) + (0.1 × adherence gap), with each component normalized to [0, 1] (e.g. the raw PHQ-9 total divided by its maximum of 27).
High risk triggers escalation.
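The weighted formula above translates directly into code. The weights come from the example; normalizing PHQ-9 by its maximum of 27 is an assumption so that all four terms share the same [0, 1] scale.

```python
def risk_score(phq9: int, neg_sentiment: float,
               engagement_drop: float, adherence_gap: float) -> float:
    """Weighted risk score from the example formula.

    phq9 is the raw PHQ-9 total (0-27), normalized here to [0, 1];
    the other three inputs are assumed already in [0, 1].
    """
    phq_norm = phq9 / 27.0
    return (0.4 * phq_norm
            + 0.3 * neg_sentiment
            + 0.2 * engagement_drop
            + 0.1 * adherence_gap)
```

With this normalization the score itself lies in [0, 1], so an escalation threshold (e.g. the 0.7 used in the engagement policy) has a fixed meaning across patients.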
Before sending a response, the safety filter must:
Block harmful advice
Check for hallucination
Verify citation exists
Add emergency contacts if needed
If suicidal ideation is detected: trigger the immediate escalation protocol.
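A minimal sketch of those pre-send checks follows. The keyword list and footer text are placeholders; a production system would pair a trained risk classifier with human review, since keyword matching alone misses paraphrases and produces false positives.

```python
# Sketch of the pre-send safety checks. Keywords and messages are placeholders.

CRISIS_KEYWORDS = {"suicide", "kill myself", "end my life"}
EMERGENCY_FOOTER = "If you are in crisis, contact your local emergency number now."

def check_response(user_message: str, response: str, citations: list) -> str:
    """Apply the safety gates: crisis escalation first, then citation check."""
    text = user_message.lower()
    if any(kw in text for kw in CRISIS_KEYWORDS):
        # Immediate escalation path: in the full system this would also
        # notify a clinician, not just swap the reply text.
        return EMERGENCY_FOOTER
    if not citations:
        raise ValueError("Blocked: medical response has no supporting citation")
    return response
```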
Evaluation Metrics
Clinical:
PHQ-9 change over 8 weeks
Engagement:
Retention rate
Response frequency
Safety:
False-negative rate in risk detection
Escalation accuracy
RAG Quality:
Evidence citation accuracy
Hallucination rate
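Citation accuracy, for instance, can be measured over simulated conversations as the fraction of responses whose cited source actually appears among the chunks retrieved for that turn. The tuple shape used below is an assumption about how evaluation logs might be stored.

```python
def citation_accuracy(responses: list) -> float:
    """Fraction of responses whose cited source is among their retrieved chunks.

    Each item is a (cited_source, retrieved_sources) pair; this log
    format is an illustrative assumption.
    """
    if not responses:
        return 0.0
    correct = sum(1 for cited, retrieved in responses if cited in retrieved)
    return correct / len(responses)
```

The complementary hallucination rate can be estimated the same way: the fraction of responses making a claim not supported by any retrieved chunk.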
Use simulated users first. Human trials require ethics approval.
Tech Stack
Backend: Python + FastAPI
AI: Llama 3 / Mistral, LangChain / LlamaIndex
Vector DB: FAISS
Database: PostgreSQL
Frontend: React / Streamlit
Deployment: AWS / GCP / Local server
Key Challenges
Ethical clearance
Data privacy compliance
Avoiding hallucinations
Avoiding medical liability
Ensuring human-in-the-loop safety