An AI-powered academic guidance system that provides personalized advice through specialized advisor personas. Get diverse perspectives on your PhD journey from multiple AI advisors, each bringing unique expertise in methodology, theory, and practical guidance.
- Multiple AI Advisor Personas: Chat with 10+ specialized advisors including Methodologist, Theorist, Pragmatist, and more
- Document Upload & Analysis: Upload PDFs, Word documents, and text files for context-aware advice
- Intelligent Document Retrieval (RAG): Advanced semantic search through your uploaded documents
- Multi-LLM Backend: Supports both Gemini API and local Ollama models
- User Authentication: Secure user accounts with persistent chat sessions
- Chat Session Management: Save, load, and manage multiple conversation threads
- Export Capabilities: Export chats and summaries in TXT, PDF, and DOCX formats
- Real-time Chat Interface: Modern, responsive UI with advisor-specific styling
- Technology: React 18 with modern hooks and functional components
- Styling: CSS custom properties with dark/light theme support
- State Management: React Context and hooks
- Authentication: JWT-based authentication with persistent sessions
- Framework: FastAPI with automatic API documentation
- Database: MongoDB for user data and chat sessions
- Vector Database: ChromaDB for document storage and semantic search (see the retrieval sketch after this list)
- LLM Integration: Support for Gemini API and Ollama models
- Document Processing: PDF, DOCX, and text file extraction with intelligent chunking
- Authentication: JWT tokens with bcrypt password hashing
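The retrieval layer can be exercised on its own. Below is a minimal sketch of the chunk-and-query flow, assuming `chromadb` is installed (`pip install chromadb`); the collection name, chunk contents, and IDs are illustrative, not the project's actual implementation:

```python
# Minimal RAG retrieval sketch with ChromaDB (illustrative only).
import chromadb

client = chromadb.Client()  # in-memory; the app may persist to disk instead
collection = client.get_or_create_collection("uploaded_documents")

# Index a couple of hypothetical document chunks
collection.add(
    documents=[
        "Grounded theory is an inductive methodology for qualitative research...",
        "Survey instruments should be piloted before full deployment...",
    ],
    ids=["doc1-chunk0", "doc1-chunk1"],
)

# Semantic search: retrieve the chunk most relevant to a question
results = collection.query(
    query_texts=["How should I validate my survey?"],
    n_results=1,
)
print(results["documents"][0])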
Before you begin, ensure you have the following installed:
- Python 3.8+ (3.9+ recommended)
- Node.js 16+ and npm
- MongoDB (Community Edition)
- Git
```bash
git clone https://github.com/sohank-17/Neon-AI-Project.git
cd Neon-AI-Project
```
Next, install MongoDB.

On Windows:
- Download MongoDB Community Server from mongodb.com
- Install with default settings
- MongoDB will run as a Windows Service automatically
On macOS:
```bash
# Using Homebrew
brew tap mongodb/brew
brew install mongodb-community
brew services start mongodb/brew/mongodb-community
```
On Linux (Ubuntu/Debian):
```bash
# Import the MongoDB public GPG key
wget -qO - https://www.mongodb.org/static/pgp/server-6.0.asc | sudo apt-key add -

# Create the list file
echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/6.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-6.0.list

# Install MongoDB
sudo apt-get update
sudo apt-get install -y mongodb-org

# Start MongoDB
sudo systemctl start mongod
sudo systemctl enable mongod
```
Alternatively, use MongoDB Atlas (cloud-hosted):
- Create a free account at MongoDB Atlas
- Create a new cluster
- Get your connection string
- Skip the local MongoDB setup
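Whichever option you choose, you can confirm connectivity with a short script. A sketch assuming `pymongo` is installed (`pip install pymongo`); swap in your Atlas connection string if you went that route:

```python
# Quick MongoDB reachability check (works for local or Atlas).
from pymongo import MongoClient

uri = "mongodb://localhost:27017"  # or your Atlas connection string
client = MongoClient(uri, serverSelectionTimeoutMS=3000)

try:
    client.admin.command("ping")
    print("MongoDB is reachable")
except Exception as exc:
    print(f"MongoDB connection failed: {exc}")
```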
Next, install Ollama.

On Windows:
- Download Ollama from ollama.ai
- Run the installer
- Ollama will start automatically
On macOS:
```bash
# Using Homebrew
brew install ollama

# Or download from ollama.ai
```
On Linux:
```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Start the Ollama service
sudo systemctl start ollama
sudo systemctl enable ollama
```
Once Ollama is installed, download the recommended models:
```bash
# Download the default model (recommended for development)
ollama pull llama3.2:1b

# Optional: download larger, more capable models
ollama pull llama3.2:3b
ollama pull mistral:7b

# Verify installation
ollama list
```
Note: The `llama3.2:1b` model is small (~1.3GB) and fast, which makes it ideal for development. For production, consider larger models for better response quality.
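As a further sanity check, you can hit Ollama's REST API directly to confirm the pulled model responds. A sketch using the `requests` package (assumed installed):

```python
# Ask the local Ollama server for a single non-streaming completion.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3.2:1b", "prompt": "Say hello.", "stream": False},
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["response"])
```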
With the prerequisites in place, set up the backend:

- Navigate to the backend directory:
```bash
cd multi_llm_chatbot_backend
```
- Create a Python virtual environment:
```bash
# Create virtual environment
python -m venv venv

# Activate virtual environment
# On Windows:
venv\Scripts\activate
# On macOS/Linux:
source venv/bin/activate
```
- Install Python dependencies:
```bash
pip install -r requirements.txt
```
- Set up environment variables:
Create a `.env` file in the `multi_llm_chatbot_backend` directory:
```env
# MongoDB Configuration
MONGODB_CONNECTION_STRING=mongodb://localhost:27017
MONGODB_DATABASE_NAME=phd_advisor

# JWT Configuration
JWT_SECRET_KEY=your-super-secret-jwt-key-change-this-in-production-please-make-it-long-and-random

# Gemini API Configuration (Optional - for cloud LLM)
GEMINI_API_KEY=your_gemini_api_key_here
GEMINI_MODEL=gemini-2.0-flash

# Ollama Configuration (for local LLM)
OLLAMA_BASE_URL=http://localhost:11434

# Application Settings
CORS_ORIGINS=http://localhost:3000
```
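For reference, here is a minimal sketch of how these variables can be read at runtime, assuming `python-dotenv` is installed (`pip install python-dotenv`); the actual backend may load them differently (e.g., via pydantic settings):

```python
# Load .env values into the process environment and read them back.
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory

mongo_uri = os.getenv("MONGODB_CONNECTION_STRING", "mongodb://localhost:27017")
jwt_secret = os.environ["JWT_SECRET_KEY"]  # fail fast if this is missing
print(mongo_uri)
```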
Getting a Gemini API Key (Optional):
- Go to Google AI Studio
- Create a new API key
- Add it to your `.env` file

- Start the backend server:

```bash
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
```
The API will be available at http://localhost:8000, with interactive docs at http://localhost:8000/docs.
Next, set up the frontend:

- Navigate to the frontend directory:
```bash
cd ../phd-advisor-frontend
```
- Install dependencies:
```bash
npm install
```
- Start the development server:
```bash
npm start
```
The application will open at http://localhost:3000
Before diving in, verify that everything is running (the script below automates these checks):
- MongoDB is running (check with `mongosh` or MongoDB Compass)
- Ollama is running with models downloaded (`ollama list`)
- Backend is running on port 8000
- Frontend is running on port 3000
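A small helper that probes each service on its default port; it assumes `requests` and `pymongo` are available and only confirms that each service answers:

```python
# Probe MongoDB, Ollama, the backend, and the frontend dev server.
import requests
from pymongo import MongoClient

def check(name, fn):
    try:
        fn()
        print(f"[ok]   {name}")
    except Exception as exc:
        print(f"[fail] {name}: {exc}")

check("MongoDB", lambda: MongoClient(
    "mongodb://localhost:27017", serverSelectionTimeoutMS=2000
).admin.command("ping"))
check("Ollama", lambda: requests.get(
    "http://localhost:11434/api/tags", timeout=5).raise_for_status())
check("Backend", lambda: requests.get(
    "http://localhost:8000/docs", timeout=5).raise_for_status())
check("Frontend", lambda: requests.get(
    "http://localhost:3000", timeout=5).raise_for_status())
```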
Then create your first user account and start exploring:

- Create an Account:
  - Open http://localhost:3000
  - Click "Sign Up"
  - Fill in your details
- Start Your First Chat:
  - Click "New Chat"
  - Ask a question like "I need help with my research methodology"
  - Get responses from multiple advisor personas
- Upload Documents:
  - Click the upload button in the chat
  - Upload a PDF, DOCX, or TXT file
  - Ask questions about your document
- Manage Chats:
  - Save important conversations
  - Switch between different chat sessions
  - Export chats in various formats
| Variable | Description | Default | Required |
|---|---|---|---|
| `MONGODB_CONNECTION_STRING` | MongoDB connection URL | `mongodb://localhost:27017` | Yes |
| `MONGODB_DATABASE_NAME` | Database name | `phd_advisor` | Yes |
| `JWT_SECRET_KEY` | Secret key for JWT tokens | - | Yes |
| `GEMINI_API_KEY` | Google Gemini API key | - | No |
| `GEMINI_MODEL` | Gemini model to use | `gemini-2.0-flash` | No |
| `OLLAMA_BASE_URL` | Ollama server URL | `http://localhost:11434` | No |
The application supports two LLM providers:
- Ollama (Local, Free):
  - Ensure Ollama is running
  - Models run locally on your machine
  - No API costs, complete privacy
- Gemini (Cloud, Paid):
  - Requires an API key
  - Higher quality responses
  - Faster response times
Switch providers using the API:
```bash
curl -X POST "http://localhost:8000/switch-provider" \
  -H "Content-Type: application/json" \
  -d '{"provider": "ollama"}'
```
Authentication:
- `POST /auth/signup` - Create a new user account
- `POST /auth/login` - Log in with email/password
- `GET /auth/me` - Get the current user profile
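A hedged sketch of the signup/login flow using `requests`; the exact field names (`email`, `password`, `access_token`) are assumptions, so check the live schema at http://localhost:8000/docs before relying on them:

```python
# Sign up, log in, and fetch the current user profile.
import requests

BASE = "http://localhost:8000"

# Create an account (signup may require additional profile fields)
requests.post(f"{BASE}/auth/signup", json={
    "email": "student@example.com",
    "password": "change-me",
}).raise_for_status()

# Log in and capture the JWT for authenticated calls
login = requests.post(f"{BASE}/auth/login", json={
    "email": "student@example.com",
    "password": "change-me",
})
login.raise_for_status()
token = login.json()["access_token"]  # assumed response field

# Fetch the current user profile with the bearer token
me = requests.get(f"{BASE}/auth/me",
                  headers={"Authorization": f"Bearer {token}"})
print(me.json())
```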
Chat:
- `POST /chat-sequential` - Get responses from all advisors
- `POST /chat/{persona_id}` - Chat with a specific advisor
- `POST /reply-to-advisor` - Reply to a specific advisor message
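For example, asking all advisors one question might look like the following; the payload shape (`{"message": ...}`) and bearer-token auth are assumptions to verify against the interactive docs:

```python
# Ask every advisor persona the same question in one call.
import requests

resp = requests.post(
    "http://localhost:8000/chat-sequential",
    json={"message": "I need help with my research methodology"},
    headers={"Authorization": "Bearer <your-token>"},
    timeout=120,  # sequential multi-advisor responses can take a while
)
resp.raise_for_status()
print(resp.json())
```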
Documents:
- `POST /upload-document` - Upload PDF, DOCX, or TXT files
- `GET /uploaded-files` - List uploaded files
- `GET /document-stats` - Get document statistics
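A hedged sketch of a multipart upload; the form field name (`"file"`) is an assumption, so confirm it in the `/docs` schema:

```python
# Upload a PDF for context-aware advice.
import requests

with open("literature_review.pdf", "rb") as fh:
    resp = requests.post(
        "http://localhost:8000/upload-document",
        files={"file": ("literature_review.pdf", fh, "application/pdf")},
        headers={"Authorization": "Bearer <your-token>"},
    )
resp.raise_for_status()
print(resp.json())
```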
Session:
- `GET /context` - Get current session context
- `POST /reset-session` - Reset the current session
- `GET /session-stats` - Get session statistics
Export:
- `GET /export-chat` - Export a chat (txt, pdf, docx)
- `GET /chat-summary` - Generate a chat summary
Full API documentation is available at http://localhost:8000/docs when the server is running.
Backend won't start:
```bash
# Check if port 8000 is already in use
netstat -an | grep :8000

# Check that the Python virtual environment is activated
which python  # Should point to your venv

# Check that all dependencies are installed
pip list
```
MongoDB connection issues:
```bash
# Test MongoDB connection
mongosh

# Check if the MongoDB service is running
# Windows: Check Services app
# macOS: brew services list | grep mongodb
# Linux: systemctl status mongod
```
Ollama not working:
```bash
# Check if Ollama is running
curl http://localhost:11434/api/tags

# Check downloaded models
ollama list

# Test a model directly
ollama run llama3.2:1b "Hello"
```
Frontend won't connect to backend:
- Verify the backend is running on port 8000
- Check the CORS settings in the backend `.env`
- Check the browser developer console for errors
- For faster local LLM responses:
  - Use smaller models like `llama3.2:1b` for development
  - Ensure sufficient RAM (8GB+ recommended)
  - Use SSD storage for faster model loading
- For better document search:
  - Upload focused, relevant documents
  - Use clear, descriptive filenames
  - Break large documents into smaller sections
- For production deployment:
  - Use larger, more capable models
  - Consider GPU acceleration for Ollama
  - Use MongoDB Atlas for the cloud database
  - Set up proper authentication and HTTPS
```bash
# Backend tests
cd multi_llm_chatbot_backend
python -m pytest app/tests/

# Test specific functionality
python app/tests/test_rag_system.py
python app/tests/debug_rag.py
```
```
phd-advisor-panel/
├── multi_llm_chatbot_backend/
│   ├── app/
│   │   ├── api/routes/   # API route handlers
│   │   ├── core/         # Core business logic
│   │   ├── llm/          # LLM client implementations
│   │   ├── models/       # Data models and schemas
│   │   ├── utils/        # Utility functions
│   │   └── tests/        # Test files
│   ├── requirements.txt
│   └── .env
├── phd-advisor-frontend/
│   ├── src/
│   │   ├── components/   # React components
│   │   ├── pages/        # Page components
│   │   ├── styles/       # CSS files
│   │   └── utils/        # Frontend utilities
│   ├── package.json
│   └── public/
└── README.md
```
To add a new advisor persona:
- Edit `app/models/default_personas.py`
- Add your persona configuration (see the illustrative sketch below)
- Restart the backend server
- The new persona will be available in chat
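The shape below is hypothetical; mirror the structure of the existing entries in `default_personas.py` rather than copying it verbatim:

```python
# Hypothetical persona entry; the real schema lives in
# app/models/default_personas.py, so match the existing entries there.
NEW_PERSONA = {
    "id": "statistician",
    "name": "Statistician",
    "description": "Advises on study design, power analysis, and inference.",
    "system_prompt": (
        "You are a statistics advisor for PhD students. "
        "Give concrete, assumption-checked recommendations."
    ),
}
```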
To add support for a new document type:
- Add the new file type to `app/utils/document_extractor.py`
- Update the upload endpoint in `app/api/routes/documents.py`
- Test with sample files
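As an illustration, a new extractor might look like this; the function name and dispatch mechanism are hypothetical, since the real logic lives in `app/utils/document_extractor.py`:

```python
# Hypothetical extractor for Markdown uploads; the project's actual
# extractor interface may differ in shape.
def extract_text_from_markdown(file_bytes: bytes) -> str:
    """Decode a .md upload to plain text for chunking and indexing."""
    return file_bytes.decode("utf-8", errors="replace")
```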
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
- Check the API Documentation
- Report bugs by opening an issue
- Request features by opening an issue
- Contact the development team
- Built with FastAPI and React
- Powered by Ollama for local LLM support
- Uses ChromaDB for vector storage
- Document processing with PyPDF2 and python-docx
Β© 2025 University of Colorado Boulder. All rights reserved.
This project is developed and maintained by the University of Colorado Boulder for academic and research purposes.