An AI-powered emergency response system that provides first aid instructions based on audio descriptions and scene images.
- Audio Transcription: Record emergency descriptions that are transcribed using Whisper
- Image Analysis: Capture and analyze emergency scenes using YOLO object detection
- First Aid Instructions: Receive AI-generated first aid steps based on the emergency data
- Location Tracking: Automatically capture GPS coordinates for emergency services
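The YOLO detections mentioned above need to be condensed into text before they can feed a language model. A minimal sketch of that step (the function name and the `(label, confidence)` input format are illustrative, not the project's actual API):

```python
def summarize_detections(detections, min_conf=0.5):
    """Condense (label, confidence) pairs into a short scene summary.

    The input format is an assumption; real ultralytics YOLO results
    expose detections via their own result objects.
    """
    kept = [(label, conf) for label, conf in detections if conf >= min_conf]
    if not kept:
        return "no objects detected"
    return ", ".join(f"{label} ({conf:.0%})" for label, conf in kept)

# Low-confidence detections are dropped before summarizing.
print(summarize_detections([("person", 0.92), ("bicycle", 0.71), ("dog", 0.30)]))
# → person (92%), bicycle (71%)
```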
Frontend:
- React with Vite
- Tailwind CSS
- JavaScript
- Web APIs (MediaRecorder, Geolocation, etc.)

Backend:
- FastAPI
- Python
- AI Models:
  - Whisper (audio transcription)
  - YOLO (image analysis)
  - OpenAI GPT-4 (first aid response generation)
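The three models above form a pipeline: the Whisper transcript and the YOLO scene analysis are combined into a GPT-4 prompt. A sketch of how that prompt might be assembled (the system prompt and message shape are assumptions, not the project's actual wording):

```python
def build_first_aid_messages(transcript, scene_summary, latitude, longitude):
    """Assemble a chat-completion message list from the pipeline outputs.

    Hypothetical helper: combines the Whisper transcript, a text summary
    of YOLO detections, and GPS coordinates into GPT-4 input.
    """
    system = (
        "You are a first aid assistant. Give short, numbered first aid "
        "steps. Always remind the user to call local emergency services."
    )
    user = (
        f"Emergency description (transcribed): {transcript}\n"
        f"Objects detected in the scene: {scene_summary}\n"
        f"Location: {latitude:.5f}, {longitude:.5f}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_first_aid_messages(
    "My friend cut his hand with a kitchen knife",
    "person (92%), knife (71%)",
    40.71280, -74.00600,
)
```

The returned list is what would be passed to the OpenAI chat-completions API as the `messages` parameter.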
Navigate to the backend directory:

```bash
cd emergency/backend
```
Create a virtual environment:

```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```
Install dependencies:

```bash
pip install -r requirements.txt
```
Set up environment variables:
- Create a `.env` file in the backend directory
- Add your OpenAI API key:

```
OPENAI_API_KEY=your_api_key_here
```
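On the backend side, the key from `.env` has to be read into the process environment before the OpenAI client is created. A minimal sketch, assuming the project loads the file with python-dotenv (or exports the variable some other way) before this runs:

```python
import os

def get_openai_api_key():
    """Return OPENAI_API_KEY from the environment, failing loudly if absent.

    Hypothetical helper: FastAPI does not load .env files by itself, so
    something like python-dotenv's load_dotenv() is assumed to have run.
    """
    key = os.getenv("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set; add it to backend/.env")
    return key
```

Failing at startup with a clear message is friendlier than letting the first GPT-4 call error out mid-request.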
Run the FastAPI server:

```bash
uvicorn main:app --reload
```
Navigate to the frontend directory:

```bash
cd emergency/frontend
```
Install dependencies:

```bash
npm install
```
Run the development server:

```bash
npm start
```
- Open http://localhost:3000 in your browser
- Record an audio description of the emergency
- Take a photo of the emergency scene
- Review the AI-generated first aid instructions
This application is for demonstration purposes only. In a real emergency, always call your local emergency services.
This project was created as part of [Hackathon Name] to demonstrate the potential of AI technologies in emergency response scenarios.