GC-QA-RAG is an **enterprise-grade Retrieval-Augmented Generation (RAG) system**.
- 🚀 **Ready-to-use Solution**: Provides complete code from knowledge base construction (ETL), backend services to frontend interfaces, with Docker one-click deployment support, helping developers quickly build high-quality RAG systems.
- 📚 **Comprehensive Documentation**: Offers complete documentation covering product design, technical architecture, and implementation experience - not just open-source code, but a reusable methodology.
## 🧪 AI Evaluation Validation
To validate the system's actual effectiveness, we had multiple mainstream large language models participate in the "HuoZiGe" low-code platform certification exams. The evaluation used three modes: direct answer generation, knowledge base retrieval (GC-QA-RAG), and Agent automatic planning retrieval (based on GC-QA-RAG).
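The "Max Improvement" figures reported below can be understood as the gain of the best retrieval-augmented mode over direct answer generation. A minimal sketch of that comparison, using hypothetical exam scores and assuming improvement is measured in absolute percentage points (both assumptions ours, not from the evaluation itself):

```python
# Hypothetical exam scores (0-100) for one model under the three evaluation modes.
scores = {
    "direct": 62.0,   # direct answer generation
    "rag": 78.5,      # knowledge base retrieval (GC-QA-RAG)
    "agent": 87.9,    # Agent automatic planning retrieval (based on GC-QA-RAG)
}

def max_improvement(scores: dict) -> float:
    """Gain of the best retrieval mode over direct generation, in points."""
    best_retrieval = max(scores["rag"], scores["agent"])
    return round(best_retrieval - scores["direct"], 2)

print(max_improvement(scores))  # 25.9 with the hypothetical scores above
```

The scores and the `max_improvement` helper are illustrative only; the actual evaluation scores come from the certification exam results in the table below.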
**Evaluation Results Summary**:
| Exam Subject | Model | Direct Generation | Knowledge Base Retrieval (RAG) | Agent Automatic Planning Retrieval | **Max Improvement** |
| --- | --- | --- | --- | --- | --- |
- **Agent mode shows the most significant gains**: in all tests, the Agent automatic planning retrieval mode achieved the highest scores, with a maximum improvement of 25.86%
- **RAG technology significantly improves accuracy**: all models showed substantial improvement once given external knowledge base support
- **Claude-4-sonnet demonstrates the best overall performance**: it achieved the highest scores in Agent mode across all three subjects