feat: implement streaming support for chat and enhance safety review process
- Updated .env.example to include an API key placeholder and configuration instructions.
- Refactored main.py to support streaming responses from the LLM, improving the user experience during chat interactions.
- Enhanced LLMClient with methods for streaming chat and collecting streamed responses.
- Modified the safety review process to pass static analysis warnings to the LLM for better code safety evaluation.
- Improved UI components in chat_view.py to handle streaming messages effectively.
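The streaming chat described above can be sketched as follows. This is a minimal illustration assuming an OpenAI-compatible SSE stream (the `iter_stream_deltas` and `collect_response` names are hypothetical helpers, not necessarily the actual LLMClient API):

```python
import json

def iter_stream_deltas(sse_lines):
    """Parse OpenAI-style SSE lines ('data: {...}') and yield content deltas."""
    for raw in sse_lines:
        if not raw.startswith("data: "):
            continue  # skip keep-alives and blank lines
        payload = raw[len("data: "):]
        if payload.strip() == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            yield delta  # hand each fragment to the UI as it arrives

def collect_response(sse_lines):
    """Accumulate streamed deltas into the full assistant message."""
    return "".join(iter_stream_deltas(sse_lines))

# Canned SSE lines for illustration; a real client would read them from the
# chunked HTTP response of LLM_API_URL (e.g. requests.post(..., stream=True)).
lines = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
print(collect_response(lines))  # → Hello
```

Yielding deltas (rather than returning only the joined string) is what lets chat_view.py render tokens as they arrive while still allowing the caller to collect the complete reply.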
This commit is contained in:

.env.example (18 changes)
@@ -1,4 +1,16 @@
-LLM_API_URL=https://api.siliconflow.cn/v1/chat/completions
-LLM_API_KEY=sk-fxsxbgatrjjhsnjpkdfgfngukqoqqgitjpxfqfxifcipaqpc
+# ========================================
+# LocalAgent example configuration file
+# ========================================
+# Usage:
+# 1. Copy this file to .env
+# 2. Fill in your API Key and other settings
+# ========================================
+
+# SiliconFlow API configuration
+# Get an API Key: https://siliconflow.cn
+LLM_API_URL=https://api.siliconflow.cn/v1/chat/completions
+LLM_API_KEY=your_api_key_here
+
+# Model configuration
+# Intent recognition model (a small model is recommended: faster)
 INTENT_MODEL_NAME=Qwen/Qwen2.5-7B-Instruct
 GENERATION_MODEL_NAME=Qwen/Qwen2.5-72B-Instruct
+
+# Code generation model (a large model is recommended: better results)
+GENERATION_MODEL_NAME=Qwen/Qwen2.5-72B-Instruct
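The variables in the new .env.example can be consumed like this. A minimal sketch assuming plain `os.environ` access (the project may use python-dotenv or similar instead); `load_settings` is a hypothetical helper, and the defaults mirror the values in the example file:

```python
import os

def load_settings(env=os.environ):
    """Read LocalAgent settings, falling back to the .env.example defaults."""
    return {
        "api_url": env.get("LLM_API_URL",
                           "https://api.siliconflow.cn/v1/chat/completions"),
        "api_key": env.get("LLM_API_KEY", ""),  # must be supplied by the user
        "intent_model": env.get("INTENT_MODEL_NAME",
                                "Qwen/Qwen2.5-7B-Instruct"),
        "generation_model": env.get("GENERATION_MODEL_NAME",
                                    "Qwen/Qwen2.5-72B-Instruct"),
    }

# Any mapping works, so the loader is easy to exercise without touching os.environ:
settings = load_settings({"LLM_API_KEY": "sk-test"})
print(settings["intent_model"])  # → Qwen/Qwen2.5-7B-Instruct
```

Note that because the real GENERATION_MODEL_NAME is defined twice in the committed file, the later assignment is the one most .env loaders will keep.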