fix: pass editorial brief to LLM prompt + improve missing API key error

- Add 'brief' field to GenerateContentRequest schema
- Pass brief from router to generate_post_text service
- Inject brief as mandatory instructions in LLM prompt with highest priority
- Return structured error when LLM provider/API key not configured
- Show dedicated warning banner with link to Settings when API key missing

Fixes: content ignoring editorial brief, unhelpful API key error messages

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Author: Michele
Date: 2026-04-03 17:22:15 +02:00
parent 2ca8b957e9
commit 7d1b4857c2
4 changed files with 49 additions and 6 deletions

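The first bullet adds a `brief` field to the request schema. A minimal sketch of what that model change might look like, assuming a Pydantic `GenerateContentRequest` (only the fields visible in the diff are shown; the real model presumably has more):

```python
from typing import Optional
from pydantic import BaseModel

class GenerateContentRequest(BaseModel):
    # Fields referenced in the diff; other fields of the real model omitted.
    topic_hint: Optional[str] = None
    model: Optional[str] = None
    # New: free-form editorial brief, forwarded to the LLM prompt.
    brief: Optional[str] = None

req = GenerateContentRequest(brief="Keep it under 100 words, no emoji.")
```

Because the field defaults to `None`, existing clients that do not send `brief` keep working unchanged.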

@@ -78,9 +78,21 @@ def generate_content(
     model = request.model or _get_setting(db, "llm_model", current_user.id)
     if not provider_name:
-        raise HTTPException(status_code=400, detail="LLM provider not configured. Set 'llm_provider' in settings.")
+        raise HTTPException(
+            status_code=400,
+            detail={
+                "message": "Provider AI non configurato. Vai in Impostazioni → Provider AI per scegliere il provider e inserire la API key.",
+                "missing_settings": True,
+            },
+        )
     if not api_key:
-        raise HTTPException(status_code=400, detail="LLM API key not configured. Set 'llm_api_key' in settings.")
+        raise HTTPException(
+            status_code=400,
+            detail={
+                "message": "API key non configurata. Vai in Impostazioni → Provider AI per inserire la tua API key.",
+                "missing_settings": True,
+            },
+        )
     # Build character dict for content service
     char_dict = {
@@ -98,6 +110,7 @@ def generate_content(
         llm_provider=llm,
         platform=request.effective_platform,
         topic_hint=request.topic_hint,
+        brief=request.brief,
     )
     # Generate hashtags
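The new `brief=request.brief` argument presumably reaches a prompt builder inside the content service. A hypothetical sketch of how the brief could be injected "as mandatory instructions with highest priority" (function name, signature, and prompt wording are assumptions for illustration, not the repo's actual code):

```python
from typing import Optional

def build_prompt(platform: str, topic_hint: Optional[str], brief: Optional[str]) -> str:
    """Assemble the LLM prompt. When a brief is present it is placed first
    and flagged as mandatory, so the model weighs it above everything else."""
    parts = []
    if brief:
        parts.append(
            "MANDATORY EDITORIAL BRIEF (follow these instructions above all else):\n"
            f"{brief.strip()}"
        )
    parts.append(f"Write a social media post for {platform}.")
    if topic_hint:
        parts.append(f"Topic hint: {topic_hint}")
    return "\n\n".join(parts)

prompt = build_prompt("instagram", "spring launch", "Keep it under 100 words.")
```

Putting the brief at the top of the prompt, rather than appending it, is one simple way to address the "content ignoring editorial brief" symptom named in the Fixes line.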