fix: pass editorial brief to LLM prompt + improve missing API key error

- Add 'brief' field to GenerateContentRequest schema
- Pass brief from router to generate_post_text service
- Inject brief as mandatory instructions in LLM prompt with highest priority
- Return structured error when LLM provider/API key not configured
- Show dedicated warning banner with link to Settings when API key missing

Fixes: content ignoring editorial brief, unhelpful API key error messages

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Author: Michele
Date: 2026-04-03 17:22:15 +02:00
parent 2ca8b957e9
commit 7d1b4857c2
4 changed files with 49 additions and 6 deletions

@@ -15,6 +15,7 @@ def generate_post_text(
     llm_provider: LLMProvider,
     platform: str,
     topic_hint: str | None = None,
+    brief: str | None = None,
 ) -> str:
     """Generate social media post text based on a character profile.
@@ -23,6 +24,7 @@ def generate_post_text(
         topic_hint: Optional topic suggestion to guide generation.
         llm_provider: LLM provider instance for text generation.
         platform: Target platform (e.g. 'instagram', 'facebook', 'tiktok', 'youtube').
+        brief: Optional editorial brief with narrative technique and detailed instructions.

     Returns:
         Generated post text as a string.
@@ -78,8 +80,17 @@ def generate_post_text(
     if topic_hint:
         topic_instruction = f" The post should be about: {topic_hint}."

+    # Brief is the highest-priority instruction; it overrides defaults
+    brief_instruction = ""
+    if brief:
+        brief_instruction = (
+            f"\n\nISTRUZIONI OBBLIGATORIE DAL BRIEF EDITORIALE:\n{brief}\n"
+            f"Rispetta TUTTE le istruzioni del brief. "
+            f"Il brief ha priorità su qualsiasi altra indicazione."
+        )
+
     prompt = (
-        f"{guidance}{topic_instruction}\n\n"
+        f"{guidance}{topic_instruction}{brief_instruction}\n\n"
         f"Write the post now. Output ONLY the post text, nothing else."
     )