Misc tweaks to prompt

This commit is contained in:
James Ketr 2025-06-01 15:59:10 -07:00
parent c07510c525
commit bb84709f44
2 changed files with 17 additions and 32 deletions


@@ -447,7 +447,7 @@ Content: { content }
messages.append(
LLMMessage(
role="user",
content=f"<|context|>\n{rag_context.strip()}\n</|context|>\n\n{user_message.content.strip()}\n"
content=f"<|context|>\n{rag_context.strip()}\n</|context|>\n\nPrompt to respond to:\n{user_message.content.strip()}\n"
)
)
else:

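For reference, a minimal sketch of what the reformatted user message looks like after this change. The candidate facts and the prompt below are made up for illustration; only the f-string layout mirrors the line in the diff.

# Illustrative only: reproduces the new message layout with stand-in values.
rag_context = "Jane Doe has ten years of embedded Linux experience."
user_prompt = "Which kernel subsystems has the candidate worked on?"

content = (
    f"<|context|>\n{rag_context.strip()}\n</|context|>\n\n"
    f"Prompt to respond to:\n{user_prompt.strip()}\n"
)
print(content)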

@@ -13,37 +13,14 @@ from models import ( ChatQuery, ChatMessage, Tunables, ChatStatusType, ChatMessa
system_message = f"""
When answering queries, follow these steps:
- First analyze the query to determine if real-time information from the tools might be helpful
- Even when <|context|> or <|resume|> is provided, consider whether the tools would provide more current or comprehensive information
- Use the provided tools whenever they would enhance your response, regardless of whether context is also available
- When presenting weather forecasts, include relevant emojis immediately before the corresponding text. For example, for a sunny day, say \"☀️ Sunny\", or if the forecast says there will be \"rain showers\", say \"🌧️ Rain showers\". Use this mapping for weather emojis: Sunny: ☀️, Cloudy: ☁️, Rainy: 🌧️, Snowy: ❄️
- When any combination of <|context|>, <|resume|> and tool outputs are relevant, synthesize information from all sources to provide the most complete answer
- Always prioritize the most up-to-date and relevant information, whether it comes from <|context|>, <|resume|> or tools
- If <|context|> and tool outputs contain conflicting information, prefer the tool outputs as they likely represent more current data
- If there is information in the <|context|> or <|resume|> sections to enhance the answer, incorporate it seamlessly and refer to it as 'the latest information' or 'recent data' instead of mentioning '<|context|>' (etc.) or quoting it directly.
- Avoid phrases like 'According to the <|context|>' or similar references to the <|context|> or <|resume|>.
- When any content from <|context|> is relevant, synthesize information from all sources to provide the most complete answer.
- Always prioritize the most up-to-date, recent, and relevant information first.
- If there is information in the <|context|> section to enhance the answer, incorporate it seamlessly and refer to it as 'the latest information' or 'recent data' instead of mentioning '<|context|>' (etc.) or quoting it directly.
- Avoid phrases like 'According to the <|context|>' or similar references to the <|context|>.
CRITICAL INSTRUCTIONS FOR IMAGE GENERATION:
Always use <|context|> when possible. Be concise, and never make up information. If you do not know the answer, say so.
1. When the user requests to generate an image, inject the following into the response: <GenerateImage prompt="USER-PROMPT"/>. Do this when users request images, drawings, or visual content.
2. MANDATORY: You must respond with EXACTLY this format: <GenerateImage prompt="{{USER-PROMPT}}"/>
3. FORBIDDEN: DO NOT use markdown image syntax ![](url)
4. FORBIDDEN: DO NOT create fake URLs or file paths
5. FORBIDDEN: DO NOT use any other image embedding format
CORRECT EXAMPLE:
User: "Draw a cat"
Your response: "<GenerateImage prompt='Draw a cat'/>"
WRONG EXAMPLES (DO NOT DO THIS):
- ![](https://example.com/...)
- ![Cat image](any_url)
- <img src="...">
The <GenerateImage prompt="{{USER-PROMPT}}"/> format is the ONLY way to display images in this system.
DO NOT make up a URL for an image or provide markdown syntax for embedding an image. Only use <GenerateImage prompt="{{USER-PROMPT}}"/>.
Always use tools, <|resume|>, and <|context|> when possible. Be concise, and never make up information. If you do not know the answer, say so.
Before answering, ensure you have spelled the candidate's name correctly.
"""
class CandidateChat(Agent):
@@ -59,11 +36,19 @@ class CandidateChat(Agent):
async def generate(
self, llm: Any, model: str, user_message: ChatMessageUser, user: Candidate, temperature=0.7
):
self.system_prompt = """
You are a helpful assistant designed to answer questions about {candidate.full_name}, their resumes, and related topics. You can also generate images based on user requests.
self.system_prompt = f"""
You are a helpful expert system representing {user.first_name}'s work history to potential employers and users curious about the candidate. You want to incorporate as many facts and details about {user.first_name} as possible.
When referencing the candidate, ALWAYS ensure correct spelling.
The candidate's first name is: "{user.first_name}"
The candidate's last name is: "{user.last_name}"
Use that spelling instead of any spelling you may find in the <|context|>.
{system_message}
"""
async for message in super().generate(llm, model, user_message, user, temperature):
yield message
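For context, a rough sketch of how the reworked generate() might be driven. The Candidate and ChatMessageUser constructors, the model name, and the llm handle are assumptions inferred from the names in the diff, not verified against the rest of the repository.

# Hypothetical usage; field names are inferred from the diff and may differ.
async def run_chat(llm, candidate: Candidate, question: str) -> None:
    chat = CandidateChat()
    user_message = ChatMessageUser(content=question)
    async for message in chat.generate(
        llm, model="qwen2.5", user_message=user_message, user=candidate, temperature=0.7
    ):
        print(message)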