Prompt Guidance for Anthropic Claude and AWS Titan on AWS Bedrock
When working with advanced AI models like Anthropic’s Claude and AWS Titan, precise communication is key. Both models require specific prompting techniques to maximize their potential. For Claude, clear instructions, structured prompts, and role assignments lead to better accuracy and responsiveness. On the other hand, AWS Titan thrives on concise, well-defined requests and delivers streamlined outputs by default. This guide explores how to optimize your interactions with these models by leveraging best practices such as XML tags, role prompting, and providing clear examples for improved performance.
Anthropic Claude:
Claude responds best to clear and direct instructions.
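To illustrate the difference, compare a vague request with a direct one. A minimal sketch in Python; the prompt wording and the sample email are invented for illustration:

```python
# Vague: Claude must guess what "about" means and how long to answer.
vague_prompt = "Tell me about this email."

# Clear and direct: the task, the input, and the expected output are explicit.
direct_prompt = (
    "Summarize the email below in two sentences, "
    "then list any action items as bullet points.\n\n"
    "Email: Hi team, the launch moved to Friday. "
    "Please update the status page and notify support."
)
```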
Assigning roles (aka role prompting):
Claude sometimes needs context about what role it should inhabit. Assigning roles changes Claude’s response in two ways:
1. Improved accuracy in certain situations (such as mathematics)
2. Changed tone and demeanor to match the specified role
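One simple way to assign a role is to prepend a role description to the question. A minimal sketch; the role text and question are illustrative:

```python
# Role prompting: prepend a role description so Claude adopts the
# matching expertise and tone before answering the actual question.
role = "You are a veteran tax accountant who explains rules in plain English."
question = "Can I deduct my home office if I also rent a coworking desk?"

prompt = f"{role}\n\n{question}"
```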
Use XML tags:
Disorganized prompts are hard for Claude to comprehend.
Just like section titles and headers help humans better follow information, using delineators like XML tags <></> helps Claude understand the prompt’s structure.
A best practice is to use XML tags, as Claude has been specifically trained to recognize them.
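For example, tags can delineate instructions from the text being processed. The tag names below are a common convention rather than a fixed schema:

```python
# XML tags separate the parts of the prompt so Claude can tell the
# instructions apart from the data it should operate on.
instructions = "Extract every person's name mentioned in the text."
text = "Yesterday, Maria met Chen and Priya at the conference."

prompt = (
    f"<instructions>{instructions}</instructions>\n"
    f"<text>{text}</text>"
)
```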
Separating data from instructions
Including input data directly in prompts can make prompts overly long and hard to troubleshoot.
Separating prompt structure from input data allows for:
– Easier editing of the prompt itself
– Faster processing of multiple datasets with the same prompt
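In practice this means keeping a reusable template and substituting each record into it. A minimal sketch, with invented template wording and sample data:

```python
# The prompt template lives in one place; input data is substituted per
# record, so editing the template never touches the data pipeline.
TEMPLATE = (
    "Classify the sentiment of the review inside <review></review> "
    "as positive, negative, or neutral.\n\n"
    "<review>{review}</review>"
)

def build_prompt(review: str) -> str:
    """Substitute one record into the shared template."""
    return TEMPLATE.format(review=review)

reviews = ["Great battery life!", "Screen cracked after a week."]
prompts = [build_prompt(r) for r in reviews]
```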
Thinking step by step
Claude benefits from having time to think through tasks before executing. Especially if a task is particularly complex, tell Claude to think step by step before it answers.
This increases the intelligence of responses, but also increases latency by lengthening the output.
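The instruction itself can be as simple as an appended sentence. A sketch with an invented arithmetic task:

```python
base_prompt = (
    "A store sells pens at $2 each and notebooks at $5 each. "
    "If I buy 3 pens and 2 notebooks, how much do I spend?"
)

# Appending an explicit step-by-step instruction trades a longer
# (higher-latency) answer for more careful reasoning.
stepwise_prompt = (
    base_prompt + "\n\nThink step by step before giving your final answer."
)
```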
Using examples
Examples are probably the single most effective tool for getting Claude to behave as desired. Make sure to give Claude examples of common edge cases.
Generally more examples = more reliable responses at the cost of latency and tokens.
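A few-shot prompt simply lists input/output pairs before the real query, including at least one edge case. The example messages and categories below are invented:

```python
# Few-shot prompting: show Claude labeled examples, including an edge
# case (an empty message), then leave the final label for it to fill in.
examples = [
    ("Order #123 arrived broken", "complaint"),
    ("Where is my tracking number?", "question"),
    ("", "unknown"),  # edge case: empty message
]

shots = "\n".join(f"Message: {msg}\nCategory: {label}" for msg, label in examples)
prompt = (
    "Categorize each customer message as complaint, question, or unknown.\n\n"
    f"{shots}\n"
    "Message: Thanks, the refund came through!\nCategory:"
)
```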
AWS Titan:
The following pattern works well with Titan:
Instruction: your request to the model (e.g. summarize, answer from the text)
Titan works well across common tasks such as:
– Dialogue and role play
– Translation
– Classification, metadata extraction, and analysis
Prompt and Output Guidance
Titan outputs concise, short answers by default, usually a single line or paragraph.
- Be specific about the number of sentences, bullet points, or paragraphs you want
- Add instructions to the prompt to get more detailed answers
- For a longer {context + input}, provide the instruction or output indicator at the end for better results
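Putting these together, a Titan prompt can place the context first and a concise, size-specific instruction last. A minimal sketch; the context text is illustrative:

```python
# For Titan: keep the request concise, be explicit about output size,
# and put the instruction after a long context rather than before it.
context = (
    "Amazon Bedrock is a managed service that offers foundation models "
    "from several providers through a single API."
)

titan_prompt = (
    f"{context}\n\n"
    "Instruction: Summarize the text above in exactly 2 sentences."
)
```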
In this blog, we dove into practical tips for getting the most out of Anthropic Claude and AWS Titan on AWS Bedrock. By using structured prompts, role assignments, and clear examples, you can greatly enhance the accuracy and efficiency of your AI interactions. If you’re looking to fine-tune your AI strategies or need help implementing these best practices, don’t hesitate to reach out to us. We’re here to help you get the best results from your AI models.