AI Suggest Prompt Master Guide
AI Suggest is designed to help you build assessments faster by generating sections, subsections, questions, and report text based on your prompts.
You can use it to build entire assessments and sections from scratch or to speed up the parts that usually take the most time, like drafting report feedback for different rating bands or writing questions in a specific style.
But as with any AI tool that relies on context, what you put in really matters. A strong prompt gives the AI a clear direction. A vague one makes it guess. The good news is that, with a little structure and a few good habits, you can achieve great results without much trial and error.
This guide walks through how to write better prompts for AI Suggest, with real examples, tips, and small tweaks that can help you spend less time editing and more time building assessments that work.
Give AI Suggest a flying start
Before you write a prompt for AI Suggest, take a moment to check how you've named and described your assessment. These fields aren't just for your own reference; they act as the core context for AI Suggest when it generates your questions, sections, and feedback. This is the base it uses to understand what your assessment is about, who it’s for, and what tone and structure it should aim for. The more specific and informative this context is, the more precise and relevant the AI’s output will be.
Assessment Title
Instead of something generic like “Leadership”, try something more specific, like “Leadership Skills for Mid-Level Managers in Cross-Functional Teams.” This immediately provides the AI with clues about the target audience, the skill area, and the environment in which those skills are applied.
Assessment Description
This is where you can add nuance and scope. Think about what this assessment is measuring, what kind of outcomes you're aiming for, and what would make a strong or weak performance.
Description Example
"This assessment is designed to help mid-level managers reflect on their leadership capabilities in areas like communication, delegation, and team development. It provides insight into current strengths and highlights areas for improvement. Results are used to support targeted coaching and leadership development planning."
Even if the prompts you write later are short, this background will do a lot of heavy lifting behind the scenes. If your description is short and vague, you’ll still get output from AI Suggest, but it’s likely to feel generic, misaligned, or simply not that useful. You might find yourself deleting entire sections and starting over because there wasn’t enough to go on.
Get down to the details in your prompts
Whether you’re using AI Suggest to build out sections, subsections, and questions, or to generate feedback text, it works best when you’re super clear about what you want. Being specific about the details helps AI Suggest produce results that are relevant, well written, and easy to drop straight into your assessment without extensive editing. So, how do you do it?
Give AI “an inch”, and it won’t take a mile.
One of the simplest ways to improve your results with AI Suggest is to tell it exactly how much content you want. AI is good at handling simple instructions, such as “write questions about communication,” but if you don’t provide a specific number, you’re asking it to guess what “enough” looks like. That usually means more editing and back-and-forth than necessary.
Giving numerical direction to AI Suggest sets a clear boundary to work within. It will know how many items to generate, how focused to be, and how to shape a response that matches your needs.
Tip
Instead of: "Write questions about budgeting skills."
Try: “Write 5 questions, suitable for multiple choice answers, related to this subsection on Budgeting Skills. Focus particularly on planning and cost awareness."
This kind of specificity will have a big impact on the result, and you’ll get a defined set of content that’s easier to review and edit. But hey, if you don’t love what it gives you, it’s quicker to regenerate a new set than to trim down something that doesn't hit the mark.
Tip
You can use this approach for feedback prompts, too, by saying:
“Keep the feedback under 25 words.”
“Write 3 bullet points, each no longer than 15 words.”
AI Suggest will stay within that scope, which gives you more control over both the volume and quality of what you receive.
Set up your default answers for AI Suggest questions
The answer type you set in your question-level AI Suggest prompts directly influences the kind of questions AI Suggest generates. If you’ve chosen Slider questions on a scale of always-to-never, you’ll get questions that make sense on an always-to-never scale.
This works with any question type, provided that your default answers are already defined. Default answers are super easy to set up.
For example, you can add a scale for aptitude levels, maturity, confidence, or a number scale like 0-8.
This will help you get the most out of your prompting efforts, as you won’t need to rewrite or reformat questions to match your intended answer type.
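Example Default Answer Groups
For instance, an always-to-never frequency scale could use: Always, Often, Sometimes, Rarely, Never. A confidence scale could run from “Not at all confident” to “Extremely confident.” These labels are just illustrations; choose whatever granularity fits your questions.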
Set the tone
AI Suggest will write in the tone or style you request. If you want formal language, you can say that. If you’re aiming for something more conversational or approachable, you can also request that. The more you guide the tone, the more the output will feel like it fits your audience.
This matters most when you're creating rating feedback or report text, where tone plays a big role in how the feedback is received. A coaching-style comment might work well for early-career professionals, while a senior audience might expect something more direct and concise.
You can also set expectations around length. If you want bullet points, short answers, or a detailed paragraph, include that in the prompt. AI Suggest will follow that structure.
Tone Prompt Examples
“Write in a friendly and supportive tone.”
“Use a professional tone suitable for senior executives, and provide feedback in bullet points.”
“Make the feedback conversational and jargon-free.”
By default, the tone is neutral and safe, but that might not be what you want. If you skip this, you may end up rewriting feedback that sounds too vague or too stiff. Setting tone and style upfront helps avoid that and gives you content that’s closer to ready.
Tip
To apply this to all your prompts, update the default prompts in the Assessment Manager.
Show AI Suggest your homework
If your assessment is built around a specific model, framework, or methodology, include that in your prompt. AI Suggest works with both well-known frameworks and your own proprietary ones, as long as it has something to reference.
You can name the framework directly (for example, “Use the GROW coaching model”) or include a URL/link to an article, guide, or resource, and the AI will take it into account when generating content. Just be mindful of plagiarism: if you’re using someone else’s framework, make sure you have permission to use it, or give it proper credit.
This method works especially well for assessments based on industry standards or internal IP. The base framework doesn’t need to be widely known; a public blog post, white paper, or PDF hosted on your website is usually enough.
Example Reference Prompts
“Use the ICO’s GDPR requirements for small businesses to structure the core assessment sections.”
“Refer to our process here: https://example.com/methodology-overview”
“Build questions based on this competency framework: https://yourdomain.com/skills-map”
If AI Suggest understands how you think about a topic, it’s more likely to generate output that fits your approach, meaning less rewriting and more consistency across your assessments.
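Putting the earlier tips together, a single prompt can combine a framework reference, a count, and a tone. For example:
“Using the GROW coaching model, write 5 multiple-choice questions for this subsection on Goal Setting, in a friendly and supportive tone.”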
Speak your language
AI Suggest gives results in whatever language you use when writing the prompt. If you write in English, you will receive an answer in English. If you write in French, Spanish, German, or any other supported language, it will respond in that language. This makes it easy to create assessments for international audiences.
This applies to report text, rating band feedback, and any other content generated through AI Suggest, such as sections, subsections, and questions.
You may also use AI Suggest to translate your assessment into different languages. Just be clear about which version is the reference point, and bear in mind that multilingual assessments must share exactly the same structure to be pooled into a cohort report.
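Example Translation Prompt
“Translate the feedback text in this section into French, keeping the meaning and tone of the original English.”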
Tip
The more fluent and natural your prompt is, the better the response will be. If you’re writing in another language, write as you normally would. There’s no need to simplify your grammar or vocabulary.
Hide rating band scores
This one is specific to ratings. Ratings are score bands that define the feedback a respondent receives in their report, and they’re often labeled as High, Medium, and Low, or Green, Orange, and Red (like traffic lights). But AI isn’t the best at interpreting these names, so instead, we pull the score band for the rating into the prompt.
If you don’t want AI to reference these scores in the suggested feedback, simply instruct it not to, and it will remove the numbers.
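For example, adding a line like this to your rating prompt will keep the numbers out of the feedback:
“Do not mention the score range or any numbers in the feedback.”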
Maintain the anatomy of AI Suggest prompts (merge strings)
In your default prompts, you’ll see some information in curly brackets. Don’t delete or overwrite it! These are merge strings that pull that all-important assessment context into your prompt.
Here’s a quick guide to what they all mean:
Sections
What are the topics I should cover in an assessment on {AssessmentName} (one - two words)
{AssessmentName}: The name of your assessment
Note: You can change the word count suggestion
Subsections
What are the top 5 sub topics I should cover for {SectionName} in an assessment on {AssessmentName}
{SectionName}: The name of the section
{AssessmentName}: The name of your assessment
Questions (and answers)
What questions should I ask to measure them on {SubsectionName} using answers of ({Default Answer Group Name})
{SubsectionName}: The name of the subsection
{Default Answer Group Name}: The type of answers you have chosen (e.g. Always to never, Yes/No, three answers).
Answer Report Text
In a paragraph, what feedback would you recommend to a respondent that answered {Answer} for {QuestionName}
{Answer}: The answer that has been selected
{QuestionName}: The question that is associated with the answer
Assessment Rating Text
In a paragraph, what feedback would you recommend to a respondent that scored {ScoreRange} for {AssessmentName}
{ScoreRange}: The score range for that rating band
{AssessmentName}: The name of your assessment
Section Rating Text
In a paragraph, what feedback would you recommend to a respondent that scored {ScoreRange} for {SectionName}
{ScoreRange}: The score range for that rating band
{SectionName}: The name of your section
Subsection Rating Text
In a paragraph, what feedback would you recommend to a respondent that scored {ScoreRange} for {SubsectionName}
{ScoreRange}: The score range for that rating band
{SubsectionName}: The name of your subsection
Segmentation Rating Text
In a paragraph, what feedback would you recommend to a respondent that scored {ScoreRange} for {SegmentationName}
{ScoreRange}: The score range for that rating band
{SegmentationName}: The name of your segmentation
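If it helps to picture what happens behind the scenes, here’s a minimal sketch of that substitution in Python. It’s purely illustrative (AI Suggest resolves merge strings for you, and the values below are made up), but it shows why the curly brackets matter:

# Purely illustrative: AI Suggest performs this substitution for you.
# The template is the default Subsections prompt shown above; the values
# stand in for your assessment's own context fields.
template = ("What are the top 5 sub topics I should cover for "
            "{SectionName} in an assessment on {AssessmentName}")

prompt = template.format(
    SectionName="Communication",
    AssessmentName="Leadership Skills for Mid-Level Managers",
)

print(prompt)
# -> "What are the top 5 sub topics I should cover for Communication in an assessment on Leadership Skills for Mid-Level Managers"

Delete or mistype a merge string and that context never reaches the AI, which is why it’s worth keeping them intact even as you customize the wording around them.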
And finally, don’t forget to use your brain when you’re building your assessment
While AI Suggest is a powerful tool, it’s exactly that — a tool to help you build your assessment faster and get up and running. We expect it will be used most by people on a free trial who want to quickly try the system. It’s not a replacement for the expertise and skill that most Brilliant Assessments users bring. AI Interpretation is a different story, combining human expertise with the power of automation. But Suggest does have its limits.
These are useful features to help you get more from the platform, but don’t underestimate the know-how you bring. That insight can’t be replicated by AI.