AI Interpretation Feedback Prompt Master Guide

AI Interpretation is built for assessments that rely on rich, qualitative input like written responses, uploaded files, and complex scoring logic. It scales the time-consuming work of reading each submission and scoring it by hand by automating the delivery of feedback, summaries, and scores based on the analysis cues in your prompt.

But as any seasoned AI dabbler will know, the quality of the output depends a lot on the context you give the AI and how clearly you tell it what you need. So, how do you do that?

Your AI Interpretation Prompt Checklist

This guide covers the key things to include in your AI Interpretation feedback prompts to get the most useful results. As a starting point, every prompt should cover the following:

 

Instructions

In the simplest terms, what do you want AI Interpretation to achieve?

Subject

Who is the feedback for? How should the feedback address them?

Format Guides

How long should the feedback be? In what format?

Merge Strings

Every AI Interpretation prompt should include at least one merge string to pull in answers or other elements of the assessment. This context is what makes AI Interpretation really valuable.

(Cohorts Only) Split Types and Filters

For cohort interpretations that require multiple responses for analysis, a cohort split must be added. This can also be filtered by classifier.

 

Now let’s get into the details of how to approach these checklist items.

Ask prescriptive questions and give detailed instructions

Generic prompts like "Summarise the answer" will give you output, but it won’t be particularly helpful. The best prompts are specific about what you want the AI to focus on, what format the response should take, and how long it should be. This helps the AI deliver something much closer to what you actually need.

Rather than relying on the AI to guess what "good" looks like, spell it out. Be direct about what you're asking for. The more instructions you provide, the less time you will spend editing later.

 

Example Prompts

"In response to [question], summarise the following answer/s": [answer]. Use a paragraph format with clear headings and a maximum of 200 words."

"Highlight three strengths and one area for improvement based on the following answers to [question]: [answer]. Use short bullet points and keep the tone neutral and professional."

"List two things the respondent is doing well and one recommendation for improvement."

"Suggest one action the team could take to address the issue highlighted here: [answer]"

"Provide two coaching questions a manager could ask based on this response: [answer]."

 

Define your subject

Not all feedback should sound the same. A summary written for a compliance lead needs a different tone than one written for a new hire. You can include the audience type or intended recipient in your prompt to guide AI Interpretation.

 

Example Prompts

“Write this summary for a senior HR manager reviewing leadership capability. Keep it concise and business-focused."

“Provide feedback suitable for a junior team member, using supportive and accessible language."

“Format this as a set of coaching notes a manager could use during a 1:1."

 

This helps the AI stay in line with your tone of voice and ensures the feedback resonates well with your audience.

Specify the desired format and tone

The more you say about how you want the output to be presented, the easier it is to use. You can define tone, length, format, and even the order of information. Formatting guidance keeps your reports consistent and easier to review. Ask: How many points should AI Interpretation make? How much detail? What should the interpretation include or exclude?

Consider where this content is headed. If it’s going into a professional report, you might want a formal, concise tone. If it’s for coaching notes, a supportive voice might be more appropriate.

 

Example Prompts

"Write in bullet points. No more than five bullets per section."

"Use a paragraph format with bolded subheadings for each theme."

"Begin with a one-sentence summary, followed by a numbered list of insights."

 

Add merge strings (ideally more than one)

AI Interpretation won’t read the parts of a response it isn’t told to read, so be clear about where the AI should focus by using merge strings. Merge strings let you pull specific content from responses, scores, or reference data directly into the AI prompt. This is especially helpful when you want to personalise feedback, add context, or guide how the AI makes comparisons. They’re also super easy to build, without typing a single letter inside a curly bracket; the sketch after the example prompts below shows what a resolved merge string looks like.

 

Example Prompts

“Summarise the respondent’s input to [Answer Text]."

“Based on the feedback we give about this subject here, [Rating Text], how else could the respondent improve? Here are their scores: [Table of scores]."

“Use [Score] to highlight how the respondent performed in a given area."

“Use [RespondentName] to personalise the feedback." 
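To see what a merge string amounts to under the hood, here is a minimal sketch of template substitution in Python. The resolve_prompt helper and the field names are purely illustrative; in the product, merge strings are built and resolved for you.

```python
import re

# Illustrative only: [Field] placeholders are swapped for response data,
# the same idea as AI Interpretation's merge strings.
def resolve_prompt(template: str, fields: dict) -> str:
    """Replace each [Field] placeholder with the matching value."""
    def substitute(match: re.Match) -> str:
        key = match.group(1)
        return str(fields.get(key, match.group(0)))  # keep unknown fields as-is

    return re.sub(r"\[([^\]]+)\]", substitute, template)

prompt = resolve_prompt(
    "Use [Score] to highlight how [RespondentName] performed.",
    {"Score": 82, "RespondentName": "Alex"},
)
print(prompt)  # Use 82 to highlight how Alex performed.
```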

 

Use split types and filters to target responses in a cohort

If you are prompting a Cohort AI Interpretation with answer text, cohort splits let you attribute each response to the person who gave it. Without a split type, the cohort prompt will fail to function.

The splits available for Cohort AI Interpretation answer text are:

  • Respondent - pass the name of the respondent who entered each answer

  • Role - pass the role of the respondent who entered each answer

  • To make the responses anonymous, select no title in the Title rules

You can also apply filters at the Classifier level to refine the cohort grouping for your interpretation.

By adding a table, you can give the AI all of the cohort's scores at any level; refer to the help documentation on Tables and Heatmaps for all of the options. The sketch below shows roughly what a scores table looks like once merged into a prompt.
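For illustration only: the names, roles, and scores here are made up, and the fixed-width layout is just one plausible rendering. The Table merge string produces the real thing for you.

```python
# Made-up cohort data; the Table merge string supplies this in practice.
cohort = [
    {"respondent": "Alex",  "role": "Manager",  "score": 74},
    {"respondent": "Priya", "role": "Engineer", "score": 88},
    {"respondent": "Sam",   "role": "Engineer", "score": 61},
]

# Lay the scores out as a fixed-width text table.
header = f"{'Respondent':<12}{'Role':<10}{'Score':>5}"
rows = [f"{r['respondent']:<12}{r['role']:<10}{r['score']:>5}" for r in cohort]
table = "\n".join([header, *rows])

prompt = f"Summarise the strengths of this cohort based on their scores:\n{table}"
print(prompt)
```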

 

Bonus Tips

Prompt in the language you want a response in

AI Interpretation answers in the language your prompt is written in. That means if you ask for output in French, Spanish, or a combination of languages, it will respond accordingly. This is especially useful for organisations working across regions or supporting multiple languages in their assessments.

 

Example Prompts

“Provide the summary in French." (Because the prompt itself is in English, this will give the response in both English and French)

"Responda utilizando linguagem comercial formal." (This will give the response in formal Portuguese)

 

You can also use this for bilingual outputs or to test how different language versions of the same assessment might read.

Link to frameworks, models, or supporting material

If your assessment is based on a particular framework, methodology, or process (especially if it’s your own), include it in your prompt, either by name or as a URL. This gives AI Interpretation a foundation and a yardstick for generating feedback or recommendations.

Two ways you can do this: 

Name-drop known frameworks

If you're using a widely recognised model (e.g., GROW, Kirkpatrick, DISC, SWOT), simply reference it in your prompt. The AI will recognise it and use it to inform how it structures or interprets the content.

 

Example Prompts

“Score the written responses using the GROW model as a reference."

“Write feedback using the four levels of the Kirkpatrick Evaluation Model."

 

This helps generate content that aligns with the expectations of clients, teams, or industries already familiar with those models. Just make sure you cite your sources in your reports. Nobody likes plagiarism.

Link to your own frameworks

If you’ve developed a proprietary framework or use internal language that’s less well-known, include a link to a public resource. That could be a blog post, white paper, explainer page, or PDF. 

 

Example Prompts

"Provide recommendations based on our maturity model: https://yourcompany.com/assessment-framework."

 "Write improvement suggestions using the principles outlined here: https://yourblog.com/our-approach-to-team-development."

 

AI Interpretation can use that material as context, allowing the feedback to reflect your unique language, structure, or philosophy.

This is especially helpful when you're:

  • Delivering feedback to clients or participants who expect it to reflect your brand or process

  • Generating coaching actions or next steps that tie directly to your programs

  • Benchmarking responses against a clear model of success

AI Interpretation works with the input you provide, so the more context you give it, the more closely the feedback will align with your expectations.

And lastly, test, test, test.

Don’t assume you’ll get the perfect response from AI Interpretation on the first try. It’s a tool, not an employee with 10 years of experience. You can run AI Interpretation across multiple responses and refine the prompt until it fits your needs. If you don’t have any responses yet, create a few example responses so you can test different scenarios.
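If you like to iterate outside the product first, a small loop like this can pressure-test a prompt template against sample answers. It assumes a generic call_model function wired to whatever LLM endpoint you have access to; AI Interpretation's own runner isn't exposed as code, so treat this as a rough stand-in.

```python
# Rough test harness; call_model is a placeholder for your own LLM client.
SAMPLE_ANSWERS = [
    "I delegate tasks but rarely follow up on progress.",
    "I hold weekly 1:1s and document action items.",
]

TEMPLATE = (
    "Highlight one strength and one area for improvement in this answer, "
    "as two short bullet points: {answer}"
)

def call_model(prompt: str) -> str:
    raise NotImplementedError("Wire this to your own LLM endpoint.")

for answer in SAMPLE_ANSWERS:
    prompt = TEMPLATE.format(answer=answer)
    print(prompt)
    # print(call_model(prompt))  # uncomment once call_model is wired up
```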

If the tone is too vague, the insights feel shallow, or it focuses on the wrong area, go back to the prompt. You can also troubleshoot the output by examining the resolved prompt and identifying the context it used.

The more you test, the more confident you’ll be in the final result, especially if you’re generating dozens or hundreds of reports. 

Sophie Oxley

Founder of Sophie SaaS Marketing, the B2B SaaS marketing agency. AI enthusiast, slightly mad marketer.

https://thisissophie.com