A practitioner perspective on AI in employee surveys.
Once the survey closes, there are two pieces of work to do. The first is understanding - making sense of what employees are telling you in their responses. The second is taking action - having the conversations about the data and planning what actions should follow.
AI can help with both.
AI helps with the understanding work in three ways.
Every employee survey that includes open-text questions collects qualitative data that adds depth to the quantitative data. Employees might be citing a specific decision, describing an incident, or explaining what the score alone could not convey.
That data has historically been underused, for a simple reason: reading and synthesising thousands of comments is impractical at scale.
Organisations running surveys with open-text questions now have a reason to take those questions seriously in a way that was previously difficult to sustain. A well-designed comment question — one that asks something specific rather than inviting general feedback — combined with AI-assisted analysis at the back end, can surface the texture of employee experience in a way that scores alone cannot.
The same underlying capability can be applied at the data collection stage. Rather than asking employees to respond to a fixed set of open-text questions, the survey can extend into a dialogue with each respondent — probing, clarifying, surfacing insights that a static question cannot reach.
This is the territory that post-survey focus groups have traditionally occupied — getting beyond the scores to understand what is sitting underneath them. Focus groups do this well in principle, but in practice they are expensive to convene, limited to a small sample of the workforce, and reliant on the people who choose to attend being broadly representative of the people whose perspectives matter most. AI-moderated conversational follow-up offers a way of doing similar work at the scale of the survey itself — every respondent gets the chance to expand on what they said, on their own time, in their own words.
The manager or HR user asks the AI what a particular set of results means, and the AI explains themes, points at patterns, and suggests what might warrant attention. This makes the reporting outputs more accessible, particularly for users who are not survey specialists. It is the same comment-analysis capability applied at a different point in the process — helping the user read the data, rather than processing the data itself.
AI does not make surveys more powerful. It makes the data they already collect more usable.
AI helps with taking action in four ways. The data has been understood; the question now is what the organisation will do about it.
Toolkit content triggered by score thresholds, or AI-generated suggestions about what the manager should do given their team's results. The output is a recommendation; the manager decides whether to act on it. AI here is doing the work of matching findings to actions — drawing on a library of practices, or generating suggestions tailored to the specific result.
Material designed to help a manager run a meaningful discussion with their team about what the survey found. The output is not a recommendation for the manager to act on, but a structure for a conversation the manager has with the team. That requires more than presenting the data in prose form. It requires pulling out what is distinctive in this team's results, framing those findings in a way that is usable in a discussion, and generating the questions that open a conversation rather than the statements that close one.
AI generating the messages, talking points, and follow-up communications the manager needs to share what they have decided. The output is a draft the manager edits and sends — to the team, to leadership, to other stakeholders. AI does this well, and it removes friction at the point where managers often stall. The watch-out is that AI-drafted messages need editing to sound like the manager. Teams notice when a message doesn't carry the voice they recognise.
The manager chats to the AI about what they are seeing in the data and how they want to approach the team. The AI prompts, probes, and helps the manager work through their interpretation and intent. The output gives the manager a better-formed sense of what the data means for the team and what to do about it.
The manager's conversations with the AI can be stored and revisited, which changes what the dialogue can do over time. The manager's interpretation of one survey, what they decided to surface, what they thought would change — these are exactly the things worth coming back to when the next survey lands. Did the team confirm the manager's reading, or contradict it? An AI dialogue that draws on the previous conversation could prompt the manager to look back at their own thinking and what came of it.
AI's contribution to understanding is concrete and largely uncontroversial. Qualitative data that was previously impractical to read at scale can now be synthesised in minutes. Conversational data collection produces richer responses than static questions. Chat layers over the dashboard make data more navigable. These are not small gains, and they apply across most organisations running surveys.
The taking-action picture is more varied. Each of the four capabilities does different work, and each can support a manager in a different part of the work that has to follow the survey. None of them remove the underlying responsibility — having the conversation, making the call, owning the change.
For most survey programmes in our experience, the work that matters most sits in the manager-team conversation. This is often the failure point. Not because managers lack a recommendation, a draft message, or an interpretation — but because the conversation between the manager and the team does not happen well, or does not happen at all. That conversation is the moment where survey findings either become real or fade quietly. What good looks like in that conversation, and what gets in the way, is the subject of a separate article in this series.
If you are evaluating AI capabilities in your survey platform or considering how to strengthen your follow-up process, three questions are worth asking.
First, are you using what you already collect? Open-text data from existing surveys is likely richer than your current process gives it credit for. AI-assisted analysis of comments is widely available. If that data is currently being summarised manually or ignored, that is the first gap to close.
Second, what does your manager layer actually look like? Not the toolkit you provide, but what managers actually do with results in practice. In our experience, the answer is often 'less than intended' — not because managers are disengaged, but because what they are given does not make it easy to run the conversation they need to have with their team. Understanding that gap specifically is more useful than adding another recommendation to the dashboard.
Third, how complete is the record of what happened? Most organisations capture some version of this: leaders write up what they did after the last survey, often communicated as 'you said, we did' before the next survey is launched. What gets captured, though, is the official narrative of what was done, not which conversations actually happened, what was tried that did not work, what was quietly abandoned, or whether teams felt the response was connected to what they had said. The feedback loop is not missing. It is incomplete in a specific way: what gets captured is the version designed to be shared, not the version that helps the next decision.
AI makes some of this easier. It does not make any of it automatic. The work that matters most in most survey programmes sits in the manager-team conversation — and that is also the work AI is least able to do for you. The conversation between the manager and the team, the moment where survey findings either become real or fade quietly, still has to happen. That responsibility does not change.
If you want to talk about how your survey programme can be designed to make better use of AI tools, we would be glad to help. Contact us for a no-obligation conversation.
Let’s start a conversation about how employee surveys can help you develop a workplace where people and performance grow together.