Why Conversation Produces Better Data
Traditional 360 surveys ask every rater the same fixed questions regardless of what they say. A rating of 4 out of 5 on “communicates effectively” tells you almost nothing useful — there is no context, no example, no pattern.
The issue is not the questions. It is the format. Surveys cannot probe. A conversation can.
Interval 360 replaces the static survey with a structured conversation guided by AI.
The static survey:
- Fixed questions for every rater, with no adaptation based on responses
- Rating scales produce scores, not insight: "4 out of 5" says nothing specific
- Open text fields produce inconsistent depth: some raters write one word, others write paragraphs
- No mechanism to ask for an example when an answer is too general
- 30–50 items create fatigue, so raters rush through later questions
- Output is shaped by what raters choose to volunteer, not by what would be most useful
The structured conversation:
- Questions adapt based on what the rater says: the AI probes where it matters
- The conversation produces specific examples, context, and patterns
- Follow-up logic draws out depth consistently: every rater gets the same quality of probing
- When a response is too general, the AI asks for a specific example
- 5–8 focused exchanges mean lower burden and higher quality
- The AI guides toward the information that makes the output useful
The Follow-Up Logic
The AI guides the conversation, probes for specificity, and synthesizes responses into a structured report. It does not interpret the meaning of feedback or make talent decisions.
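As a rough illustration of where that probing branch sits, here is a minimal sketch of an adaptive interview loop. The helper names (ask_rater, is_too_general), the keyword heuristic, and the exchange cap are assumptions made for this example; in the real system the specificity judgment and the probe wording would come from the language model itself, not a hard-coded check.

```python
# Minimal sketch of an adaptive follow-up loop (illustrative only).
# ask_rater is any callable that sends a question and returns the rater's
# answer; is_too_general is a crude stand-in for the model's own judgment.

MAX_EXCHANGES = 8  # assumption based on the "5-8 focused exchanges" target


def is_too_general(answer: str) -> bool:
    """Heuristic placeholder: treat short answers with no concrete markers
    as too general. A real system would ask the model, not match keywords."""
    markers = ("for example", "when", "during", "last", "after")
    return len(answer.split()) < 15 and not any(m in answer.lower() for m in markers)


def run_interview(ask_rater, starting_questions):
    """Walk the starting questions, probing once for a specific example
    whenever an answer is too general, within the exchange budget."""
    transcript = []
    exchanges = 0
    for question in starting_questions:
        if exchanges >= MAX_EXCHANGES:
            break
        answer = ask_rater(question)
        transcript.append((question, answer))
        exchanges += 1
        if is_too_general(answer) and exchanges < MAX_EXCHANGES:
            probe = "Can you describe a specific situation where you saw that?"
            transcript.append((probe, ask_rater(probe)))
            exchanges += 1
    return transcript
```

With a stub such as `ask_rater = lambda q: input(q)`, the loop runs as a plain console interview; the point is only that the follow-up decision happens per answer, not per survey.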
The AI does not make promotion or succession recommendations. It does not filter or rank feedback providers. It does not have visibility into the leader's HR record or performance history.
The AI is designed to avoid surfacing identifying details in synthesis. Individual responses are not attributed to specific raters in the output. Synthesis is built to reflect patterns across all feedback, not individual voices.
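One way to picture that constraint: synthesis can operate on theme-level groupings from which rater identity has already been dropped, and only report themes that several raters raised. The data shapes, field names, and the three-rater threshold below are assumptions for illustration, not a description of the actual pipeline.

```python
# Sketch of synthesis that reports patterns rather than individual voices.
# Shapes and the min_raters threshold are illustrative assumptions.

from collections import Counter
from typing import NamedTuple


class Response(NamedTuple):
    rater_id: str      # known to the system, never passed into the report
    theme: str         # e.g. "delegation", "communication"
    observation: str   # the specific example the rater gave


def synthesize(responses: list[Response], min_raters: int = 3) -> dict[str, list[str]]:
    """Group observations by theme and surface only themes raised by
    multiple raters, so no single voice is identifiable in the output."""
    theme_counts = Counter(r.theme for r in responses)
    report: dict[str, list[str]] = {}
    for r in responses:
        if theme_counts[r.theme] >= min_raters:
            # rater_id is deliberately dropped before anything is reported
            report.setdefault(r.theme, []).append(r.observation)
    return report
```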
The starting questions are structured to focus on observable behavior and specific examples — not personal attributes. The follow-up logic is designed to draw out patterns across contexts, not to reinforce initial impressions.
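To make that distinction concrete, here is what behavior-focused starting questions and context-shifting follow-ups can look like. The wording is invented for illustration and is not the product's actual question set.

```python
# Hypothetical question templates, invented for illustration.
# Starting questions ask for observable behavior and concrete situations,
# not judgments about personality or potential.
STARTING_QUESTIONS = [
    "Describe a recent situation where you worked closely with this leader. "
    "What did they actually do?",
    "Think of a time this leader handled a disagreement. "
    "What specifically did they say or do?",
]

# Follow-ups shift the context rather than repeating the same angle, so the
# conversation surfaces patterns across situations instead of reinforcing
# the rater's first impression.
CONTEXT_SHIFT_PROBES = [
    "Have you seen the same behavior in a different setting, "
    "for example with peers rather than direct reports?",
    "Was there a situation where the opposite happened?",
]
```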