Enterprises need to know precisely what their systems detect, and that definition should stay constant over time. Writing a definition precise enough to settle every hard case has long been impractical because human annotators can't hold a document that detailed in working memory. In our research paper, Single-Source Safety Definitions, we replace the human interpreter with AI and show that LLMs can hold, apply, and maintain specifications far longer and more precise than any annotator can, making the definition itself the single source of truth for classification, labeling, retraining, and customer-facing explanations. For our Cisco AI Defense product portfolio, we are moving our full safety taxonomy to this AI-first model. We also extend this approach beyond safety classifications, as shown in Defining Model Provenance: A Constitution for AI Supply Chain Security and Safety.
Cisco's Integrated AI Safety and Security Framework organizes the threats enterprises face when deploying artificial intelligence (AI): harmful content, goal hijacking, data privacy violations, action-space exploits, and persistence attacks. Each top-level threat breaks down into techniques, and every technique needs a definition precise enough that a classifier, an annotator, a customer, and a compliance reviewer reach the same decision on the same input. Existing taxonomies, ours among them, have not yet produced such a definition for a large share of these techniques (harassment, hate speech, jailbreak, and others), and the honest description of how they get decided in practice comes from Justice Potter Stewart's concurrence in Jacobellis v. Ohio, 378 U.S. 184 (1964): I know it when I see it. A judge can rule one case at a time, but a guardrail flagging thousands of conversations an hour can't debate every borderline case or wait for social consensus. Without a written specification, we can't measure performance, explain a flag to a customer, or guarantee that the same case is decided the same way from one month to the next.
Annotation science recognizes two paths (Röttger et al., 2022). The descriptive path accepts that reasonable people disagree and treats the variation as signal, which scales with humans but produces no stable specification. The prescriptive path writes rules detailed enough that different readers converge, but until recently it was impractical: adjudicating the long tail of edge cases outruns any team's capacity, and the resulting document overflows what an annotator can hold in working memory. Frontier large language models (LLMs) change the economics by re-reading a 300-line specification on every classification and scaling adjudication to production volumes, and when two models from different vendors disagree under the same specification, the disagreement locates the sentence that is still ambiguous and lets us resolve it with a targeted patch rather than an open debate.
A single source of truth, driven end to end by AI
Anthropic's Constitutional AI showed that a natural-language document can work as an executable specification, and their Constitutional Classifiers extended the idea to safety filtering by distilling a constitution into synthetic training data for a fine-tuned classifier. We extend the term to a per-technique operational specification: one 300+ line document for every technique in the Cisco AI Safety and Security Framework, with required elements, a decision flowchart, boundary rulings against adjacent techniques, worked examples, and accumulated edge-case decisions. We treat it as the single source of truth that every downstream process adjudicates against, including runtime classification (the LLM reads the full document on every call), synthetic-data generation for retraining, labeling guidelines, customer-facing documentation, and compliance mappings.
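A minimal sketch of how such a per-technique constitution and the runtime classification call could be organized. All names here (`Constitution`, `build_classification_prompt`, the section headings) are hypothetical illustrations, not Cisco's actual schema; the point is only that the full document is serialized and inlined on every call.

```python
from dataclasses import dataclass, field

@dataclass
class Constitution:
    technique: str                    # e.g. "harassment"
    required_elements: list[str]      # elements a positive case must contain
    decision_flowchart: str           # ordered yes/no questions
    boundary_rulings: dict[str, str]  # adjacent technique -> distinguishing rule
    worked_examples: list[str]
    edge_case_decisions: list[str] = field(default_factory=list)

    def render(self) -> str:
        """Serialize the whole document; the LLM re-reads this on every call."""
        parts = [f"# Technique: {self.technique}",
                 "## Required elements", *self.required_elements,
                 "## Decision flowchart", self.decision_flowchart,
                 "## Boundary rulings",
                 *[f"- vs {k}: {v}" for k, v in self.boundary_rulings.items()],
                 "## Worked examples", *self.worked_examples,
                 "## Edge-case decisions", *self.edge_case_decisions]
        return "\n".join(parts)

def build_classification_prompt(constitution: Constitution, conversation: str) -> str:
    # The full constitution is inlined, so the model judges against the
    # current text rather than a remembered summary of it.
    return (constitution.render()
            + "\n\nConversation:\n" + conversation
            + "\n\nApply the flowchart and answer: intent? content?")
```

Because the document itself is the interface, a patch accepted by the expert changes every downstream consumer of `render()` at once.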
In our workflow the human role reduces to one question, what should the program mean, answered by a subject-matter expert who sets the intent and scope and then delegates everything else to AI. AI drafts the constitution from the taxonomy source, labels production conversations, diagnoses where frontier models disagree, proposes patches to the responsible sections, and audits across constitutions for contradictions and gaps. The expert reviews patches and accepts, modifies, or rejects them, without hand-labeling conversations or holding the full document in memory.
We also introduce a dual-axis formulation that earlier safety classifiers don't produce. Intent captures whether the user tried to cause harm through the program. Content captures whether harmful material for the program appeared in the conversation. Intent without content means the model was probed and refused. Content without intent exposes model misbehavior on a benign request. Both positive marks a guardrail failure, and both negative covers clean conversations, including discussions about a topic. We score both axes over the full conversation, since multi-turn attacks build intent gradually.
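The four quadrants described above can be made concrete in a few lines. This is a sketch of the mapping only; the function name and outcome labels are illustrative, not the product's actual output schema.

```python
def dual_axis_outcome(intent: bool, content: bool) -> str:
    """Map the two per-conversation axes to the four quadrants."""
    if intent and content:
        return "guardrail failure"   # harm sought and harmful material produced
    if intent:
        return "probed and refused"  # the model held the line
    if content:
        return "model misbehavior"   # harmful output on a benign request
    return "clean"                   # includes mere discussion about a topic
```

Scoring both booleans over the whole conversation, rather than turn by turn, is what lets the intent axis catch attacks that escalate gradually.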
Are LLMs actually better evaluators?
We evaluated three techniques (Harassment, Non-Violent Crime, Hate Speech) using six LLMs from three vendors. On WildChat conversations, two frontier LLMs reading a paragraph-level definition disagree on as many as 66 conversations per 1,000; under the constitution, that falls below 3 per 1,000, a reduction of up to 57x. On HarmBench, three frontier LLMs reading a constitution reach unanimous intent labels more often than three humans reading the same document.
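For clarity, the headline metric can be computed as follows. This is a sketch of the obvious definition, assuming one label per rater per case; it is not code from the paper.

```python
def non_unanimous_per_1000(labels_by_case: list[list[str]]) -> float:
    """Rate of cases where the raters (human or LLM) did not all agree,
    expressed per 1,000 cases."""
    disagreements = sum(1 for labels in labels_by_case if len(set(labels)) > 1)
    return 1000.0 * disagreements / len(labels_by_case)
```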
Non-unanimous cases per 1,000 conversations on HarmBench (lower is better). LLM raters: GPT-5.4, Opus 4.6, Gemini 3.1, each reading the same constitution the humans received.
We traced the human failures to two causes. A 300+ line document exceeds working memory, so annotators compress the written rules into remembered heuristics and fall back on intuition. They also collapse multi-technique taxonomies into single-label triage, filing a conversation under one sibling technique instead of evaluating each constitution independently. LLMs avoid both failures by re-reading the full document on every call and judging each technique in isolation. Their remaining failures misapply decision logic in ways we can trace to specific sections, while human failures silently skip the rules. We expect the gap to widen: constitutions grow as new edge cases accumulate, human working memory stays fixed, and model instruction following, context length, and reasoning all keep improving.
Residual disagreement between frontier models stops being noise to vote away. Each remaining case points to a specific sentence that is ambiguous or incomplete, and our refinement loop converts that sentence into an explicit ruling.
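The collection step of that refinement loop could be sketched as below. The rater callables and the queue shape are assumptions for illustration; in practice each queued case would be handed to the diagnosis and patch-proposal stages described earlier.

```python
def residual_disagreements(conversations, rate_a, rate_b):
    """Collect cases where two vendors' models diverge under the same
    specification. Each one localizes an ambiguous or incomplete rule
    and becomes a candidate for an explicit edge-case ruling."""
    queue = []
    for convo in conversations:
        verdict_a, verdict_b = rate_a(convo), rate_b(convo)
        if verdict_a != verdict_b:
            queue.append((convo, verdict_a, verdict_b))
    return queue
```

Note that agreement cases are discarded rather than averaged: under this workflow only the divergent pair carries information about where the document still underspecifies.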
What this means for Cisco AI Defense customers
Customers care less about an evaluation number than about seeing why the system made a given decision. Every flag traces to a specific rule in a readable document: the classifier cites the rule it applied, the elements it found, and the boundary notes it checked, and when we don't flag, the same document explains why the case fell outside the line. In the near future, customers will be able to query this specification directly through an AI assistant, without needing to be experts in a category, and get a plain-language answer grounded in the text. The same document drives retraining, labeling, product, legal, and go-to-market, so a wording change propagates everywhere from one source. AI-first is not a slogan but a concrete shift in how we build these systems: faster, simpler, and more accurate, internally and for our customers.
Read the full research paper: Improving Labeling Consistency with Detailed Constitutional Definitions and AI-Driven Evaluation.