Guardrails policies check text (e.g. model output or user messages) for safety, PII, moderation, and other rules. Use the SDK method guard() to evaluate a guardrails policy before sending content to users or downstream systems. For a full explanation of guardrail types (PII, moderation, jailbreak, NSFW, URL filter, etc.) and when to use each, see Policies: Guardrails.


guard(policyKeyOrTag, input)

Evaluates a guardrails policy on a string. input is the text to check (e.g. an LLM response or a user message).

Signature

async guard(
  policyKeyOrTag: string,
  input: string
): Promise<PolicyEvaluationResult>
  • policyKeyOrTag (string, required): Policy key (e.g. 'content-guardrail') or tag (e.g. '#safety').
  • input (string, required): Text to check (e.g. model output).
Returns: Promise<PolicyEvaluationResult>. See Result shape below. Throws: InvalidPolicyKeyError, InvalidInputError, APIRequestError, NetworkError, TimeoutError.

Policy key or tag

  • Policy key — A single guardrails policy, e.g. 'content-guardrail'.
  • Tag — All policies with that tag are evaluated. Use the tag name with a # prefix, e.g. '#safety'.
When using a tag, the strictest result wins: BLOCK > ESCALATE > ALLOW. If any policy blocks, the decision is block; otherwise, if any policy escalates, the decision is escalate; otherwise, allow.
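The precedence rule can be sketched as a small reduction over the per-policy actions (this is an illustrative helper, not part of the SDK):

```typescript
type Action = 'ALLOW' | 'ESCALATE' | 'BLOCK';

// Higher rank = stricter: BLOCK beats ESCALATE beats ALLOW.
const rank: Record<Action, number> = { ALLOW: 0, ESCALATE: 1, BLOCK: 2 };

// Combine the decisions of every policy under a tag into one decision.
function combine(actions: Action[]): Action {
  return actions.reduce<Action>(
    (strictest, a) => (rank[a] > rank[strictest] ? a : strictest),
    'ALLOW'
  );
}

console.log(combine(['ALLOW', 'ESCALATE', 'ALLOW'])); // ESCALATE
console.log(combine(['ALLOW', 'BLOCK', 'ESCALATE'])); // BLOCK
```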

Examples

Basic usage

import { Limits } from '@limits/js';

const limits = new Limits({ apiKey: process.env.LIMITS_API_KEY! });

const modelOutput = "Here is the user's email: [email protected] and phone 555-123-4567.";
const decision = await limits.guard('content-guardrail', modelOutput);

if (decision.isBlocked) {
  console.error('Blocked:', decision.data.reason);
} else if (decision.isEscalated) {
  console.warn('Needs review:', decision.data.reason);
} else {
  // Allowed: safe to send to the user
  console.log('Allowed:', decision.data.reason);
}

By tag

const decision = await limits.guard('#safety', modelOutput);

In middleware (Node/Express)

app.post('/chat', async (req, res) => {
  const userMessage = req.body.message;
  const llmResponse = await callLLM(userMessage);

  const decision = await limits.guard('content-guardrail', llmResponse);
  if (decision.isBlocked) {
    return res.status(400).json({ error: 'Content blocked', reason: decision.data.reason });
  }
  if (decision.isEscalated) {
    return res.status(202).json({ message: 'Pending review', reason: decision.data.reason });
  }
  res.json({ response: llmResponse });
});

Result shape

The API returns a response with meta and result; the SDK normalizes this to the PolicyEvaluationResult shape below. guard() returns the same PolicyEvaluationResult as check() and evaluate():
{
  data: { action: 'ALLOW' | 'BLOCK' | 'ESCALATE'; reason: string };
  isAllowed: boolean;
  isBlocked: boolean;
  isEscalated: boolean;
  errors?: string[];  // validation errors when present
}
  • data.action ('ALLOW' | 'BLOCK' | 'ESCALATE'): The decision.
  • data.reason (string): Human-readable reason.
  • isAllowed (boolean): true when data.action === 'ALLOW'.
  • isBlocked (boolean): true when data.action === 'BLOCK'.
  • isEscalated (boolean): true when data.action === 'ESCALATE'.
  • errors (string[] | undefined): Validation errors from the API when present.
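Because data.action is a closed union, a result can be handled exhaustively with a switch. The sketch below redeclares the result shape locally and uses a hypothetical describeDecision helper; it is one way to consume the result, not SDK code:

```typescript
// Local copy of the result shape (matches PolicyEvaluationResult above).
type PolicyEvaluationResult = {
  data: { action: 'ALLOW' | 'BLOCK' | 'ESCALATE'; reason: string };
  isAllowed: boolean;
  isBlocked: boolean;
  isEscalated: boolean;
  errors?: string[];
};

// Exhaustive handling: if a new action were ever added to the union,
// the compiler would flag the missing case here.
function describeDecision(decision: PolicyEvaluationResult): string {
  switch (decision.data.action) {
    case 'BLOCK':
      return `blocked: ${decision.data.reason}`;
    case 'ESCALATE':
      return `needs review: ${decision.data.reason}`;
    case 'ALLOW':
      return `allowed: ${decision.data.reason}`;
  }
}
```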

Error handling

  • InvalidAPIKeyError: API key missing or doesn't start with sk_.
  • InvalidPolicyKeyError: Policy key/tag empty or invalid.
  • InvalidInputError: Input is an empty string.
  • APIRequestError: API returned 4xx/5xx (see error.statusCode, error.response).
  • NetworkError / TimeoutError: Request failed or timed out.
import { Limits, APIRequestError } from '@limits/js';

try {
  const decision = await limits.guard('content-guardrail', text);
  // use decision
} catch (err) {
  if (err instanceof APIRequestError) {
    console.error('API error:', err.statusCode, err.message);
    if (err.statusCode === 404) console.error('Policy not found');
    return;
  }
  throw err;
}
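A related design choice is what to do when guard() itself fails at runtime: failing open sends unchecked content through, while failing closed blocks it. Below is a minimal fail-closed sketch; guardOrBlock is a hypothetical helper that takes the guard call as a function so the policy lookup stays with the SDK (e.g. pass (text) => limits.guard('content-guardrail', text)):

```typescript
type Decision = {
  data: { action: 'ALLOW' | 'BLOCK' | 'ESCALATE'; reason: string };
  isAllowed: boolean;
  isBlocked: boolean;
  isEscalated: boolean;
};

// Fail closed: if the guard call rejects (network error, timeout, API
// error), return a synthetic BLOCK decision instead of letting
// unchecked text through to the user.
async function guardOrBlock(
  guardFn: (text: string) => Promise<Decision>,
  text: string
): Promise<Decision> {
  try {
    return await guardFn(text);
  } catch (err) {
    return {
      data: { action: 'BLOCK', reason: `guard failed: ${(err as Error).message}` },
      isAllowed: false,
      isBlocked: true,
      isEscalated: false,
    };
  }
}
```

Whether to fail open or closed depends on the application; for safety-critical output, failing closed is usually the safer default.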

Next steps