
Operation Complexity Controls

GraphQL gives clients a lot of flexibility to shape responses, but that flexibility can also introduce risk. Clients can request deeply nested fields or large volumes of data in a single operation. Without controls, these operations can slow down your server or be exploited for denial-of-service attacks.

This guide explains how to measure and limit operation complexity in GraphQL.js using static analysis. You’ll learn how to estimate the cost of an operation before execution and reject it if it exceeds a safe limit.

ℹ️

In production, we recommend using trusted documents rather than analyzing arbitrary documents at runtime. Complexity analysis can still be useful at build time to catch expensive operations before they’re deployed.

Why complexity control matters

GraphQL lets clients choose exactly what data they want. That flexibility is powerful, but it also makes it hard to predict the runtime cost of a query just by looking at the schema.

Without safeguards, clients could:

  • Request deeply nested object relationships
  • Use nested fragments to multiply field resolution
  • Exploit pagination arguments to retrieve excessive data

Certain field types (e.g., lists, interfaces, unions) can also significantly increase cost by multiplying the number of values returned or resolved.

Complexity controls help prevent these issues. They allow you to:

  • Protect your backend from denial-of-service attacks or accidental load
  • Enforce cost-based usage limits between clients or environments
  • Detect expensive queries early in development
  • Add an additional layer of protection alongside authentication, depth limits, and validation

For more information, see Security best practices.

Estimating operation cost

To measure a query’s complexity, you typically:

  1. Parse the incoming operation into a GraphQL document.
  2. Walk the query’s Abstract Syntax Tree (AST), which represents its structure.
  3. Assign a cost to each field, often using static heuristics or metadata.
  4. Reject or log the operation if it exceeds a maximum allowed complexity.

You can do this using custom middleware or validation rules that run before execution. No resolvers are called unless the operation passes these checks.
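
To make these steps concrete, here's a minimal sketch that implements them by hand with graphql's visit function, counting every field as one point. The MAX_COMPLEXITY value is a hypothetical limit, and unlike the tools discussed below, this naive walker doesn't weight fields or expand fragments:

import { parse, visit } from 'graphql';

const MAX_COMPLEXITY = 100; // hypothetical limit for this sketch

// Step 1: parse the incoming operation into a document
const document = parse(`
  query {
    users {
      id
      posts {
        title
      }
    }
  }
`);

// Steps 2 and 3: walk the AST and assign a flat cost of 1 to each field
let cost = 0;
visit(document, {
  Field() {
    cost += 1;
  },
});

// Step 4: reject the operation if it exceeds the limit
if (cost > MAX_COMPLEXITY) {
  throw new Error(`Operation too complex: ${cost}`);
}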

ℹ️

Fragment cycles or deep nesting can cause some complexity analyzers to perform poorly or get stuck. Always run complexity analysis after validation unless your analyzer explicitly handles cycles safely.

Simple complexity calculation

Several community-maintained tools can measure operation complexity. The examples in this guide use graphql-query-complexity, but we recommend choosing the approach that best fits your project.

Here’s a basic example using its simpleEstimator, which assigns a flat cost to every field:

import { parse } from 'graphql';
import { getComplexity, simpleEstimator } from 'graphql-query-complexity';
import { schema } from './schema.js';
 
const query = `
  query {
    users {
      id
      name
      posts {
        id
        title
      }
    }
  }
`;
 
const complexity = getComplexity({
  schema,
  query: parse(query),
  estimators: [simpleEstimator({ defaultComplexity: 1 })],
  variables: {},
});
 
if (complexity > 100) {
  throw new Error(`Query is too complex: ${complexity}`);
}
 
console.log(`Query complexity: ${complexity}`);

In this example, every field costs 1 point, so the total complexity equals the number of fields requested, with fragments expanded (the query above scores 6). Because the complexity is calculated before execution begins, you can reject costly operations early.

Custom cost estimators

Some fields are more expensive than others. For example, a paginated list might be more costly than a scalar field. You can define per-field costs using fieldExtensionsEstimator, a feature supported by some complexity tools.

This estimator reads cost metadata from the field’s extensions.complexity function in your schema. For example:

import { GraphQLObjectType, GraphQLList, GraphQLInt } from 'graphql';
import { PostType } from './post-type.js';
 
const UserType = new GraphQLObjectType({
  name: 'User',
  fields: {
    posts: {
      type: new GraphQLList(PostType),
      args: {
        limit: { type: GraphQLInt },
      },
      extensions: {
        complexity: ({ args, childComplexity }) => {
          const limit = args.limit ?? 10;
          return childComplexity * limit;
        },
      },
    },
  },
});

In this example, the cost of posts depends on the number of items requested (limit) and the complexity of each child field.
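
For instance, if a client requests posts(limit: 5) and selects id and title, the child complexity is 2, so the posts field contributes 2 × 5 = 10 points.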

ℹ️

Most validation steps don’t have access to variable values. If your complexity calculation depends on variables (like limit), make sure it runs after validation, not as part of it.

To evaluate the cost before execution, you can combine estimators like this:

import { parse } from 'graphql';
import {
  getComplexity,
  simpleEstimator,
  fieldExtensionsEstimator,
} from 'graphql-query-complexity';
import { schema } from './schema.js';
 
const query = `
  query {
    users {
      id
      posts(limit: 5) {
        id
        title
      }
    }
  }
`;
 
const document = parse(query);
 
const complexity = getComplexity({
  schema,
  query: document,
  variables: {},
  estimators: [
    fieldExtensionsEstimator(),
    simpleEstimator({ defaultComplexity: 1 }),
  ],
});
 
console.log(`Query complexity: ${complexity}`);

Estimators are evaluated in order. The first one to return a numeric value is used for a given field. This lets you define detailed logic for specific fields and fall back to a default cost elsewhere.
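
Because this combined check runs between validation and execution rather than inside a validation rule, variable values are available to the estimators. Here's a minimal sketch of that wiring, assuming schema, document, and variableValues are already in scope:

import { validate, execute, specifiedRules } from 'graphql';
import {
  getComplexity,
  simpleEstimator,
  fieldExtensionsEstimator,
} from 'graphql-query-complexity';

// Validate first so the analyzer only sees well-formed documents
const validationErrors = validate(schema, document, specifiedRules);
if (validationErrors.length > 0) {
  throw validationErrors[0];
}

// Estimate cost after validation, when variable values are available
const complexity = getComplexity({
  schema,
  query: document,
  variables: variableValues,
  estimators: [
    fieldExtensionsEstimator(),
    simpleEstimator({ defaultComplexity: 1 }),
  ],
});

if (complexity > 100) {
  throw new Error(`Query is too complex: ${complexity}`);
}

// Execute only after the operation passes both checks
const result = await execute({ schema, document, variableValues });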

Enforcing limits in your server

Some tools allow you to enforce complexity limits during validation by adding a rule to your GraphQL.js server. For example, graphql-query-complexity provides a createComplexityRule helper:

import { graphql, specifiedRules } from 'graphql';
import { createComplexityRule, simpleEstimator } from 'graphql-query-complexity';
import { schema } from './schema.js';
 
const source = `
  query {
    users {
      id
      posts {
        title
      }
    }
  }
`;
 
const result = await graphql({
  schema,
  source,
  validationRules: [
    ...specifiedRules,
    createComplexityRule({
      estimators: [simpleEstimator({ defaultComplexity: 1 })],
      maximumComplexity: 50,
      onComplete: (complexity) => {
        console.log('Query complexity:', complexity);
      },
    }),
  ],
});
 
console.log(result);

ℹ️

Only use complexity rules in validation if you’re sure the analysis is cycle-safe. Otherwise, run complexity checks after validation and before execution.

Complexity in trusted environments

In environments that use persisted or precompiled operations, complexity analysis is still useful, just at a different stage. You can run it at build time, as sketched below, to:

  • Warn engineers about expensive operations during development
  • Track changes to operation cost across schema changes
  • Define internal usage budgets by team, client, or role
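
Here's a minimal sketch of such a build-time check, assuming a hypothetical persisted-operations.json file that maps operation IDs to query strings:

import { readFileSync } from 'node:fs';
import { parse } from 'graphql';
import { getComplexity, simpleEstimator } from 'graphql-query-complexity';
import { schema } from './schema.js';

const BUDGET = 100; // hypothetical per-operation budget

// Hypothetical file produced by your persisted-operations tooling
const operations = JSON.parse(
  readFileSync('./persisted-operations.json', 'utf8'),
);

for (const [id, source] of Object.entries(operations)) {
  const complexity = getComplexity({
    schema,
    query: parse(source),
    estimators: [simpleEstimator({ defaultComplexity: 1 })],
    variables: {},
  });

  if (complexity > BUDGET) {
    console.warn(`Operation ${id} exceeds budget: ${complexity} > ${BUDGET}`);
    process.exitCode = 1; // fail the build
  }
}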

Best practices

  • Only accept trusted documents in production when possible.
  • Use complexity analysis as a development-time safeguard.
  • Avoid running untrusted operations without additional validation and cost checks.
  • Account for list fields and abstract types, which can significantly increase cost.
  • Avoid estimating complexity before validation unless you’re confident in your tooling.
  • Use complexity analysis as part of your layered security strategy, alongside depth limits, field guards, and authentication.

Additional resources