BETTER-CONVEX

Update

Update rows with Drizzle-style builders

In this guide, we'll learn how to update rows using the ORM's Drizzle-style update() builder. You'll see basic updates, returning clauses, paginated execution, and async batching for large workloads.

Basic Update

Let's start with a mutation that renames a user by id:

convex/functions/users.ts
import { z } from 'zod';
import { eq } from 'kitcn/orm';
import { publicMutation } from '../lib/crpc';
import { users } from '../schema';

export const renameUser = publicMutation
  .input(z.object({ userId: z.string(), name: z.string() }))
  .mutation(async ({ ctx, input }) => {
    await ctx.orm
      .update(users)
      .set({ name: input.name })
      .where(eq(users.id, input.userId));
  });

Important: update() without .where(...) throws unless you call .allowFullScan(). See Querying Data for details on allowFullScan.
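For contrast, a deliberate full-table write looks like this (a sketch; the isActive column is illustrative and not part of the schema above):

```typescript
// Illustrative only: deactivate every user in the table.
// Without .allowFullScan(), this builder throws because no
// .where(...) bounds the scan.
await ctx.orm
  .update(users)
  .set({ isActive: false })
  .allowFullScan();
```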

Returning

Use .returning() to get back the updated rows. You can return all fields or pick specific columns:

const updated = await ctx.orm
  .update(users)
  .set({ name: 'Mr. Dan' })
  .where(eq(users.id, userId))
  .returning();

const ids = await ctx.orm
  .update(users)
  .set({ name: 'Mr. Dan' })
  .where(eq(users.id, userId))
  .returning({ id: users.id });

The ORM collects matching rows in bounded pages before applying writes. See API Reference -- Safety Limits for defaults and override syntax.

Paginated Update Execution

For large workloads that exceed safety limits, you can process updates page-by-page. This follows Convex's batching pattern and avoids one large transaction.

Here's how to process updates across multiple pages. This requires an index on the filtered field:

// Schema: index('by_role').on(t.role) on users table
const page1 = await ctx.orm
  .update(users)
  .set({ role: 'member' })
  .where(eq(users.role, 'pending'))
  .paginate({ cursor: null, limit: 100 });

if (!page1.isDone) {
  const page2 = await ctx.orm
    .update(users)
    .set({ role: 'member' })
    .where(eq(users.role, 'pending'))
    .paginate({ cursor: page1.continueCursor, limit: 100 });
}

Each page returns:

  • continueCursor -- cursor for the next batch
  • isDone -- true when no more pages remain
  • numAffected -- rows updated in this page
  • page -- returned rows (only when .returning() is used)

Note: paginate() currently supports single-range index plans. Multi-probe filters (inArray, some OR patterns, complement ranges) are not yet supported in paged mutation mode.
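Generalizing the two-page example above, paginate() follows a standard cursor loop: feed continueCursor back in until isDone. The sketch below mocks the page shape in plain TypeScript so the control flow stands alone; in real code the loop body would be the ctx.orm.update(...).paginate(...) call shown above, typically with each page running in its own mutation.

```typescript
// Page shape returned by .paginate(), per the list above
// (page itself omitted since .returning() is not used here).
interface UpdatePage {
  continueCursor: string | null;
  isDone: boolean;
  numAffected: number;
}

// Stand-in for ctx.orm.update(...).paginate(...): pretends there are
// 250 matching rows and updates up to `limit` of them per call.
function mockPaginate(cursor: string | null, limit: number): UpdatePage {
  const offset = cursor === null ? 0 : Number(cursor);
  const total = 250;
  const affected = Math.min(limit, total - offset);
  const next = offset + affected;
  return {
    continueCursor: next < total ? String(next) : null,
    isDone: next >= total,
    numAffected: affected,
  };
}

// The cursor loop: keep passing continueCursor back in until isDone.
let cursor: string | null = null;
let totalAffected = 0;
let page: UpdatePage;
do {
  page = mockPaginate(cursor, 100);
  totalAffected += page.numAffected;
  cursor = page.continueCursor;
} while (!page.isDone);

console.log(totalAffected); // 250
```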

Async Batched Update

Updates run in async mode by default. The first batch runs in the current mutation, then remaining batches are scheduled automatically.

Customize batch size and delay per call:

const firstBatch = await ctx.orm
  .update(users)
  .set({ role: 'member' })
  .where(eq(users.role, 'pending'))
  .returning({ id: users.id })
  .execute({ batchSize: 200, delayMs: 0 });

See API Reference -- Async Batched Update Behaviors for resolution precedence details.

To force sync execution (all rows in a single transaction), use .execute({ mode: 'sync' }) or set defineSchema(..., { defaults: { mutationExecutionMode: 'sync' } }).
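For instance, applied to the role update above (a sketch using the same builder):

```typescript
// All matching rows are written in one transaction; per the
// Safety Limits section, this throws if the match count
// exceeds mutationMaxRows.
await ctx.orm
  .update(users)
  .set({ role: 'member' })
  .where(eq(users.role, 'pending'))
  .execute({ mode: 'sync' });
```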

Drizzle Differences

A few SQL-only features from Drizzle are not applicable in Convex:

  • limit, orderBy, UPDATE ... FROM, and WITH clauses are not supported
  • undefined values passed to .set(...) are ignored (treated as "not provided"). If everything is undefined, the update is a no-op.
  • to explicitly remove a field, use unsetToken: .set({ nickname: unsetToken }) (shallow: unsets the top-level field only)
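To make the last point concrete, here is a sketch of unsetToken in a full statement (assuming unsetToken is exported from the same module as eq):

```typescript
import { eq, unsetToken } from 'kitcn/orm';

// Removes the top-level 'nickname' field from the matched row.
// Shallow: nested fields cannot be unset this way.
await ctx.orm
  .update(users)
  .set({ nickname: unsetToken })
  .where(eq(users.id, userId));
```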

Note: Unique constraints, foreign keys, and RLS policies are enforced at runtime for ORM mutations. Direct native Convex writes like ctx.db.patch(...) bypass these checks (and are intentionally not exposed on ctx.orm).

You now have everything you need to update data, from simple field changes to large-scale async batching.

API Reference

Safety Limits

The ORM collects matching rows in bounded pages before applying writes. The key defaults are:

  • mutationBatchSize: 400
  • mutationMaxRows: 10000
  • mutationLeafBatchSize: 1600 (async FK fan-out)

If matched rows exceed mutationMaxRows, the update throws. You can customize these values in your schema:

export default defineSchema({ users, posts }, {
  defaults: {
    mutationBatchSize: 200,
    mutationMaxRows: 5000,
  },
});

For the full list of configurable defaults, see Schema Definition -- Runtime Defaults.

Async Batched Update Behaviors

  • With .returning(), you get rows from the first batch only; remaining batches are scheduled
  • Async mode cannot be combined with .paginate()
  • batchSize resolves as: per-call batchSize > defaults.mutationBatchSize > 400
  • delayMs resolves as: per-call delayMs > defaults.mutationAsyncDelayMs > 0
  • Async FK update fan-out (onUpdate: 'cascade', set null, set default) uses mutationLeafBatchSize
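The two resolution chains above are plain nullish fallbacks. A minimal runnable sketch (the helper names are ours for illustration, not part of the ORM's API):

```typescript
// Hypothetical helpers mirroring the documented precedence:
// per-call value > schema default > built-in fallback.
const resolveBatchSize = (perCall?: number, schemaDefault?: number): number =>
  perCall ?? schemaDefault ?? 400;

const resolveDelayMs = (perCall?: number, schemaDefault?: number): number =>
  perCall ?? schemaDefault ?? 0;

console.log(resolveBatchSize(200, 100)); // 200 (per-call wins)
console.log(resolveBatchSize(undefined, 100)); // 100 (schema default)
console.log(resolveBatchSize()); // 400 (built-in fallback)
console.log(resolveDelayMs(undefined, 50)); // 50
```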
