
Fix: MongoDB E11000 duplicate key error collection

FixDevs

Quick Answer

Fix the MongoDB E11000 duplicate key error by identifying the duplicate field, resolving index conflicts, using upserts, handling null values, and adding retry logic for race conditions.

The Error

You try to insert or update a document in MongoDB and get:

E11000 duplicate key error collection: mydb.users index: email_1 dup key: { email: "john@example.com" }

Or one of these variations:

E11000 duplicate key error collection: mydb.users index: _id_ dup key: { _id: ObjectId('507f1f77bcf86cd799439011') }
MongoServerError: E11000 duplicate key error collection: mydb.orders index: orderNumber_1 dup key: { orderNumber: "ORD-1001" }
WriteError: E11000 duplicate key error collection: mydb.products index: sku_1 dup key: { sku: null }
mongoose.MongoServerError: E11000 duplicate key error collection: test.users index: username_1 dup key: { username: "admin" }

All of these mean the same thing: MongoDB tried to insert or update a document, but the value being written already exists in a field (or combination of fields) that has a unique index. MongoDB refuses to create a duplicate entry and throws error code E11000.

Why This Happens

MongoDB enforces uniqueness through unique indexes. The _id field always has a unique index by default. When you create an additional unique index on a field like email or username, MongoDB guarantees no two documents in that collection can share the same value for that field.

The E11000 error fires when an insert or update would violate that guarantee. Here are the most common causes:

  • Explicit _id conflicts. You manually set the _id field and the value already exists in the collection. This happens frequently during data migrations, seed scripts, or when importing JSON/CSV dumps.
  • Application-level duplicates. Your code inserts a document with a value that genuinely conflicts with existing data. A user tries to register with an email that is already taken, for example.
  • Stale or orphaned indexes. A unique index exists on a field that your application no longer uses or has renamed. The index still enforces uniqueness on the old field, causing unexpected conflicts.
  • Null values treated as duplicates. If a unique index exists on a field and multiple documents are missing that field entirely, MongoDB treats all of them as having the value null. The second document with a missing field triggers E11000. This is one of the most confusing causes — similar to how Python KeyError surprises developers when a key is unexpectedly absent.
  • Race conditions. Two requests try to insert documents with the same unique value at nearly the same time. One succeeds, the other gets E11000.
  • Compound index mismatches. A unique index spans multiple fields, and the combination of values across those fields already exists, even if individual fields appear unique.

Understanding the exact cause is critical. The error message itself tells you which collection, which index, and which value caused the conflict. Start there.

Fix 1: Identify the Duplicate Field

Before applying any fix, parse the error message carefully. Every E11000 error tells you exactly what went wrong.

Break down this error:

E11000 duplicate key error collection: mydb.users index: email_1 dup key: { email: "john@example.com" }
  • Collection: mydb.users — the collection where the conflict occurred.
  • Index: email_1 — the name of the unique index that was violated.
  • Dup key: { email: "john@example.com" } — the exact value that already exists.
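In application code you can pull the same three pieces out of the message string. A minimal sketch, assuming the standard E11000 message layout shown above; `parseE11000` is a hypothetical helper, not a driver API (recent Node drivers also expose `error.code` and `error.keyValue` directly, so string parsing is only a fallback):

```javascript
// Parse the collection, index name, and duplicate key out of an
// E11000 error message. Returns null if the message does not match.
function parseE11000(message) {
  const match = message.match(
    /E11000 duplicate key error collection: (\S+) index: (\S+) dup key: (\{.*\})/
  );
  if (!match) return null;
  return { collection: match[1], index: match[2], dupKey: match[3] };
}

const info = parseE11000(
  'E11000 duplicate key error collection: mydb.users index: email_1 dup key: { email: "john@example.com" }'
);
console.log(info.collection); // mydb.users
console.log(info.index);      // email_1
```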

Find the existing document that holds the conflicting value:

db.users.find({ email: "john@example.com" })

Check all unique indexes on the collection:

db.users.getIndexes()

This returns an array of index definitions. Look for any index with "unique": true:

[
  { v: 2, key: { _id: 1 }, name: '_id_' },
  { v: 2, key: { email: 1 }, name: 'email_1', unique: true },
  { v: 2, key: { username: 1 }, name: 'username_1', unique: true }
]

Now you know which fields enforce uniqueness and can decide whether to update the existing document, remove the conflicting index, or change the value you are trying to insert.

Pro Tip: In a Mongoose application, the index name in the error message maps directly to the field in your schema that has unique: true. If the index name is email_1, the field is email. The _1 suffix indicates an ascending index. If you see a compound index name like firstName_1_lastName_1, the uniqueness constraint spans both fields together.
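The index-name convention can be decoded mechanically. A sketch that splits a name like `firstName_1_lastName_1` into field/direction pairs; `indexNameToFields` is my own helper name, and the approach assumes the field names themselves contain no underscores (names like `_id_` would break it):

```javascript
// Split an index name of the form field_dir[_field_dir...] into
// { field, direction } pairs, where direction is 1 or -1.
function indexNameToFields(name) {
  const parts = name.split('_');
  const fields = [];
  for (let i = 0; i < parts.length; i += 2) {
    // Each field name is followed by its sort direction.
    fields.push({ field: parts[i], direction: Number(parts[i + 1]) });
  }
  return fields;
}

console.log(indexNameToFields('firstName_1_lastName_1'));
// [ { field: 'firstName', direction: 1 }, { field: 'lastName', direction: 1 } ]
```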

Fix 2: Fix _id Field Conflicts

The _id field is the most fundamental unique constraint in MongoDB. If you manually set _id values, you are responsible for ensuring they do not collide.

Check if a document with that _id already exists:

db.users.findOne({ _id: ObjectId("507f1f77bcf86cd799439011") })

If you are importing data or running a seed script, you have two options. Either remove the _id field from your documents and let MongoDB generate them automatically:

// Before - explicit _id causes conflicts on re-import
{ _id: ObjectId("507f1f77bcf86cd799439011"), name: "Alice" }

// After - let MongoDB generate _id
{ name: "Alice" }

Or drop the collection before re-importing:

db.users.drop()

If you need to keep existing data and handle conflicts gracefully, use bulkWrite with updateOne and upsert: true:

db.users.bulkWrite([
  {
    updateOne: {
      filter: { _id: ObjectId("507f1f77bcf86cd799439011") },
      update: { $set: { name: "Alice", email: "alice@example.com" } },
      upsert: true
    }
  }
])

This inserts the document if the _id does not exist, or updates it if it does.

Fix 3: Drop and Recreate a Problematic Index

Sometimes a unique index should not exist. This happens when you remove a unique: true constraint from your schema but the index persists in the database. MongoDB does not automatically remove indexes when you change application code.

List all indexes:

db.users.getIndexes()

Drop the problematic index by name:

db.users.dropIndex("email_1")

If you still need the index but not the uniqueness constraint, recreate it without unique:

db.users.createIndex({ email: 1 })

If the index is correct and should remain unique, the problem is in your data. Find and remove or update the duplicate documents instead:

// Find all duplicate values for the email field
db.users.aggregate([
  { $group: { _id: "$email", count: { $sum: 1 }, docs: { $push: "$_id" } } },
  { $match: { count: { $gt: 1 } } }
])

This aggregation pipeline groups documents by email, counts them, and returns only groups with more than one document. You can then decide which duplicates to remove or merge.
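Once you have the aggregation output, deciding which documents to delete is plain list manipulation. A sketch that keeps the first `_id` in each group and collects the rest for removal; `idsToRemove` is my own helper name, and the input shape mirrors the `$group` output above:

```javascript
// Given groups shaped like the aggregation output
// ({ _id: <email>, count, docs: [<_id>, ...] }), keep the first
// document in each group and collect the remaining ids for deletion.
function idsToRemove(duplicateGroups) {
  return duplicateGroups.flatMap(group => group.docs.slice(1));
}

const groups = [
  { _id: 'john@example.com', count: 2, docs: ['a1', 'a2'] },
  { _id: 'jane@example.com', count: 3, docs: ['b1', 'b2', 'b3'] }
];

console.log(idsToRemove(groups)); // [ 'a2', 'b2', 'b3' ]
// You could then run: db.users.deleteMany({ _id: { $in: idsToRemove(groups) } })
```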

Warning: Dropping an index on a production database can temporarily impact query performance. Queries that relied on that index will fall back to collection scans until the index is recreated. Plan index changes during low-traffic windows.

This type of index management issue is conceptually similar to constraint violations in relational databases — if you have worked with PostgreSQL, you may recognize the pattern from fixing duplicate key constraint violations there.

Fix 4: Use Upsert Instead of Insert

If your application logic should either create a new document or update an existing one, replace insertOne with updateOne using upsert: true. This eliminates E11000 errors entirely for that operation.

Instead of this:

// This throws E11000 if email already exists
db.users.insertOne({ email: "john@example.com", name: "John", role: "user" })

Use this:

db.users.updateOne(
  { email: "john@example.com" },
  { $set: { name: "John", role: "user" } },
  { upsert: true }
)

The filter ({ email: "john@example.com" }) checks if a matching document exists. If it does, the $set operation updates it. If it does not, MongoDB inserts a new document combining the filter and the update fields.

For bulk operations, use bulkWrite:

const operations = users.map(user => ({
  updateOne: {
    filter: { email: user.email },
    update: { $set: user },
    upsert: true
  }
}));

db.users.bulkWrite(operations);

In Mongoose, the equivalent is findOneAndUpdate with upsert:

await User.findOneAndUpdate(
  { email: "john@example.com" },
  { name: "John", role: "user" },
  { upsert: true, new: true }
);

The new: true option returns the updated document instead of the original.

Common Mistake: Do not use $set on fields that are part of your filter when using upserts. If your filter is { email: "john@example.com" } and your update is { $set: { email: "john@example.com", name: "John" } }, the email field in $set is redundant. MongoDB already uses the filter value when creating a new document. While this is not harmful, it adds unnecessary overhead and makes the code harder to read.

Fix 5: Fix Unique Index on Null or Missing Fields

This is one of the most common and confusing causes of E11000. If you have a unique index on a field like phone, and multiple documents do not have a phone field at all, MongoDB treats the missing field as null. The second document without phone violates the unique constraint because null already exists.

Check how many documents have a null or missing value for the field:

db.users.countDocuments({ phone: null })

The fix is to use a partial filter expression on the index. This tells MongoDB to only enforce uniqueness on documents where the field actually exists:

// Drop the old index
db.users.dropIndex("phone_1")

// Create a partial unique index
db.users.createIndex(
  { phone: 1 },
  {
    unique: true,
    partialFilterExpression: { phone: { $exists: true } }
  }
)

Now documents without a phone field are excluded from the unique index entirely. Two documents can both lack a phone field without conflict.

If you also want to exclude empty strings:

db.users.createIndex(
  { phone: 1 },
  {
    unique: true,
    partialFilterExpression: { phone: { $type: "string", $gt: "" } }
  }
)
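You can estimate which documents a partial filter would cover before creating the index. A sketch mirroring `{ phone: { $exists: true } }` in plain JavaScript; `coveredByPartialIndex` is my own name, and note that `$exists: true` matches fields explicitly set to null:

```javascript
// Mirror the partial filter { phone: { $exists: true } }: a document
// is covered by the index only when the field is present at all
// (even if its value is null).
function coveredByPartialIndex(doc) {
  return Object.prototype.hasOwnProperty.call(doc, 'phone');
}

const docs = [
  { name: 'Alice', phone: '555-0100' },
  { name: 'Bob' },                 // field missing: excluded from index
  { name: 'Carol', phone: null }   // field present as null: still covered
];
console.log(docs.filter(coveredByPartialIndex).length); // 2
```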

In Mongoose schemas, you handle this by combining unique with sparse or by defining a partial filter:

const userSchema = new mongoose.Schema({
  email: { type: String, required: true, unique: true },
  phone: {
    type: String,
    index: {
      unique: true,
      partialFilterExpression: { phone: { $exists: true } }
    }
  }
});

Note: The older sparse: true option also skips documents where the field is missing, but it does not handle cases where the field is explicitly set to null. Partial filter expressions give you more control and are the recommended approach for MongoDB 3.2 and later.

Fix 6: Fix Mongoose Unique Validation

Mongoose’s unique: true schema option is not a validator — it creates a MongoDB unique index. This distinction causes several issues that trip up developers.

Problem 1: Index not yet created. If you add unique: true to a schema and immediately try to insert documents, the index may not be built yet. Mongoose creates indexes asynchronously on application startup.

Wait for indexes to build before inserting:

const User = mongoose.model('User', userSchema);

// Wait for all indexes to be created
await User.init();

// Now inserts will properly enforce uniqueness
await User.create({ email: "john@example.com" });

Or listen for the index event:

User.on('index', (error) => {
  if (error) {
    console.error('Index creation failed:', error);
  }
});

Problem 2: Changing unique constraints does not update existing indexes. If you remove unique: true from a field in your Mongoose schema, the index still exists in the database. You must drop it manually.

Connect to MongoDB and drop the stale index:

await User.collection.dropIndex("email_1");

Problem 3: Mongoose error message format. Mongoose wraps the E11000 error but does not format it cleanly by default. Use a plugin or middleware to convert it into a validation-style error:

userSchema.post('save', function (error, doc, next) {
  if (error.name === 'MongoServerError' && error.code === 11000) {
    const field = Object.keys(error.keyValue)[0];
    next(new Error(`A document with that ${field} already exists.`));
  } else {
    next(error);
  }
});

This middleware catches the E11000 error after a save operation and replaces it with a human-readable message. You can also use the popular mongoose-unique-validator package, though handling it manually gives you more control.
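The field-extraction step in that middleware is pure object work, so it can be exercised without a database. A sketch of the same logic as a standalone function; `duplicateFieldMessage` is a hypothetical helper name, and the error shape assumes the `code` and `keyValue` properties MongoDB attaches to E11000 errors:

```javascript
// Build a human-readable message from the keyValue object on an
// E11000 error (e.g. { email: "john@example.com" }). Returns null
// for errors that are not duplicate-key errors.
function duplicateFieldMessage(error) {
  if (error.code !== 11000 || !error.keyValue) return null;
  const field = Object.keys(error.keyValue)[0];
  return `A document with that ${field} already exists.`;
}

const fakeError = { code: 11000, keyValue: { email: 'john@example.com' } };
console.log(duplicateFieldMessage(fakeError));
// A document with that email already exists.
```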

If your Mongoose application connects to a remote MongoDB instance and you are seeing connection-related issues alongside E11000 errors, check your connection configuration, since network problems can cause retry logic to send duplicate inserts. As with Redis WRONGTYPE errors, the root cause is often environmental rather than logical.

Fix 7: Handle Race Conditions with Retry Logic

In high-concurrency applications, two requests can try to insert documents with the same unique value at almost the same time. Both check if the value exists, both get “no,” and both try to insert. One succeeds, the other gets E11000.

The fix is to catch the error and retry with an upsert:

async function createUser(userData) {
  try {
    return await db.collection('users').insertOne(userData);
  } catch (error) {
    if (error.code === 11000) {
      // Document was created by another request, update instead
      return await db.collection('users').updateOne(
        { email: userData.email },
        { $set: userData }
      );
    }
    throw error;
  }
}
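The catch-and-fall-back pattern generalizes to any insert/update pair. A sketch with the two operations injected as functions, so the control flow can be exercised without a live server; `insertOrUpdate` is my own name, not a driver API:

```javascript
// Run insertFn; if it rejects with duplicate-key code 11000, fall
// back to updateFn. Any other error is re-thrown to the caller.
async function insertOrUpdate(insertFn, updateFn) {
  try {
    return await insertFn();
  } catch (error) {
    if (error.code === 11000) {
      return await updateFn();
    }
    throw error;
  }
}

// Simulate losing the race: the insert fails with a duplicate key,
// so the fallback update runs instead.
const resultPromise = insertOrUpdate(
  async () => { const e = new Error('E11000'); e.code = 11000; throw e; },
  async () => ({ updated: true })
);
resultPromise.then(r => console.log(r)); // { updated: true }
```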

For cases where you want “insert if not exists” without updating existing data, use $setOnInsert:

async function ensureUser(userData) {
  const result = await db.collection('users').updateOne(
    { email: userData.email },
    { $setOnInsert: userData },
    { upsert: true }
  );

  return result;
}

The $setOnInsert operator only applies the update when a new document is created. If the document already exists, nothing changes. This is an atomic operation, so it is safe against race conditions.

In Mongoose:

async function ensureUser(userData) {
  return await User.findOneAndUpdate(
    { email: userData.email },
    { $setOnInsert: userData },
    { upsert: true, new: true }
  );
}

For bulk inserts where some documents may conflict, use ordered: false to continue inserting even when some documents fail:

try {
  await db.collection('users').insertMany(documents, { ordered: false });
} catch (error) {
  if (error.code === 11000) {
    // Some documents were duplicates, but the rest were inserted.
    // Older drivers report the count as error.result.nInserted; the
    // 4.x+ Node driver exposes error.result.insertedCount instead.
    console.log(`Inserted ${error.result.nInserted} of ${documents.length} documents`);
  } else {
    throw error;
  }
}

With ordered: false, MongoDB attempts to insert all documents regardless of failures. It collects all errors and reports them at the end. This is significantly faster than inserting one at a time when you expect some duplicates.
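When you need to know exactly which documents were rejected, the bulk error object carries a list of individual write errors. A sketch that summarizes them; `summarizeDuplicates` is my own name, and the input shape assumes the `writeErrors` entries used by the Node driver (each with a `code` and the position of the failing document in the batch):

```javascript
// Count how many of a bulk operation's write errors were
// duplicate-key (11000) failures, and record the batch positions
// of the rejected documents.
function summarizeDuplicates(writeErrors) {
  const dups = writeErrors.filter(e => e.code === 11000);
  return { duplicateCount: dups.length, positions: dups.map(e => e.index) };
}

const fakeWriteErrors = [
  { index: 1, code: 11000, errmsg: 'E11000 duplicate key error ...' },
  { index: 4, code: 11000, errmsg: 'E11000 duplicate key error ...' }
];
console.log(summarizeDuplicates(fakeWriteErrors));
// { duplicateCount: 2, positions: [ 1, 4 ] }
```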

This pattern of defensive coding around database operations applies broadly — similar to how you would handle MySQL syntax errors by validating queries before execution rather than relying solely on error handling.

Fix 8: Fix Compound Unique Index Issues

A compound unique index enforces uniqueness across a combination of fields, not each field individually. This is a common source of confusion.

Given this index:

db.orders.createIndex({ customerId: 1, orderDate: 1 }, { unique: true })

These two documents can coexist because the combination is different:

{ customerId: "C001", orderDate: "2026-03-10" }
{ customerId: "C001", orderDate: "2026-03-11" }  // different date, OK

But this would fail:

{ customerId: "C001", orderDate: "2026-03-10" }
{ customerId: "C001", orderDate: "2026-03-10" }  // same combination, E11000

If you are getting E11000 on a compound index, check the full combination of values. Inspect the index to see which fields are included:

db.orders.getIndexes()

Look for the index name from the error message and check its key field.

Find duplicates on a compound index:

db.orders.aggregate([
  {
    $group: {
      _id: { customerId: "$customerId", orderDate: "$orderDate" },
      count: { $sum: 1 },
      docs: { $push: "$_id" }
    }
  },
  { $match: { count: { $gt: 1 } } }
])
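The same grouping can be checked client-side before an import. A sketch that flags documents whose combination of key fields already appeared earlier in an array; `findCompoundDuplicates` is my own name, and joining key values with a NUL separator assumes that character never appears in the data:

```javascript
// Return the documents whose combination of the given fields has
// already been seen earlier in the array (i.e. the ones that would
// trigger E11000 under a compound unique index).
function findCompoundDuplicates(docs, fields) {
  const seen = new Set();
  const dups = [];
  for (const doc of docs) {
    const key = fields.map(f => String(doc[f])).join('\u0000');
    if (seen.has(key)) dups.push(doc);
    else seen.add(key);
  }
  return dups;
}

const orders = [
  { customerId: 'C001', orderDate: '2026-03-10' },
  { customerId: 'C001', orderDate: '2026-03-11' },
  { customerId: 'C001', orderDate: '2026-03-10' } // duplicate combination
];
console.log(findCompoundDuplicates(orders, ['customerId', 'orderDate']).length); // 1
```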

If the compound index should not enforce uniqueness, drop it and recreate it without unique:

db.orders.dropIndex("customerId_1_orderDate_1")
db.orders.createIndex({ customerId: 1, orderDate: 1 })

If the uniqueness constraint is correct but your data has duplicates from before the index was created, clean up the duplicates first. MongoDB will refuse to create a unique index on a collection that already contains duplicate values for the indexed fields.

A common scenario involves adding a timestamp or a counter to break the uniqueness:

db.orders.createIndex(
  { customerId: 1, orderDate: 1, sequenceNumber: 1 },
  { unique: true }
)

This allows multiple orders per customer per day, as long as each has a different sequenceNumber.

If you are troubleshooting MongoDB connectivity alongside data errors, see Fix: MongoDB connect ECONNREFUSED.

Still Not Working?

If none of the fixes above resolved your E11000 error, try these less obvious approaches:

  • Check for hidden indexes. Run db.collection.getIndexes() and look for indexes you do not recognize. ORMs, migration tools, and previous developers may have created indexes that are no longer relevant. Drop any that should not be there.

  • Check capped collections. Capped collections have restrictions on updates that change document size. If you are using a capped collection and getting E11000, verify your update operations are not inadvertently creating conflicts.

  • Inspect write concern. If you are using writeConcern: { w: 0 } (unacknowledged writes), errors are silently swallowed. Switch to { w: 1 } or { w: "majority" } temporarily to see if errors are occurring that you are not catching.

  • Check replica set lag. In a replica set, reading from a secondary and writing to the primary can cause your application to think a value does not exist (stale read from secondary) and then insert it (write to primary), only to discover a duplicate. Use readPreference: "primary" for reads that precede unique-constrained inserts.

  • Look at TTL indexes. If you have a TTL (time-to-live) index on the collection, documents may not be deleted immediately. MongoDB’s TTL deletion runs every 60 seconds. If you delete and re-insert a document with the same unique value within that window, you may hit E11000 because the original has not been physically removed yet.

  • Restart your application. If you are using Mongoose and changed schema definitions, cached models may hold stale index definitions. A full application restart forces Mongoose to reconcile indexes with the database.

  • Use MongoDB Compass. The GUI tool lets you visually inspect indexes, browse documents, and run aggregation pipelines. Sometimes seeing the data makes the conflict obvious in a way that shell output does not.


FixDevs

Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
