How To Check for Duplicates in Google Sheets (Step-by-Step Guide)

Finding and handling duplicates in Google Sheets is something almost everyone runs into—whether you’re cleaning a contact list, reviewing survey results, or checking for repeated IDs. Google Sheets gives you a few different ways to spot and manage duplicates, from quick visual checks to formulas and built-in tools.

This guide walks through practical methods to check for duplicates in Google Sheets, what each method is good for, and which situations each one suits best.


What “Duplicates” Mean in Google Sheets

Before you start, it helps to be clear on what you actually consider a duplicate:

  • Duplicate cell: The exact same value appears more than once in a single column or row (e.g., the same email address appears twice in column B).
  • Duplicate row: Entire rows have identical values across some or all columns (e.g., the same person’s full record appears multiple times).
  • Partial duplicate: Only some fields match, like the same email but a slightly different name spelling.

Google Sheets doesn’t automatically know which of these you care about. How you check for duplicates depends on what you’re trying to deduplicate:

  • One column (e.g., emails, IDs, product codes)
  • Multiple columns together (e.g., first name + last name + date)
  • Entire rows

Method 1: Visually Highlight Duplicates with Conditional Formatting

This is the most common way to see duplicates at a glance—no formulas needed.

Highlight duplicates in a single column

  1. Select the column you want to check (for example, click the A at the top of column A).

  2. Go to Format → Conditional formatting.

  3. In the right sidebar, under “Apply to range”, make sure it shows your selected column (e.g., A:A).

  4. Under “Format rules”, open the dropdown and choose “Custom formula is”.

  5. In the formula box, enter:

    =COUNTIF($A:$A, A1) > 1 
  6. Choose a fill color or text style to highlight duplicates.

  7. Click Done.

Now, every value in column A that appears more than once is highlighted.
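The rule works by counting how many times each value occurs and flagging anything seen more than once. A minimal Python sketch of the same logic, using made-up sample data in place of a real column:

```python
from collections import Counter

# Hypothetical column A values, standing in for real spreadsheet data
column_a = ["ana@example.com", "bo@example.com", "ana@example.com", "cy@example.com"]

counts = Counter(column_a)

# Same test as the rule =COUNTIF($A:$A, A1) > 1: flag any value that repeats
highlighted = [value for value in column_a if counts[value] > 1]
print(highlighted)  # both copies of the repeated email are flagged
```

Note that, like the conditional-formatting rule, this flags every copy of a repeated value, including the first one.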

Highlight duplicates across multiple columns

If you want to mark duplicates in a specific range (say A2:C100 as a group), you can still use conditional formatting:

  1. Select the range (e.g., A2:C100).
  2. Format → Conditional formatting.
  3. For “Apply to range”, confirm A2:C100.
  4. Choose “Custom formula is”.
  5. Use a formula depending on what you mean by “duplicate”:

Option A: Duplicate based on one column (e.g., emails in column C)

=COUNTIF($C$2:$C$100, $C2) > 1 

Apply the rule to the whole range A2:C100 so that all columns in a duplicate row get highlighted.

Option B: Duplicate based on a combination (e.g., First Name + Last Name)

If column A is First Name and column B is Last Name:

=COUNTIFS($A$2:$A$100, $A2, $B$2:$B$100, $B2) > 1 

This highlights rows where both first and last name are repeated together.
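COUNTIFS treats the two columns as a single combined key: a row only counts as a match when both fields line up. A short Python sketch of that idea, with hypothetical name pairs:

```python
from collections import Counter

# Hypothetical (first_name, last_name) rows
rows = [("Ana", "Lee"), ("Bo", "Chan"), ("Ana", "Lee"), ("Ana", "Chan")]

# COUNTIFS matches on both fields at once, so count each pair as a single key
pair_counts = Counter(rows)

flags = [pair_counts[row] > 1 for row in rows]
print(flags)  # only the two identical ("Ana", "Lee") rows are True
```

("Ana", "Chan") is not flagged even though both "Ana" and "Chan" appear elsewhere, because the combination is unique.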

What this method is best for

  • Quick visual scan of duplicates.
  • When you don’t want to delete anything yet—just see where repeats are.
  • When your data is still changing; conditional formatting updates automatically.

Method 2: Use the Built-In “Remove Duplicates” Tool

Google Sheets has a Remove duplicates feature that can identify and optionally strip out duplicates for you.

How to use Remove duplicates

  1. Select the range of data you want to check (for example, A1:C500).
  2. Go to Data → Remove duplicates.
  3. In the popup:
    • Check “Data has header row” if your first row contains column titles.
    • Select the columns you want to use for duplicate checking:
      • Select one column (e.g., Email) to remove rows where that value is repeated.
      • Select multiple columns (e.g., First Name + Last Name + Email) to only treat rows as duplicates if all those fields match.
  4. Click Remove duplicates.

Sheets will show a small summary of how many duplicates were removed and how many unique rows remain.

Important note

  • This tool changes your data: it deletes duplicates, keeping only the first occurrence.
  • There’s no detailed list of what was removed; if you need a record, consider copying your data to another sheet first or using formulas instead.
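Under the hood, Remove duplicates keeps the first row for each key and drops later repeats. A rough Python equivalent, assuming hypothetical (name, email) rows deduplicated on the email column:

```python
# Keep the first occurrence of each key, like Data → Remove duplicates.
# Rows here are hypothetical (name, email) pairs; the dedup key is the email.
rows = [
    ("Ana Lee", "ana@example.com"),
    ("Bo Chan", "bo@example.com"),
    ("A. Lee", "ana@example.com"),  # repeated email, different name spelling
]

seen = set()
unique_rows = []
for row in rows:
    key = row[1]  # dedupe on the email column only
    if key not in seen:
        seen.add(key)
        unique_rows.append(row)

print(f"{len(rows) - len(unique_rows)} duplicate rows removed, "
      f"{len(unique_rows)} unique rows remain")
```

As with the built-in tool, the dropped row ("A. Lee") is simply gone, which is why copying your data first is a good habit.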

What this method is best for

  • One-time cleanup of a large dataset.
  • When you already have a backup or don’t mind deleting extra copies.
  • Simple, column-based duplicate detection (e.g., unique IDs).

Method 3: Mark Duplicates with a Formula

If you’d rather label duplicates instead of just highlighting or deleting them, formulas give you more control.

Mark duplicates in a column with “Duplicate” / “Unique”

Assume your values are in column A (starting from A2), and you want labels in column B.

In cell B2, enter:

=IF(COUNTIF($A$2:$A$100, A2) > 1, "Duplicate", "Unique") 

Then fill down (drag the small square at the bottom-right of B2) to copy the formula down the column.

What it does:

  • COUNTIF checks how many times the value in A2 appears in the range A2:A100.
  • If it’s more than 1, B2 shows “Duplicate”, otherwise “Unique”.
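The same labeling logic, sketched in Python with hypothetical values:

```python
from collections import Counter

values = ["a", "b", "a", "c"]  # hypothetical column A values

counts = Counter(values)  # total occurrences, like COUNTIF over the whole range
labels = ["Duplicate" if counts[v] > 1 else "Unique" for v in values]
print(labels)
```

Every copy of a repeated value gets the "Duplicate" label, which mirrors how the COUNTIF formula behaves.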

Mark only the second and later occurrences

Often you want to keep the first occurrence and only treat later ones as duplicates. For that, use:

In B2:

=IF(COUNTIF($A$2:A2, A2) > 1, "Duplicate", "First occurrence") 

  • The range $A$2:A2 expands as the formula is filled down, so each row only counts values from the top of the column down to the current row.
  • The first time a value appears, it’s labeled “First occurrence”.
  • Any later repetition is labeled “Duplicate”.
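The expanding range amounts to a running count: each row only knows about the values above it. A Python sketch of that running count, with hypothetical IDs:

```python
from collections import defaultdict

values = ["id-1", "id-2", "id-1", "id-3", "id-1"]  # hypothetical IDs

running = defaultdict(int)
labels = []
for v in values:
    running[v] += 1  # occurrences so far, like COUNTIF($A$2:A2, A2)
    labels.append("Duplicate" if running[v] > 1 else "First occurrence")

print(labels)
```

Unlike the whole-column COUNTIF version, only the second and third copies of "id-1" are flagged; the first stays labeled "First occurrence".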

Check duplicates across multiple columns

If a “duplicate row” means the same combination of, say, First Name and Last Name:

Assume:

  • First Name in column A
  • Last Name in column B

In C2:

=IF(COUNTIFS($A$2:$A$100, A2, $B$2:$B$100, B2) > 1, "Duplicate", "Unique") 

Or, to treat only later copies as duplicates:

=IF(COUNTIFS($A$2:A2, A2, $B$2:B2, B2) > 1, "Duplicate", "First occurrence") 

What this method is best for

  • When you need a clear, text-based flag (e.g., for reports or filters).
  • Situations where you might sort or filter based on duplicates.
  • When you don’t want to delete data automatically but want to see which entries are repeats.

Method 4: Use UNIQUE and COUNT Functions to Summarize Duplicates

Sometimes you don’t want to mark every duplicate row—you just want to know which values appear more than once.

List only the values that are duplicates

Assume your original values are in A2:A100.

  1. First, list unique values (for example, in column C):

    In C2:

    =UNIQUE(A2:A100) 
  2. Next to that list, count how many times each unique value appears. In D2:

    =COUNTIF($A$2:$A$100, C2) 
  3. Fill down in column D as far as the UNIQUE results extend.

  4. Now you can filter column D for values greater than 1 to see only the duplicated values.

You can also combine this idea into a single formula with QUERY or by applying filter views, but even this simple two-column setup already gives a clear summary of duplicates, including counts.
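The UNIQUE + COUNTIF pair boils down to a frequency table filtered to counts above 1. In Python, with hypothetical survey responses:

```python
from collections import Counter

values = ["red", "blue", "red", "green", "blue", "red"]  # hypothetical responses

counts = Counter(values)  # one pass gives both the unique values and their counts
duplicates = {value: n for value, n in counts.items() if n > 1}
print(duplicates)  # only the values that appear more than once, with counts
```

This is the summary view: it tells you which values repeat and how often, without flagging individual rows.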

What this method is best for

  • Overview of which entries are repeated and how often.
  • Analyzing survey responses, product IDs, or user accounts.
  • When you care more about unique values and their counts than row-by-row tagging.

Method 5: Sort and Manually Scan for Duplicates

This is a low-tech method, but still useful for small datasets.

  1. Select the column (or range) you care about.
  2. Go to Data → Sort range:
    • Choose “Advanced range sorting options” if you need to sort by a specific column and keep rows together.
  3. Sort A → Z (ascending) or Z → A (descending).

Once sorted, duplicate values sit next to each other, making them easier to spot. You can then:

  • Manually delete rows (for small lists).

  • Use a helper column with a formula like:

    =A2 = A1 

    entered in B2 and filled down, which shows TRUE whenever the current row matches the one above it.
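The sort-and-compare approach can be sketched in a few lines of Python, using a hypothetical list:

```python
values = ["banana", "apple", "cherry", "apple", "banana"]  # hypothetical list

ordered = sorted(values)  # sorting puts duplicate values next to each other
# Helper-column check =A2 = A1: does each row match the one above it?
matches_previous = [ordered[i] == ordered[i - 1] for i in range(1, len(ordered))]
print(ordered)
print(matches_previous)
```

Each True marks the second or later copy of a value, which is exactly what the helper column shows after sorting.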

What this method is best for

  • Very small lists where setting up formatting or formulas isn’t worth it.
  • Quick visual checks.
  • When you want to manually review each duplicate before acting.

Key Factors That Change How You Check Duplicates

Not every method suits every situation. A few variables shape which approach works best:

1. Data size

  • Small datasets (dozens or low hundreds of rows):
    • Manual sorting and scanning can be enough.
    • Simple conditional formatting or a single helper column works well.
  • Large datasets (thousands of rows or more):
    • You’ll likely rely on Remove duplicates, formulas like COUNTIF / COUNTIFS, and UNIQUE.
    • Efficiency and performance start to matter more.

2. Type of data

  • IDs, emails, usernames:
    • Usually treated as strictly unique per row.
    • Column-based duplicate checks (COUNTIF, Remove duplicates on that column) are typically enough.
  • Names, addresses, descriptive text:
    • More likely to contain typos or variations:
      • Jon vs John
      • 123 Main St. vs 123 Main Street
    • You might treat these as duplicates or not, depending on your rules.
    • Basic duplicate tools won’t catch near-duplicates without more complex logic.

3. Structure of your sheet

  • Single key column (like a “Customer ID”):
    • Makes duplicate checking straightforward: focus on that column.
  • Multiple columns define uniqueness:
    • You’ll use COUNTIFS, multi-column Remove duplicates, or combined conditional formatting rules.
  • Data spread across multiple sheets:
    • You may need cross-sheet formulas referencing ranges from other tabs to check whether a value appears elsewhere.

4. Your comfort with formulas

  • Beginner / casual user:
    • Likely to prefer:
      • Remove duplicates in the Data menu
      • Conditional formatting with prebuilt rules
  • Intermediate / advanced user:
    • May combine:
      • COUNTIF, COUNTIFS
      • UNIQUE, FILTER, QUERY
    • To build more flexible deduplication workflows.

5. What you want to do with duplicates

  • Just see them:
    • Conditional formatting is ideal.
  • Label and maybe sort or filter later:
    • Helper columns with formulas (Duplicate / Unique flags) work well.
  • Permanently clean up the sheet:
    • Remove duplicates is faster, but more destructive.
  • Report-level summary:
    • Use UNIQUE + COUNTIF to get counts of repeated values.

Different User Scenarios and How Results Vary

How you use these tools, and which approach counts as “best” for you, varies a lot from case to case:

  • Freelancer managing client lists
    Might mainly care about duplicate emails when sending updates. A simple column-based duplicate check with conditional formatting or Remove duplicates might be plenty, but the exact column choice and how cautious they need to be about deleting rows will change their setup.

  • Small business tracking orders
    Could need to catch duplicate order IDs or repeated invoice numbers to avoid billing mistakes. They may rely more heavily on formulas that flag only the later occurrences, to preserve data history.

  • Teacher or researcher analyzing survey results
    Might want to see if the same person submitted multiple responses by email or ID, while still keeping original entries for comparison. They might use formulas and helper columns rather than automatic removal, and the columns they treat as “unique identifiers” will vary.

  • Operations or data team with large spreadsheets
    Often handles thousands of rows and might need more complex rules for duplicates across multiple fields or sheets. Performance, collaboration, and version history become factors, shaping whether they lean on formulas, scripts, or manual tools.

Each of these cases uses the same core tools in Google Sheets, but with different columns selected, different rules for what “duplicate” means, and different levels of caution about deleting data.


Bringing It Back to Your Own Sheet

Google Sheets gives you several solid ways to check for duplicates—visual highlighting, built-in removal, formula-based flags, and summary lists. Which one fits best depends on:

  • Your data size
  • Whether you define duplicates by one column or several
  • How sensitive your data is to accidental deletion
  • Your own comfort level with formulas
  • Whether you need a quick visual check, a stable label, or a cleaned-up dataset

Once you’re clear on those details in your own file, the right combination of methods and rules tends to fall into place.