JSON Validation Common Errors: Trailing Commas, Quotes, and Control Characters
DevTools

Site Developer
2025-12-25

Quick answer: Most JSON validation failures are simple syntax issues. Trailing commas, single quotes, and unescaped newlines are the top offenders. Validate first, fix the smallest thing, and validate again. Use /json-formatter to validate and get an exact error location.

Trailing commas (the classic copy/paste problem)

Many editors allow trailing commas, but JSON does not. This is common when JSON was edited by hand or generated from a relaxed format. A trailing comma appears right before a closing } or ].

How to fix:

  1. Validate to find the approximate location.
  2. Look for a comma immediately before } or ].
  3. Remove the comma and validate again.
  4. Repeat until the JSON parses cleanly.
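
The loop above can be sketched in a few lines. This is a minimal illustration with a hypothetical payload; strict parsers such as `JSON.parse` reject the trailing comma, and the fixed string parses cleanly:

```javascript
// Hypothetical payload with a trailing comma before the closing brace.
const raw = '{"id": 1, "name": "demo",}';

try {
  JSON.parse(raw);
} catch (err) {
  // Strict JSON parsers reject the trailing comma with a SyntaxError.
  console.log(err instanceof SyntaxError); // true
}

// After removing the single offending comma, the payload parses cleanly.
const fixed = '{"id": 1, "name": "demo"}';
console.log(JSON.parse(fixed).name); // demo
```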

Why it matters:

  • A single trailing comma can break an entire API request.
  • Gateways may return generic 400 errors that hide the real cause.

Key takeaways

  • JSON forbids trailing commas, even though JavaScript and many editors accept them.
  • The offending comma sits immediately before a closing } or ], which is where to look when a validator reports an error near the end of an object or array.
  • Gateways often surface these as generic 400 errors, so validate the payload yourself rather than trusting the status code.

Common pitfalls

  • Mistake: deleting every comma near the reported position instead of only the one before } or ].
  • Mistake: fixing several things at once, so you cannot tell which change made the JSON parse.
  • Mistake: losing the original payload, making it impossible to reproduce the issue.

Quick checklist

  1. Validate and note the reported error position.
  2. Find the comma immediately before } or ].
  3. Remove it and validate again.
  4. Repeat until the JSON parses cleanly.

Single quotes and unquoted keys (not JSON)

JavaScript allows patterns that JSON does not. JSON requires double quotes for strings and for object keys. This matters when data comes from a JS object printed without serialization.

What to check:

  • Keys like userId without quotes are invalid JSON.
  • Strings like 'abc' are invalid JSON.
  • A quick replace is risky if the data contains apostrophes inside strings.

Safer workflow:

  1. Convert the source to real JSON using a serializer (JSON.stringify).
  2. If you must edit by hand, change only the minimal required quotes.
  3. Validate after each change so you do not introduce new problems.


Control characters inside strings (newlines, tabs, and encoding)

JSON strings cannot contain raw newlines or tabs. They must be escaped as \n and \t. This happens often when log lines or multi-line text are embedded without escaping.

How to debug:

  • If the error points “near” a long string value, suspect a hidden newline.
  • Copy the problematic string into a tool and re-escape it.
  • Confirm the file is UTF-8 and not a mixed encoding.

Common pitfalls:

  • Copying text from a terminal that includes invisible characters.
  • Storing user input without escaping, then exporting it as JSON.


Non-JSON values (undefined, NaN, Infinity, comments)

These appear in JavaScript output but are not valid JSON:

  • undefined
  • NaN
  • Infinity
  • Comments (// or /* ... */)

Fix options:

  • Replace undefined with null, or omit the field entirely.
  • Replace NaN/Infinity with null or a string representation.
  • Remove comments and keep documentation elsewhere.
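
Both fix options can be shown with `JSON.stringify` on a hypothetical object: by default it coerces NaN/Infinity to null and drops undefined fields, and a replacer function makes the substitution explicit if you prefer string markers:

```javascript
// A JS object with values JSON cannot represent.
const stats = { count: NaN, max: Infinity, label: undefined, ok: true };

// Default behavior: NaN/Infinity become null, undefined fields are dropped.
console.log(JSON.stringify(stats)); // {"count":null,"max":null,"ok":true}

// A replacer makes the substitution explicit, here as string markers:
const json = JSON.stringify(stats, (key, value) =>
  typeof value === "number" && !Number.isFinite(value) ? String(value) : value
);
console.log(json); // {"count":"NaN","max":"Infinity","ok":true}
```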


A safe repair workflow (repeatable and low-risk)

This is the quickest way to fix JSON in production incidents. The goal is to make minimal, reversible changes with confidence. Treat the original payload as evidence and do not overwrite it.

Steps:

  1. Validate and capture the first error location.
  2. Fix one issue (comma, quote, escape) and validate again.
  3. Once it validates, format it to confirm structure and intent.
  4. If needed, apply schema checks (required fields and types) separately.

Why this workflow works

  • This workflow reduces guesswork by separating inspection (is it readable?) from verification (is it correct?).
  • It encourages small, reversible steps so you can pinpoint where things go wrong.
  • It preserves the original input so you can always restart from a known-good baseline.

Detailed steps

  1. Copy the raw input exactly as received (avoid trimming or reformatting).
  2. Validate and read the reported error location carefully.
  3. Fix one issue, then re-validate to see whether the error moves or disappears.
  4. If the payload embeds JSON inside JSON (a stringified field), parse the outer document first and handle the inner string separately.
  5. Validate the final output (parse it, confirm required fields and expected types).

What to record

  • Save a working sample input and the successful settings as a reusable checklist for your team.

FAQ

Why does it validate in one tool but not in another?

Some tools accept relaxed “JSON-like” inputs. Use strict validation when the target system expects strict JSON.

How do I prevent these errors long-term?

Serialize data at the boundary (JSON.stringify) and avoid manual edits. Add schema validation in your service so failures are explicit and actionable.

What is the safest way to avoid bugs?

Keep the original input, change one thing at a time, and validate after each step so the fix is reproducible.

Why does it work in one environment but not another?

Parsers differ in strictness: some accept trailing commas, comments, or relaxed quoting that others reject. Compare a known-good sample side-by-side in both environments to isolate the difference.
