In this comprehensive guide, I will walk you through everything you need to know about transforming SQL Server JSON to table formats. We will explore the built-in functions, the new native JSON data type introduced in SQL Server 2025, and best practices for performance.
SQL Server JSON To Table
Why You Need to Convert JSON to Table in SQL Server
Transforming this data into a table allows you to:
- Perform standard SQL queries: Use JOIN, GROUP BY, and WHERE clauses on JSON data.
- Ensure Data Integrity: Move semi-structured data into typed columns.
- Improve Reporting: Connect traditional BI tools like Power BI or Tableau to your JSON-sourced data.
Prerequisites:
Ensure your environment meets the necessary requirements.
- Compatibility Level: To use the OPENJSON function, your database must have a compatibility level of 130 or higher.
- Version: While basic JSON support exists in SQL Server 2016 and later, I highly recommend using SQL Server 2022 or 2025 to take advantage of performance optimizations and the new native JSON data type.
- Permissions: Ensure you have permission to execute SELECT and INSERT statements on your target schema.
Native JSON Data Type (SQL Server 2025)
For years, we were forced to store JSON in NVARCHAR(MAX) or VARBINARY(MAX) columns. While effective, these were essentially “black boxes” to the database engine. With the release of SQL Server 2025, we finally have a dedicated JSON data type.
In my experience, this is a game-changer. The native JSON type stores data in a binary format, which is much faster to parse than standard text. If you are building a new system in 2026, I strongly suggest using this type for any column intended to hold JSON payloads.
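As a minimal sketch, here is how a table might declare a native JSON column (the table and column names are hypothetical; this assumes the JSON data type as introduced in SQL Server 2025):

```sql
-- Requires SQL Server 2025 with native JSON support.
-- Table and column names are illustrative.
CREATE TABLE dbo.UserProfiles
(
    UserID  INT IDENTITY(1,1) PRIMARY KEY,
    Profile JSON NOT NULL  -- stored in an internal binary format, faster to parse than NVARCHAR(MAX)
);

INSERT INTO dbo.UserProfiles (Profile)
VALUES (N'{"firstName": "Ada", "plan": "premium"}');
```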
The Core Function: Using OPENJSON
The most powerful tool in your arsenal for converting JSON to a table is the OPENJSON function. It is a table-valued function that parses JSON text and returns rows and columns.
1. Default Output (No Schema Defined)
If you simply pass a JSON string to OPENJSON, it returns a default set of three columns: key, value, and type.
This is incredibly useful for ad-hoc exploration of a JSON object. For example, if I receive a profile update for a user, I can quickly see the structure of the incoming data without defining a schema.
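For example, a quick exploratory query might look like this (the sample document is made up for illustration):

```sql
DECLARE @json NVARCHAR(MAX) = N'{
    "firstName": "Ada",
    "age": 36,
    "isActive": true
}';

-- With no WITH clause, OPENJSON returns one row per property:
-- the columns are [key], [value], and [type].
SELECT [key], [value], [type]
FROM OPENJSON(@json);
```

Each top-level property of the object comes back as its own row, so you can inspect an unfamiliar payload without knowing its schema in advance.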
2. Understanding the “Type” Column
When using the default output, the type column provides a numeric value representing the data type of the JSON element. This is crucial for logic-based parsing.
| Type Value | Data Type |
| --- | --- |
| 0 | Null |
| 1 | String |
| 2 | Number |
| 3 | Boolean |
| 4 | Array |
| 5 | Object |
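As a small sketch of logic-based parsing, you can filter on the type column to keep only scalar properties (the sample document is illustrative):

```sql
DECLARE @json NVARCHAR(MAX) = N'{"name": "Widget", "tags": ["a", "b"], "price": 9.99}';

-- Keep only scalar values: 1 = string, 2 = number, 3 = boolean.
SELECT [key], [value]
FROM OPENJSON(@json)
WHERE [type] IN (1, 2, 3);
```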
3. Explicit Schema with the WITH Clause
To create a “proper” table with specific column names and data types, we use the WITH clause. This is where you define exactly how the JSON properties map to your SQL table columns.
When I am mapping data for a client like Miller Financial Group in New York, I use the WITH clause to ensure that “AccountBalance” becomes a DECIMAL and “LastLogin” becomes a DATETIME2. This level of control is what makes OPENJSON so robust.
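A sketch of that mapping might look like this (the account payload is invented for illustration):

```sql
DECLARE @json NVARCHAR(MAX) = N'[
    {"AccountId": 1001, "AccountBalance": 2500.75, "LastLogin": "2025-11-02T09:15:00"},
    {"AccountId": 1002, "AccountBalance": 310.00,  "LastLogin": "2025-11-03T17:42:00"}
]';

-- The WITH clause maps each JSON property to a typed SQL column.
SELECT AccountId, AccountBalance, LastLogin
FROM OPENJSON(@json)
WITH
(
    AccountId      INT            '$.AccountId',
    AccountBalance DECIMAL(18, 2) '$.AccountBalance',
    LastLogin      DATETIME2      '$.LastLogin'
);
```

Each array element becomes one row, already cast to the declared types, so no further conversion is needed downstream.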
Handling Nested JSON and Arrays
Real-world JSON is rarely flat. You will often encounter objects nested within objects or arrays of items. To handle this, we utilize the CROSS APPLY operator in conjunction with OPENJSON.
Shredding Arrays
Suppose you have a JSON object representing a sales order from a warehouse in Dallas. This order contains an array of “LineItems”. To turn those line items into individual rows, you would use OPENJSON to reach the “LineItems” path and then CROSS APPLY to flatten them.
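A sketch of that pattern, with an invented sales-order document:

```sql
DECLARE @order NVARCHAR(MAX) = N'{
    "OrderId": 5001,
    "Warehouse": "Dallas",
    "LineItems": [
        {"Sku": "A-100", "Qty": 2, "Price": 19.99},
        {"Sku": "B-200", "Qty": 1, "Price": 5.49}
    ]
}';

SELECT o.OrderId, o.Warehouse, li.Sku, li.Qty, li.Price
FROM OPENJSON(@order)
WITH
(
    OrderId   INT           '$.OrderId',
    Warehouse NVARCHAR(50)  '$.Warehouse',
    LineItems NVARCHAR(MAX) '$.LineItems' AS JSON  -- keep the array as a JSON fragment
) AS o
CROSS APPLY OPENJSON(o.LineItems)
WITH
(
    Sku   NVARCHAR(20)  '$.Sku',
    Qty   INT           '$.Qty',
    Price DECIMAL(9, 2) '$.Price'
) AS li;
```

The AS JSON modifier keeps LineItems as a fragment so the second OPENJSON call can shred it into one row per line item, each carrying the parent's OrderId and Warehouse.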
Deep Nesting
If the data is nested three or four levels deep—common in complex enterprise schemas—you can chain multiple CROSS APPLY statements. I often describe this to my junior developers as “unwrapping a gift.” Each OPENJSON call unwraps one layer of the hierarchy until you reach the scalar values you need.
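The "unwrapping" can be sketched with a hypothetical three-level document (order, then Shipments, then Packages):

```sql
DECLARE @order NVARCHAR(MAX) = N'{
    "OrderId": 7001,
    "Shipments": [
        {"ShipmentId": 1, "Packages": [ {"TrackingCode": "TX-001"}, {"TrackingCode": "TX-002"} ]}
    ]
}';

-- Each OPENJSON call unwraps one layer of the hierarchy.
SELECT o.OrderId, s.ShipmentId, p.TrackingCode
FROM OPENJSON(@order)
WITH (OrderId INT, Shipments NVARCHAR(MAX) AS JSON) AS o
CROSS APPLY OPENJSON(o.Shipments)
WITH (ShipmentId INT, Packages NVARCHAR(MAX) AS JSON) AS s
CROSS APPLY OPENJSON(s.Packages)
WITH (TrackingCode NVARCHAR(40)) AS p;
```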
Comparison: OPENJSON vs. JSON_VALUE vs. JSON_QUERY
It is easy to get confused between the different JSON functions available in Transact-SQL. Here is a breakdown of when to use what:
JSON_VALUE
Use this when you need to extract a single scalar value (like a string or a number) from a JSON string. It is perfect for calculated columns or quick filters in a WHERE clause.
- Example: Getting the “ZipCode” from a user’s address object.
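That example might look like this, with an invented user document:

```sql
DECLARE @user NVARCHAR(MAX) = N'{"name": "Ada", "address": {"city": "Austin", "ZipCode": "78701"}}';

-- JSON_VALUE returns a single scalar, here the string '78701'.
SELECT JSON_VALUE(@user, '$.address.ZipCode') AS ZipCode;
```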
JSON_QUERY
Use this when you need to extract an entire object or array. Unlike JSON_VALUE, it does not return a simple string; it returns a JSON fragment.
- Example: Grabbing the entire “Preferences” object to pass it to another process.
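A sketch of that example, again with an invented document:

```sql
DECLARE @user NVARCHAR(MAX) = N'{"name": "Ada", "Preferences": {"theme": "dark", "alerts": true}}';

-- JSON_QUERY returns the JSON fragment {"theme": "dark", "alerts": true},
-- not a scalar; JSON_VALUE on this path would return NULL in lax mode.
SELECT JSON_QUERY(@user, '$.Preferences') AS Preferences;
```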
OPENJSON
As we’ve discussed, use this when you need a rowset. If you want to transform a list of users into a table, OPENJSON is your only choice.
Pro Tip: In my performance tuning sessions, I always emphasize that OPENJSON with a WITH clause is generally more efficient than calling JSON_VALUE multiple times on the same document.
Performance Best Practices
Converting JSON to tables can be resource-intensive, especially with large datasets. Here are my top strategies for keeping your queries fast:
- Use Indexes on Computed Columns: If you frequently filter by a JSON property, create a non-clustered index on a computed column that uses JSON_VALUE.
- Prefer the Native JSON Type: If you are on SQL Server 2025, the binary storage of the JSON type significantly reduces CPU overhead during parsing.
- Minimize Scope: Use the path argument in OPENJSON to point directly to the array you need. Don't make the engine parse the whole document if you only need a small part.
- Watch Your Data Types: In the WITH clause, ensure your SQL data types match the JSON values as closely as possible to avoid expensive implicit conversions.
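The computed-column index pattern from the first bullet can be sketched like this (the table, payload column, and property names are hypothetical):

```sql
-- Assumes an existing dbo.Orders table with an OrderPayload column holding JSON.
ALTER TABLE dbo.Orders
ADD CustomerRegion AS JSON_VALUE(OrderPayload, '$.Customer.Region');

-- JSON_VALUE is deterministic, so the computed column can be indexed.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerRegion
ON dbo.Orders (CustomerRegion);
```

Queries that filter on JSON_VALUE(OrderPayload, '$.Customer.Region') can then seek on this index instead of parsing every row's JSON.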
Step-by-Step Tutorial: Transforming a Complex JSON
Imagine we have a dataset of customer interactions for a retail company.
Step 1: Validate the JSON
I always start by using the ISJSON() function. There is nothing more frustrating than a long-running script failing because of a missing comma in the source data.
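A minimal guard clause for this step might look like the following (the error number and message are illustrative):

```sql
DECLARE @payload NVARCHAR(MAX) = N'{"CustomerID": 42, "RequestDate": "2025-10-01"}';

-- ISJSON returns 1 for valid JSON, 0 otherwise.
IF ISJSON(@payload) = 1
    PRINT 'Valid JSON -- safe to shred.';
ELSE
    THROW 50001, 'Invalid JSON payload.', 1;
```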
Step 2: Define the Target Table
Before writing the query, I define the structure of the destination table. This includes identifying Primary Keys and Foreign Keys that might need to be linked back to the parent object.
Step 3: Map the Root Elements
I use OPENJSON to extract the top-level properties like “CustomerID” and “RequestDate”.
Step 4: Expand the Nested Collections
Using CROSS APPLY, I dive into the nested arrays—such as “OrderedProducts” or “InteractionLogs”—to create the child rows.
Step 5: Execute the Insert
Once the SELECT statement looks perfect, I wrap it in an INSERT INTO ... SELECT block to populate the production tables.
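Steps 2 through 5 can be sketched end to end like this; the table, payload shape, and names are all hypothetical:

```sql
-- Step 2: define the target table.
CREATE TABLE dbo.CustomerInteractions
(
    InteractionId INT IDENTITY(1,1) PRIMARY KEY,
    CustomerID    INT          NOT NULL,
    RequestDate   DATETIME2    NOT NULL,
    ProductSku    NVARCHAR(20) NULL
);

DECLARE @json NVARCHAR(MAX) = N'{
    "CustomerID": 42,
    "RequestDate": "2025-10-01T08:30:00",
    "OrderedProducts": [ {"Sku": "A-100"}, {"Sku": "B-200"} ]
}';

-- Steps 3-5: map the root, expand the nested array, and insert.
INSERT INTO dbo.CustomerInteractions (CustomerID, RequestDate, ProductSku)
SELECT root.CustomerID, root.RequestDate, p.Sku
FROM OPENJSON(@json)
WITH
(
    CustomerID      INT           '$.CustomerID',
    RequestDate     DATETIME2     '$.RequestDate',
    OrderedProducts NVARCHAR(MAX) '$.OrderedProducts' AS JSON
) AS root
CROSS APPLY OPENJSON(root.OrderedProducts)
WITH (Sku NVARCHAR(20) '$.Sku') AS p;
```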
Real-World Scenario
Let’s look at a conceptual scenario. A tech company receives thousands of support tickets in JSON format. Each ticket has a “Header” (ID, Date, Customer) and a “Messages” array.
By using the methods described above, I can take these messy JSON blobs and turn them into two clean tables: Tickets and TicketMessages. This allows the management team to run SQL queries to find the average response time or identify which products are generating the most tickets—tasks that would be nearly impossible with raw JSON files.
Troubleshooting Common Issues
Here are the most common hurdles I see:
- Case Sensitivity: JSON path expressions are case-sensitive. If your JSON has "firstName" but your SQL path specifies "FirstName", you will get NULL values.
- Lax vs. Strict Mode: SQL Server uses "lax" mode by default, which returns NULL if a path isn't found. Using "strict" mode throws an error instead, which is helpful for debugging but can break production scripts if the schema is inconsistent.
- NVARCHAR(MAX) Truncation: If you are using JSON_VALUE and the result is over 4000 characters, it returns NULL in lax mode. Use OPENJSON with a WITH clause for large text values.
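The first two issues can be demonstrated in one small sketch:

```sql
DECLARE @json NVARCHAR(MAX) = N'{"firstName": "Ada"}';

-- The path key's case does not match "firstName", so lax mode returns NULL...
SELECT JSON_VALUE(@json, 'lax $.FirstName') AS LaxResult;

-- ...while strict mode raises an error for the same missing path.
SELECT JSON_VALUE(@json, 'strict $.FirstName') AS StrictResult;
```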
Conclusion
Knowing how to transform JSON into tables in SQL Server is a critical skill for database professionals. With the tools provided in SQL Server—from the time-tested OPENJSON to the cutting-edge native JSON type of 2025—you have everything you need to bridge the gap between semi-structured data and relational power.
After working for more than 15 years in the software field, especially with Microsoft technologies, I have decided to share my knowledge of SQL Server. Check out the other SQL Server and database tutorials I have shared here. I am also a Microsoft MVP.