SQL Server Import Data from CSV into Existing Table

If you are looking to master how to import data from a CSV file into an existing SQL Server table, you are in the right place. In this article, I will walk you through the most robust, efficient, and professional methods to get that data where it belongs: inside your database. I’ll cover everything from the visual Import and Export Wizard to the high-performance T-SQL BULK INSERT command.

Let’s dive in.


Why Importing CSV Data Correctly Matters

Before we start typing commands, it is crucial to understand why we need specific methods. In my experience working with large datasets, data integrity is paramount. A simple drag-and-drop might work for ten rows, but it will not hold up against ten million.

Bad imports can lead to:

  • Data Type Mismatches: Dates formatted as MM/DD/YYYY getting confused with DD/MM/YYYY.
  • Truncation Errors: Text strings being cut off because the destination column is too short.
  • Performance Bottlenecks: Locking up the production database because the import method wasn’t optimized.

We will avoid all of these.

Prerequisites: Setting the Stage

For this tutorial, I am assuming you have Microsoft SQL Server Management Studio (SSMS) installed and running. I will use a scenario that I see often: importing a monthly sales report.

Let’s assume we have a CSV file located at C:\Raj\Test3.csv.

The CSV Structure:

Code snippet

TransactionID,CustomerName,State,SaleAmount,SaleDate
1001,John Smith,NY,450.00,2025-12-01
1002,Jane Doe,CA,120.50,2025-12-02
1003,Robert Brown,TX,950.00,2025-12-03

The Target Table:

Before importing, we need an existing table. If you haven’t created one yet, run this script in your query window. I always recommend using explicit data types that match your data (like DATE and DECIMAL).

SQL

CREATE TABLE dbo.MonthlySales (
    TransactionID INT PRIMARY KEY,
    CustomerName VARCHAR(100),
    State CHAR(2),
    SaleAmount DECIMAL(10, 2),
    SaleDate DATE
);

Now that our destination is ready, let’s look at the methods.

Method 1: The SQL Server Import and Export Wizard (No Code Required)

If you prefer a graphical interface (GUI) or if you are doing a one-off import where you don’t want to write a script, the Import and Export Wizard is your best friend. It’s built right into SSMS and handles a lot of the heavy lifting, like mapping columns, for you.

Step-by-Step Walkthrough

  1. Launch the Wizard: Open SSMS and connect to your database instance. In the Object Explorer, right-click on your specific database (e.g., SalesDB), go to Tasks, and then select Import Data…. This opens the wizard.
  2. Choose a Data Source: On the first screen, you need to tell SQL Server where the data is coming from.
    • Data Source: Select Flat File Source from the dropdown.
    • File Name: Browse to C:\Raj\Test3.csv.
    • Text Qualifier: If your text columns are wrapped in quotes (e.g., "New York, NY"), enter a double quote " here. If not, leave it as <none>.
    • Tip: Always click “Columns” in the left-hand menu to verify the preview looks correct.
  3. Choose a Destination:
    • Destination: Select Microsoft OLE DB Driver for SQL Server (or the older “SQL Server Native Client”).
    • Server Name: Verify your local or remote server name is correct.
    • Database: Ensure SalesDB is selected.
  4. Map Source to Destination: Select the table [dbo].[MonthlySales]. The wizard is smart enough to guess mappings, but you should still click Edit Mappings and confirm that TransactionID in the CSV maps to TransactionID in the SQL table. Critical Check: Leave “Delete rows in destination table” unchecked if you want to append data; check it only if you want to wipe the table clean and reload.
  5. Run the Package: Click Next until you reach the end, then click Finish. You will see a progress bar showing the rows transferring.
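Once the wizard reports success, it is worth sanity-checking the load from a query window. A quick count and spot-check (assuming the dbo.MonthlySales table created earlier):

SQL

-- Confirm the row count matches the CSV (minus the header row)
SELECT COUNT(*) AS ImportedRows
FROM dbo.MonthlySales;

-- Spot-check a few rows and verify the date conversion
SELECT TOP (5) TransactionID, CustomerName, SaleAmount, SaleDate
FROM dbo.MonthlySales
ORDER BY TransactionID;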

Method 2: T-SQL BULK INSERT (The Professional Standard)

While the wizard is great, it is not reproducible; you cannot easily schedule it to run every night at 3 AM. For that, we use T-SQL commands. The BULK INSERT statement is the fastest way to import CSV data into an existing SQL Server table. It reads the file directly from disk and pushes it into the table with minimal logging.

I use this method for 90% of my automation scripts.

Basic Syntax

Here is the script I would write to import our sales data.

SQL

BULK INSERT dbo.MonthlySales
FROM 'C:\Raj\Test3.csv'
WITH (
    FORMAT = 'CSV',
    FIRSTROW = 2,
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    TABLOCK
);

Understanding the Parameters

To show authority in your scripts, you need to understand what these switches actually do. I’ve compiled a table of the most important parameters you will use.

| Parameter | Description | Why I Use It |
| --- | --- | --- |
| FIRSTROW | Specifies the number of the first row to load. | Essential for skipping headers; usually set to 2. |
| FIELDTERMINATOR | The character separating columns. | Usually a comma (,) for CSVs, or a pipe (\|) for pipe-delimited files. |
| ROWTERMINATOR | The character separating rows. | \n is standard, but some Windows files need \r\n. |
| TABLOCK | Specifies that a table-level lock is acquired. | Performance booster: allows parallel loading and reduces logging overhead. |
| ERRORFILE | Path to a file where rejected rows are written. | Lifesaver for debugging bad data without stopping the whole import. |
| KEEPIDENTITY | Preserves identity values from the imported file. | Use this if your table has an IDENTITY column but you want to force your own IDs. |
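To sketch how these switches combine, here is a hedged variation on the earlier script for a Windows-generated file: \r\n row terminators plus KEEPIDENTITY (which only has an effect if the target column is actually declared as IDENTITY, which the sample table above is not):

SQL

BULK INSERT dbo.MonthlySales
FROM 'C:\Raj\Test3.csv'
WITH (
    FORMAT = 'CSV',
    FIRSTROW = 2,            -- skip the header row
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\r\n',  -- Windows-style line endings
    KEEPIDENTITY,            -- keep the IDs supplied in the file
    TABLOCK
);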

Handling Errors Like a Pro

Real-world files are messy. The MAXERRORS and ERRORFILE options let the import tolerate a limited number of bad rows while capturing them for review:

SQL

BULK INSERT dbo.MonthlySales
FROM 'C:\Data\Imports\USA_Sales_Nov2024.csv'
WITH (
    FORMAT = 'CSV',
    FIRSTROW = 2,
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    MAXERRORS = 10,
    ERRORFILE = 'C:\Data\Imports\Logs\ImportErrors.csv'
);

With this script, if fewer than 10 rows fail, the script continues, and the bad rows are dumped into ImportErrors.csv for me to analyze later.
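If you would rather fail fast with a readable message than tolerate bad rows, the load can also be wrapped in TRY…CATCH. A minimal sketch, reusing the sample file path from earlier:

SQL

BEGIN TRY
    BULK INSERT dbo.MonthlySales
    FROM 'C:\Raj\Test3.csv'
    WITH (FORMAT = 'CSV', FIRSTROW = 2, TABLOCK);
END TRY
BEGIN CATCH
    -- Surface the underlying bulk-load error in the job history or output
    PRINT CONCAT('Import failed: ', ERROR_MESSAGE());
    THROW;
END CATCH;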

Method 3: OPENROWSET (The Flexible Ad-Hoc Method)

Sometimes, you don’t want to insert the data immediately. Maybe you want to SELECT from the CSV file, join it with an existing SQL table, and then insert the result. This is where OPENROWSET shines. It treats a text file like a SQL table.

I often use this when I need to filter data before it hits the database. For example, maybe I only want to import sales from “California” (CA).

The Syntax

You must enable “Ad Hoc Distributed Queries” on your server first (ask your SysAdmin if this is allowed, as it has security implications).

SQL

-- Enable the feature (One time setup)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;

Once enabled, you can run a query like this:

SQL

INSERT INTO dbo.MonthlySales (TransactionID, CustomerName, State, SaleAmount, SaleDate)
SELECT TransactionID, CustomerName, State, SaleAmount, SaleDate
FROM OPENROWSET(
    BULK 'C:\Data\Imports\USA_Sales_Nov2024.csv',
    FORMATFILE = 'C:\Data\Imports\SalesFormat.xml',
    FIRSTROW = 2
) AS T;

Note: OPENROWSET generally requires a format file (XML or non-XML) to map the columns correctly, even if the CSV is simple. It is more complex to set up than BULK INSERT but offers superior flexibility for transformations.
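To make the filtering idea concrete, restricting the load to California rows is just a WHERE clause on top of the query above. A sketch, assuming the same (hypothetical) format file correctly defines the five columns:

SQL

INSERT INTO dbo.MonthlySales (TransactionID, CustomerName, State, SaleAmount, SaleDate)
SELECT TransactionID, CustomerName, State, SaleAmount, SaleDate
FROM OPENROWSET(
    BULK 'C:\Data\Imports\USA_Sales_Nov2024.csv',
    FORMATFILE = 'C:\Data\Imports\SalesFormat.xml',
    FIRSTROW = 2
) AS T
WHERE T.State = 'CA';  -- only California sales reach the table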

Method 4: PowerShell Automation (The Modern DevOps Approach)

In modern American tech companies, we automate everything. If you are managing multiple servers or need to loop through a folder of 50 files, T-SQL can be clunky. PowerShell is the perfect tool for this.

I use the dbatools module (a community standard) or the native SqlBulkCopy class. Here is a simplified script that combines the two: ConvertTo-DbaDataTable from dbatools shapes the CSV rows, and SqlBulkCopy writes them to the server.

PowerShell

# Define variables
$csvFilePath = "C:\Data\Imports\USA_Sales_Nov2024.csv"
$sqlServer = "localhost"
$database = "SalesDB"
$table = "dbo.MonthlySales"

# Read CSV Data (ConvertTo-DbaDataTable ships with the dbatools module;
# SqlBulkCopy cannot consume the raw PSObjects that Import-Csv emits)
$dataTable = Import-Csv $csvFilePath | ConvertTo-DbaDataTable

# Open SQL Connection
$connectionString = "Server=$sqlServer;Database=$database;Integrated Security=True;"
$bulkCopy = New-Object Data.SqlClient.SqlBulkCopy($connectionString)
$bulkCopy.DestinationTableName = $table

# Map Columns (Source CSV Header -> Destination SQL Column)
$bulkCopy.ColumnMappings.Add("TransactionID", "TransactionID")
$bulkCopy.ColumnMappings.Add("CustomerName", "CustomerName")
$bulkCopy.ColumnMappings.Add("State", "State")
$bulkCopy.ColumnMappings.Add("SaleAmount", "SaleAmount")
$bulkCopy.ColumnMappings.Add("SaleDate", "SaleDate")

# Execute Write
try {
    $bulkCopy.WriteToServer($dataTable)
    Write-Host "Successfully imported data to SQL Server." -ForegroundColor Green
}
catch {
    Write-Host "Error importing data: $_" -ForegroundColor Red
}
finally {
    $bulkCopy.Close()   # always release the connection
}

This script is robust: it maps columns explicitly, so even if the columns in the CSV are in a different order than in the database table, the import still works.

Common Pitfalls and Troubleshooting

Even with years of experience, I still run into issues. Here are the most common “gotchas” when you import CSV data into an existing SQL Server table, and how to fix them.

1. The “File In Use” Error

Symptom: You run BULK INSERT and get an error saying the file cannot be opened.

Cause: You likely have the CSV open in Excel. Excel puts a lock on the file.

Fix: Close the file in Excel. If the file is on a network share, ensure the SQL Server service account (e.g., NT SERVICE\MSSQLSERVER) has read permissions on that folder.

2. Date Format Disasters

Symptom: Conversion failed when converting date and/or time from character string.

Cause: The CSV has dates like 31/01/2024 (European style) but your server is set to US English, expecting 01/31/2024.

Fix: Use the SET DATEFORMAT dmy; command before your insert statement, or ensure your CSV generation process sticks to the ISO standard YYYY-MM-DD, which works universally.
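The DATEFORMAT setting is session-scoped, so for the first fix you can set it immediately before the load. A sketch using the sample file path from earlier:

SQL

-- Tell this session to parse ambiguous date strings as day/month/year
SET DATEFORMAT dmy;

BULK INSERT dbo.MonthlySales
FROM 'C:\Raj\Test3.csv'
WITH (FORMAT = 'CSV', FIRSTROW = 2);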

3. Identity Insert Issues

Symptom: Cannot insert explicit value for identity column in table…

Cause: Your table has an auto-incrementing ID, but your CSV also has an ID column.

Fix: If you want to use the IDs from the CSV, you must run SET IDENTITY_INSERT dbo.MonthlySales ON; before the insert and turn it OFF afterward. Note that BULK INSERT has a specific KEEPIDENTITY flag for this.
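For plain INSERT statements, the ON/OFF pattern looks like this. Note that this sketch assumes a variant of the table where TransactionID is declared as an IDENTITY column (the sample CREATE TABLE above does not declare one):

SQL

SET IDENTITY_INSERT dbo.MonthlySales ON;

-- An explicit column list is required while IDENTITY_INSERT is ON
INSERT INTO dbo.MonthlySales (TransactionID, CustomerName, State, SaleAmount, SaleDate)
VALUES (1001, 'John Smith', 'NY', 450.00, '2025-12-01');

SET IDENTITY_INSERT dbo.MonthlySales OFF;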

Best Practices for Performance

  • Drop Indexes Before Loading: If you are loading millions of rows, non-clustered indexes slow down the insert process significantly. It is often faster to drop the indexes, perform the BULK INSERT, and then recreate them.
  • Batch Your Inserts: In the BULK INSERT command, use the BATCHSIZE option. For example, BATCHSIZE = 10000. This commits the data in chunks. If the import fails at row 50,000, the previous 40,000 are already saved.
  • Use Minimal Logging: Ensure your database is in “Simple” or “Bulk-Logged” recovery model during the load operation to prevent your transaction log from exploding in size.
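Putting the batching advice into a script, a sketch with illustrative sizes (tune BATCHSIZE to your hardware and transaction log capacity):

SQL

BULK INSERT dbo.MonthlySales
FROM 'C:\Raj\Test3.csv'
WITH (
    FORMAT = 'CSV',
    FIRSTROW = 2,
    BATCHSIZE = 10000,  -- commit every 10,000 rows
    TABLOCK             -- with Simple/Bulk-Logged recovery, enables minimal logging
);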

Conclusion

Importing data doesn’t have to be a headache. Whether you choose the user-friendly Import Wizard for a quick one-off task, the powerful BULK INSERT for your production scripts, or PowerShell for complex automation, you now have the tools to handle any CSV that comes your way.

Remember, the key to a successful import is preparation: verify your CSV format, ensure your target table structure matches, and always test with a small subset of data before opening the floodgates.
