How To Migrate From SQL Server To Snowflake
Over the past three years our teams have noticed a pattern. Many companies looking to migrate to the cloud go from SQL Server to Snowflake.
There are many reasons this makes sense. One of the most common benefits was that teams found Snowflake far easier to manage than SQL Server, and in almost every case they started pulling in even more data sets from other sources.
That's largely because compute was far easier to manage, there weren’t hard limits on storage, and so on. However, to get to a place where your data team can actually take advantage of Snowflake, you need a solid migration plan and process.
This is where many data teams fail.
So let’s talk about how you can migrate from SQL Server to Snowflake.
SQL Server Vs Snowflake – Migrating To Snowflake
Before jumping into migration, it’s critical to understand how SQL Server and Snowflake differ under the hood. Obviously there are technical differences, but also some philosophical differences as well.
Here are some differences you should consider when migrating from SQL Server to Snowflake.
Architecture
SQL Server is a traditional RDBMS that tightly couples storage and compute. Whether you run it on-premises or in the cloud (via Azure SQL), you’re generally dealing with a fixed machine size and resource allocation.
Snowflake separates storage and compute completely. Storage scales independently, and compute clusters (called “warehouses”) can be sized up, down, or even auto-scaled depending on your workload. This separation means you pay for exactly what you use and can optimize compute costs based on demand. I’d also like to point out that the ability to easily resize your warehouses is a huge benefit and a major difference from SQL Server, where getting more compute or storage often meant migrating to a new server altogether.
Scaling
In SQL Server, scaling typically means vertical scaling, adding more CPU, RAM, or disk to the same machine. It’s resource-constrained and often expensive at larger sizes.
In Snowflake, you can horizontally scale by running multiple compute clusters on the same data. Need to support more concurrent users? Simply spin up an additional warehouse. You don’t have to move or duplicate your data. I already referenced some of this above, but the fact that it is so easy to scale is one of the many benefits many of my clients see.
Data Types and Features
- Snowflake and SQL Server share many similar data types (e.g., VARCHAR, INT), but not all types map 1:1. Some types like DATETIMEOFFSET or XML require special handling.
- Indexes and constraints work differently.
  - SQL Server uses indexes heavily for performance.
  - Snowflake doesn’t use traditional indexes; instead, it relies on automatic micro-partitioning, metadata, and clustering keys for fast reads.
- Stored procedures in SQL Server use T-SQL, while Snowflake now supports procedural scripting (Snowflake Scripting), but with differences. You may need to rewrite complex procedures, especially if they involve dynamic SQL, loops, or error handling (see the sketch below).
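To make that last point concrete, here is a minimal sketch of the kind of rewrite involved: a simple T-SQL procedure and a rough Snowflake Scripting equivalent. The table, column, and procedure names are purely illustrative, and real procedures with dynamic SQL or error handling will need more careful translation.

```sql
-- SQL Server (T-SQL): hypothetical procedure that archives old orders
CREATE PROCEDURE dbo.archive_old_orders @cutoff DATE
AS
BEGIN
    INSERT INTO dbo.orders_archive
    SELECT * FROM dbo.orders WHERE order_date < @cutoff;

    DELETE FROM dbo.orders WHERE order_date < @cutoff;
END;

-- Snowflake Scripting: a rough equivalent (note the :variable binding syntax)
CREATE OR REPLACE PROCEDURE archive_old_orders(cutoff DATE)
RETURNS STRING
LANGUAGE SQL
AS
$$
BEGIN
    INSERT INTO orders_archive
        SELECT * FROM orders WHERE order_date < :cutoff;

    DELETE FROM orders WHERE order_date < :cutoff;

    RETURN 'archive complete';
END;
$$;
```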
Cost Model
SQL Server costs include licensing fees (per core or CALs), server costs, and ongoing maintenance. Even cloud versions involve resource reservation.
Snowflake offers a pay-for-what-you-use model:
- Storage is billed separately at a flat rate.
- Compute (warehouse usage) is billed by the second when active, with auto-suspend and auto-resume features to minimize idle costs.
Migrating isn’t just about moving data; it’s about embracing a new, cloud-optimized way of working. This is a huge paradigm shift, so be ready to go beyond lifting and shifting.
So let’s talk about your migration plan.
Plan Your Migration – Migrating From SQL Server To Snowflake
A successful migration doesn’t start with moving data; it starts with careful planning. In fact, that’s why I wrote a whole article about the problems that occur during migrations, because I have seen so many poorly planned ones.
Also, I want to point out that migrating from SQL Server to Snowflake is the perfect time to audit, prioritize, and modernize your environment. It’s not just about switching tools; it’s a great time to remove workflows that aren’t used.
That all starts with an audit.
Inventory and Audit
- Begin by cataloging everything:
  - Tables, views, and schemas
  - Stored procedures, functions, and scheduled jobs
  - ETL/SSIS workflows
  - Downstream dashboards, reports, and other consumers
- Understanding what you have is critical. Otherwise, you risk missing critical components or overcomplicating the migration.
Assess Compatibility
- Identify incompatibilities early:
  - T-SQL specifics: common table expressions (CTEs), temp tables, error handling, dynamic SQL.
  - Features Snowflake doesn’t support directly, like traditional triggers or XML-specific functionality.
- Some rewrites are inevitable. Flag areas that will need manual rework or redesign.
Prioritize
- Tackle easy wins first:
  - Migrate low-complexity tables and simple views early to validate your process.
- Use LLMs where possible (this helped cut down much of the code rework in my experience).
- Defer heavy transformations when possible (some complex transforms are key to the rest of your workflows):
  - Complex stored procedures or deeply nested ETL processes should be reviewed and possibly rearchitected.
The goal here is to gain momentum. It helps keep the project moving.
Now, truthfully, you should have a clear plan. I like to migrate workflows in order, meaning start with the raw tables, move to staging, and so on. The big lesson I have learned here is to make sure you have a clear plan and checklist. Too many times I’ve come into migration projects where no one really knew what had been migrated and what still needed to be migrated. So create a checklist!
At a minimum, you need to:
- List everything that needs to be migrated—tables, workflows and any other objects and artifacts.
- Prioritize the order of migration—some dependencies need to move first.
- Secure buy-in from beyond engineering—migrations impact more than just the data team.
So, start by listing out every object. Most databases let you access the information schema, so pull a list of all tables, stored procedures, tasks, and anything else relevant.
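For example, on the SQL Server side you can pull a quick object inventory straight from the system catalogs. This is a minimal sketch; adjust the filters to your own schemas.

```sql
-- Tables and views
SELECT table_schema, table_name, table_type
FROM INFORMATION_SCHEMA.TABLES
ORDER BY table_schema, table_name;

-- Stored procedures and functions
SELECT routine_schema, routine_name, routine_type
FROM INFORMATION_SCHEMA.ROUTINES
ORDER BY routine_schema, routine_name;

-- SQL Server Agent jobs (scheduled tasks) live in msdb
SELECT name, enabled, date_created
FROM msdb.dbo.sysjobs
ORDER BY name;
```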
For dashboards, Excel reports, or other user-facing tools, you may need to manually gather details—unless you’re lucky enough to have a lineage tool. Truthfully, this is what made migration projects at Facebook much easier; I could easily list all the dependencies. Of course, coordinating across dozens of teams was still a challenge.
If you don’t have an automated way to track dependencies, get ahead of the problem. Ask every team that relies on your data warehouse to submit a list of what they need migrated, set a deadline, and hold them to it.
A little upfront planning can save you months of headaches.
Choose Tools
You’ll need to decide how you physically move your schema and data as well as what tools you’ll rely on in the future for your data workflows.
- Native tooling – Snowflake provides migration guides and SQL translation tips.
- Third-party tools – Tools like Estuary, AWS DMS, or dbt (which you might use for your future workflows) can automate much of the movement and transformation.
- DIY migration – For full control, scripting in Python, using Azure Data Factory pipelines, or even adapting SSIS can work.
Choosing the right approach depends on your budget, timeline, and the complexity of your environment.
Data Migration Strategies
Once your planning is in place, it’s time to think about how you’ll actually move your data, and whether you’ll simply lift and shift it, or take the opportunity to redesign for the cloud.
Lift and Shift vs Modernization
The first decision is whether to lift and shift your existing environment or modernize it during the move.
A lift and shift migration means moving your databases, schemas, and data into Snowflake with minimal changes. It’s faster and simpler, but often brings over legacy designs that aren’t well-optimized for Snowflake’s cloud architecture. This approach can be a good starting point if you’re under tight deadlines or want to minimize disruption.
A modernization approach rethinks how your data should be structured. Instead of just copying tables and procedures, you redesign them to take advantage of Snowflake’s strengths. Modernization requires more upfront work but sets you up for better long-term performance and cost management.
Many teams do a combination: a basic lift and shift to get into Snowflake quickly, followed by phased modernization.
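As a rough illustration of the lift and shift mechanics, a common pattern is to export each table from SQL Server (with bcp, an ETL tool, or a managed connector), land the files in a stage, and bulk load them with COPY INTO. The schema, table, and stage names below are placeholders.

```sql
-- Snowflake: landing table and internal stage (names are illustrative)
CREATE OR REPLACE TABLE raw.orders (
    order_id     NUMBER,
    customer_id  NUMBER,
    order_date   DATE,
    total_amount NUMBER(12,2)
);

CREATE OR REPLACE STAGE raw.orders_stage
    FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"' SKIP_HEADER = 1);

-- Upload the exported files with SnowSQL (PUT is a client command, not a worksheet command):
--   PUT file:///tmp/orders_*.csv @raw.orders_stage;

-- Bulk load the staged files
COPY INTO raw.orders
FROM @raw.orders_stage
ON_ERROR = 'ABORT_STATEMENT';
```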
Testing and Validation
Once your data and code are migrated, it’s tempting to declare victory. But migration isn’t complete until you’ve tested everything thoroughly.
Good testing protects you from hidden issues that could undermine trust in the new system: slow queries, incorrect numbers, broken dashboards, or worse.
Data Validation
Start with the basics: is the data actually correct?
- Row counts: Compare the number of rows in each table between SQL Server and Snowflake.
- Checksums or hashes: Create simple checksum queries (e.g., the sum of a column) to verify that data hasn’t been corrupted or changed in transit (see the sketch after this list).
- 1:1 comparison: I’ve seen some articles suggest you run random sampling, and perhaps that’s necessary if your data is so large that you can’t compare tables 1:1. However, too often I’ve seen edge cases slip through “random sampling”. You want to be 100% sure that the data you’ve moved over matches the prior data and that the data workflows are creating the same output.
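Here is a minimal sketch of the row count and column sum checks above, using illustrative table and column names. Note that engine-specific functions like CHECKSUM_AGG (SQL Server) and HASH_AGG (Snowflake) won’t produce matching values across the two systems, so stick to engine-neutral aggregates for cross-checks.

```sql
-- Run on SQL Server
SELECT COUNT(*) AS row_count,
       SUM(CAST(total_amount AS DECIMAL(18,2))) AS amount_sum
FROM dbo.orders;

-- Run on Snowflake; both numbers should match exactly
SELECT COUNT(*) AS row_count,
       SUM(CAST(total_amount AS NUMBER(18,2))) AS amount_sum
FROM raw.orders;
```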
Performance Testing
Next, check that Snowflake queries perform as expected.
- Compare typical query times before and after migration – I usually have a list of the longest-running queries and compare the times from SQL Server to Snowflake (see the sketch below).
- Look at how resource consumption (credits) aligns with expectations.
- Identify slow queries that might need optimization, often related to how data is clustered or how queries are written.
Remember: performance tuning in Snowflake often looks different than in SQL Server (no manual index creation, more focus on partition pruning and warehouse sizing).
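On the Snowflake side, one way to build that list of long-running queries is the account usage views. This is a rough sketch; ACCOUNT_USAGE data lags by a few hours and requires a role with access to the SNOWFLAKE database.

```sql
-- Snowflake: 20 longest-running queries over the last 7 days
SELECT query_text,
       warehouse_name,
       total_elapsed_time / 1000 AS elapsed_seconds,
       start_time
FROM snowflake.account_usage.query_history
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
  AND execution_status = 'SUCCESS'
ORDER BY total_elapsed_time DESC
LIMIT 20;
```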
Functional Testing
It’s not just about the tables. Make sure all downstream systems work:
- Business Intelligence dashboards (Power BI, Tableau, Looker, etc.)
- APIs and applications that query the database
- Reporting jobs, batch processes, or alerting workflows
Wherever possible, involve business users in the testing phase.
They’ll catch things that technical validation might miss, like strange reporting numbers or missing metrics.
Security Testing
Finally, validate that your roles, users, and permissions are working properly.
Test:
- Different users seeing the correct data
- Access controls around sensitive fields (e.g., PII)
- Restrictions on administrative operations
Snowflake’s Role-Based Access Control is powerful, but misconfigurations can cause surprises; better to catch them early.
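A few spot checks you can script; the role, table, and warehouse names below are placeholders.

```sql
-- Inspect what a role is actually allowed to touch
SHOW GRANTS TO ROLE analyst_role;

-- Switch into the role and confirm sensitive columns behave as expected
USE ROLE analyst_role;
SELECT email, ssn FROM customers LIMIT 5;   -- should be masked or denied for this role

-- Confirm the role cannot perform administrative operations
DROP WAREHOUSE transform_wh;                -- expected to fail with a privilege error
```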
Go-Live and Post-Migration
After testing, it’s time to cut over from your old SQL Server environment to Snowflake.
How you manage this step will depend on your risk tolerance, business needs, and technical complexity.
Cutover Strategy
You have three primary options:
- Big Bang Cutover – Stop all writes to SQL Server, move the final delta of data, and switch everything to Snowflake at once. Best for smaller systems or when downtime is acceptable.
- Phased Cutover – Migrate in stages, perhaps one application, business unit, or data domain at a time. Often safer for large or complex environments, but requires managing dual systems during the transition.
- Parallel System Migration – It’s not uncommon to see companies juggle both the old and new system as they migrate. This is very expensive and causes a whole lot of other issues, but it can provide the most safety, or at least peace of mind.
Some teams also do a dual-run period, where both systems are kept in sync temporarily, giving stakeholders time to validate side-by-side before turning off the old system.
Monitoring and Optimization
Once live, your job isn’t done. You’ll want to monitor usage closely and tune for both performance and cost.
Ongoing Cost Management
Snowflake’s pay-per-use model is powerful, but as I always like to say, infinite compute leads to infinite cloud bills. Many data teams make the mistake of piling on more and more new data workflows.
It makes sense; suddenly you can do so much more. However, without discipline, it’s easy to start spending too much.
Here are a few tips from a recent webinar with Jeff Skoldberg on keeping an eye on your Snowflake costs.
Good cost practices include:
- Right-size your warehouses – Use the smallest warehouse (XS) as your default, then scale up as you need.
- Auto-suspend aggressively – Change every warehouse’s auto-suspend timeout to 1 minute (instead of the 5–10 minute defaults) to avoid billing for idle time or overlapping one-minute increments (see the sketch after this list).
- Improve utilization by batching work – Stack ETL jobs (or dbt models) to run concurrently on the same warehouse so you pay for fewer “turn-on” events and get higher CPU utilization.
- Optimize query frequency and schedules – Reduce unnecessary runs: e.g., skip weekend jobs (–29%), limit to business hours (–50%), or otherwise align schedules with real needs.
- Clean up unused objects – Quarterly, identify and DROP unused tables to reclaim storage costs.
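Here is a quick sketch of the warehouse settings above plus a simple credit check; the warehouse name is a placeholder.

```sql
-- Default to the smallest size, suspend after 60 seconds idle, resume on demand
ALTER WAREHOUSE transform_wh SET
    WAREHOUSE_SIZE = 'XSMALL'
    AUTO_SUSPEND = 60
    AUTO_RESUME = TRUE;

-- Simple monthly audit: credits consumed per warehouse over the last 30 days
SELECT warehouse_name,
       SUM(credits_used) AS credits_last_30_days
FROM snowflake.account_usage.warehouse_metering_history
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY warehouse_name
ORDER BY credits_last_30_days DESC;
```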
Setting up regular reviews of usage and spend, even a simple monthly audit, can save substantial amounts over time.
Picking New Data Pipeline Tools
Now we just wrote an article about setting up your data infrastructure in 2025. So you should check that one out to better understand all the various tools. But most data teams want to move away from SSIS and stored procedures when they migrate away from SQL Server.
So here are some alternatives for your data workflows, specifically if you go down the low-code path.
Low-Code Tools
In particular, many teams use tools like Estuary, Fivetran, and Portable. These tools can help you set up data pipelines through a UI, with minimal (or no) code. They connect directly to many popular tools and APIs and automatically keep the data in sync.
They specifically focus on ingestion and allow you to spend less time focusing on pulling data into Snowflake and more time on more valuable areas such as the transforms.
That’s where tools like dbt or SQLMesh can come into play.
But do check out our recent article if you’d like a deeper dive.
Final Thoughts
Migrating from SQL Server to Snowflake is a common choice for many data teams because it provides so many benefits. However, it’s far from easy, so you want to make sure you have a clear migration plan.
Think through how the cloud changes things and how to keep costs low when switching to Snowflake. Many approaches you relied on in SQL Server can be very expensive on Snowflake, but that doesn’t mean the benefits aren’t there.
If your team is looking for help migrating from SQL Server to Snowflake, then do reach out for a free consultation.
Also! Don’t forget to check the articles below.
ETLs vs ELTs: Why are ELTs Disrupting the Data Market? – Data Engineering Consulting
NetSuite to Snowflake Integration: Ultimate Guide to 2 Effective Methods
Bridging the Gap: A Data Leader’s Guide To Helping Your Data Team Create Next Level Analysis
The Data Engineer’s Guide to ETL Alternatives
What Is Snowflake – Breaking Down What Snowflake Is, How Snowflake Credits Work And More
Explaining Data Lakes, Data Lake Houses, Table Formats and Catalogs
How to cut exact scoring moments from Euro 2024 videos with SQL
How To Modernize Your Data Strategy And Infrastructure For 2025