Designing Reliable and Cost-Effective SQL Server Backups to Azure
Introduction
Backups are one of those things we rarely think about—until the moment we need them the most.
In a recent scenario, I had to design a solution to move and retain large SQL Server backups (around 2TB) from on-premises to Azure for a short duration. What initially seemed simple quickly evolved into a design problem involving cost, performance, automation, and reliability.
This post walks through the thinking, decisions, and lessons learned.
The Problem
- Multiple SQL Server databases
- One large database (~2TB)
- Daily backups required for 2 weeks
- Fast restore capability needed
- Cost-sensitive solution
This wasn’t just backup—it was controlled, intentional data movement.
The Questions That Matter
This is where the real engineering begins:
- Do we need Hot, Cool, or Archive storage?
- What if restore is needed immediately?
- Should we copy .bak files or back up directly to Azure?
- How do we automate safely?
- What happens if authentication expires?
- How do we handle large databases efficiently?
These questions shape architecture more than tools do.
The Approach
After evaluating options, the solution became clear:
- Use native SQL Server Backup-to-URL
- Store in Azure Blob Storage (Cool tier)
- Automate with SQL Server Agent
- Use SAS-based authentication
- Optimize large DB backups with striping
Implementation
🔹 Azure Setup
- Storage Account → Standard + LRS
- Private Blob Container
- SAS Token with:
- Write
- Create
- List
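The Azure-side setup can be scripted end to end. A minimal sketch with the Azure CLI — the account, resource group, and container names are placeholders, and the expiry date is only an example; adjust it to your retention window:

```shell
# Create the storage account (Standard performance, LRS redundancy)
az storage account create \
  --name mystorageaccount \
  --resource-group my-rg \
  --sku Standard_LRS

# Create a private blob container for the backups
az storage container create \
  --account-name mystorageaccount \
  --name sqlbackups \
  --auth-mode login

# Generate a SAS token with Create, Write, and List permissions
az storage container generate-sas \
  --account-name mystorageaccount \
  --name sqlbackups \
  --permissions cwl \
  --expiry 2030-01-01T00:00Z \
  --https-only \
  --output tsv
```

Note that these commands require an authenticated Azure CLI session; they are a sketch of the setup, not a turnkey script.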
SQL Credential
CREATE CREDENTIAL [https://<storageaccount>.blob.core.windows.net/<container>]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<SAS_TOKEN>';
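Once created, it is worth confirming the credential registered as expected before running the first backup. A quick check against the standard `sys.credentials` catalog view:

```sql
-- The name column must exactly match the container URL (no trailing slash)
SELECT name, credential_identity, create_date
FROM sys.credentials;
```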
For SQL Server 2014
SQL Server 2014 was built before "Shared Access Signatures" (SAS) were fully integrated into the engine's backup logic. Unlike later versions (2016+), SQL Server 2014 generally expects you to use your Storage Account Access Key and a slightly different syntax.
Here is how to fix the "Specified without a Credential" error on this version:
1. Change the Credential to Use the Access Key
Instead of a SAS token, go to the Azure Portal, navigate to your Storage Account > Access Keys, and copy Key1.
Then, recreate your credential like this:
-- Replace 'YourAccountName' with the actual name of your storage account
-- Replace 'YourAccessKey' with the long key from the portal
CREATE CREDENTIAL [MyAzureCredential]
WITH IDENTITY = 'YourAccountName',
SECRET = 'YourAccessKey';
Note: In SQL 2014, the IDENTITY is your account name, not 'Shared Access Signature'.
2. Explicitly Reference the Credential in the Backup Command
In SQL 2014, the engine doesn't always "auto-match" the URL to the credential. You must tell it which one to use with the WITH CREDENTIAL clause:
BACKUP DATABASE [YourDatabaseName]
TO URL = 'https://youraccount.blob.core.windows.net/yourcontainer/yourbackup.bak'
WITH CREDENTIAL = 'MyAzureCredential',
STATS = 10;
Backup Command (SQL Server 2016 and Later)
BACKUP DATABASE YourDatabase
TO URL = 'https://<storageaccount>.blob.core.windows.net/<container>/YourDatabase.bak'
WITH COMPRESSION, STATS = 10;
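Scheduling was handled by SQL Server Agent. A minimal sketch of a daily job using the standard `msdb` stored procedures — the job, schedule, and database names are placeholders:

```sql
USE msdb;
GO

-- Create the job
EXEC dbo.sp_add_job
    @job_name = N'Daily Backup to Azure';

-- Add a step that runs the backup-to-URL command
EXEC dbo.sp_add_jobstep
    @job_name  = N'Daily Backup to Azure',
    @step_name = N'Backup YourDatabase',
    @subsystem = N'TSQL',
    @command   = N'BACKUP DATABASE YourDatabase
TO URL = ''https://<storageaccount>.blob.core.windows.net/<container>/YourDatabase.bak''
WITH COMPRESSION, STATS = 10;';

-- Run every day at 01:00
EXEC dbo.sp_add_schedule
    @schedule_name = N'Daily 1 AM',
    @freq_type = 4,            -- daily
    @freq_interval = 1,
    @active_start_time = 010000;

EXEC dbo.sp_attach_schedule
    @job_name = N'Daily Backup to Azure',
    @schedule_name = N'Daily 1 AM';

-- Target the local server
EXEC dbo.sp_add_jobserver
    @job_name = N'Daily Backup to Azure';
GO
```

In practice the job step should append a timestamp to the blob name (or use WITH FORMAT), since a backup to an existing blob will otherwise fail on day two.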
Handling Large Databases
For large databases (~2TB):
BACKUP DATABASE BigDB
TO
URL = '..._1.bak',
URL = '..._2.bak',
URL = '..._3.bak',
URL = '..._4.bak'
WITH COMPRESSION, STATS = 10;
- Improves throughput by writing the stripes in parallel
- Works around per-blob size and bandwidth bottlenecks (at default settings, a single block blob cannot hold a multi-terabyte backup)
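Since the whole design is built around restore readiness, the reverse path deserves testing too. A sketch of restoring the striped backup — every stripe URL must be listed, mirroring the abbreviated URLs in the backup command above:

```sql
RESTORE DATABASE BigDB
FROM
    URL = '..._1.bak',
    URL = '..._2.bak',
    URL = '..._3.bak',
    URL = '..._4.bak'
-- REPLACE overwrites the existing database; omit it (and restore
-- under a different name) for a side-by-side test restore
WITH REPLACE, STATS = 10;
```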
Challenges & Lessons
This is where things get real:
- Credential mismatch → Msg 3289
- SAS token formatting (the secret must not include the leading ?)
- Azure AD vs SAS confusion
- Network throughput limitations
- Large DB performance tuning
- A single slash in a credential name can stop everything.
Cost Optimization
- Cool tier → balanced choice
- Archive tier → avoided (slow restore)
- Lifecycle policy → auto delete after retention
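The auto-delete retention can be expressed as a lifecycle management policy on the storage account. A sketch of the policy JSON, assuming a 14-day retention and a `sqlbackups/` prefix (both placeholders):

```json
{
  "rules": [
    {
      "name": "delete-old-backups",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "sqlbackups/" ]
        },
        "actions": {
          "baseBlob": {
            "delete": { "daysAfterModificationGreaterThan": 14 }
          }
        }
      }
    }
  ]
}
```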
Result: low cost + high readiness
Key Takeaways
- Design backups for restore, not storage
- Automation reduces operational risk
- Cost optimization must not impact recovery
- Test restore, not just backup
- Small misconfigurations can cause major failures