What we’re doing (in plain English)

You have a column in SQL Server that stores a big chunk of bytes (a “blob”), like a photo or a PDF. In older databases this column is often of type image. You want to grab that data in C# using Dapper and do something with it (save it to a file, send it to the browser, etc.).

Quick note: image has been deprecated since SQL Server 2005. If you can, migrate to varbinary(max), but the code below works for both types.

What you need

  • A table with a blob column, e.g.:

    CREATE TABLE Files (
        Id INT PRIMARY KEY,
        FileName NVARCHAR(200),
        FileData IMAGE -- or VARBINARY(MAX)
    );
  • C# project with:
    • System.Data.SqlClient or Microsoft.Data.SqlClient (the maintained successor; prefer it for new code)
    • Dapper (dotnet add package Dapper)
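
Need a test row to read back? Here’s a minimal sketch using Dapper’s ExecuteAsync; it assumes the Files table above, a local photo.jpg, and the connString defined in the next section. Dapper binds the byte[] parameter as varbinary, which an IMAGE column accepts on insert too.

using Dapper;
using System.Data.SqlClient; // or Microsoft.Data.SqlClient

using var conn = new SqlConnection(connString);

// any local file will do as seed data
var data = await File.ReadAllBytesAsync("photo.jpg");
await conn.ExecuteAsync(
    "INSERT INTO Files (Id, FileName, FileData) VALUES (@Id, @FileName, @FileData)",
    new { Id = 123, FileName = "photo.jpg", FileData = data });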

The basic idea with Dapper

Dapper maps columns to C# properties. For blob fields, you use byte[].

Model

public class FileRecord
{
    public int Id { get; set; }
    public string FileName { get; set; } = "";
    public byte[] FileData { get; set; } = Array.Empty<byte>();
}

Read one row by Id

using System.Data.SqlClient; // or Microsoft.Data.SqlClient
using Dapper;

var connString = "Server=.;Database=YourDb;Trusted_Connection=True;";
// use your real connection string; note that Microsoft.Data.SqlClient 4+
// encrypts by default, so local dev may also need TrustServerCertificate=True

using var conn = new SqlConnection(connString);

var sql = "SELECT Id, FileName, FileData FROM Files WHERE Id = @Id";
var file = await conn.QuerySingleOrDefaultAsync<FileRecord>(sql, new { Id = 123 });

if (file is null)
{
    Console.WriteLine("Not found");
}
else
{
    Console.WriteLine($"Got {file.FileName}, {file.FileData.Length} bytes");
}

That’s it. Dapper will put the blob into byte[] FileData.

Save the blob to a file

// FileName comes from the database; Path.GetFileName strips any path segments
// so a value like "..\..\evil.exe" can't escape the target folder
var outputPath = Path.Combine(Environment.CurrentDirectory, Path.GetFileName(file.FileName));
await File.WriteAllBytesAsync(outputPath, file.FileData);
Console.WriteLine($"Saved to {outputPath}");

Read only the blob (no model)

If you don’t need the whole row, fetch just the bytes; ExecuteScalarAsync returns the first column of the first row:

var bytes = await conn.ExecuteScalarAsync<byte[]>(
    "SELECT FileData FROM Files WHERE Id = @Id",
    new { Id = 123 }
);

if (bytes != null)
{
    await File.WriteAllBytesAsync("download.bin", bytes);
}

Read many rows

var files = await conn.QueryAsync<FileRecord>(
    "SELECT Id, FileName, FileData FROM Files WHERE Id IN @Ids",
    new { Ids = new[] { 1, 2, 3 } }
);

foreach (var f in files)
{
    // same caution as above: strip path segments from database-supplied names
    await File.WriteAllBytesAsync(Path.GetFileName(f.FileName), f.FileData);
}
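
One caveat: QueryAsync buffers the entire result set, so with many large blobs everything sits in memory at once. A sketch of one workaround, assuming the synchronous API is acceptable here, using Dapper’s buffered: false so rows materialize one at a time as you iterate:

var lazyFiles = conn.Query<FileRecord>(
    "SELECT Id, FileName, FileData FROM Files WHERE Id IN @Ids",
    new { Ids = new[] { 1, 2, 3 } },
    buffered: false); // rows are read off the connection as the loop advances

foreach (var f in lazyFiles)
{
    File.WriteAllBytes(Path.GetFileName(f.FileName), f.FileData);
}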

Common pitfalls (and easy fixes)

  1. Huge files = lots of memory
    • byte[] loads the whole file into RAM. If files are very large, consider streaming (see below) or keep files in object storage.
  2. image vs varbinary(max)
    • Code is the same, but if you can, migrate to varbinary(max): ALTER TABLE Files ALTER COLUMN FileData VARBINARY(MAX);
  3. Null blobs
    • If the column can be NULL, make the property byte[]? and check for null before saving (see the sketch after this list).
  4. Wrong column selection
    • Make sure your SELECT includes the blob column (FileData) and that the names match your property names, or use column aliases (also shown below).
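
For pitfalls 3 and 4, a small sketch: a model with a nullable blob property, plus a column alias so a differently named column still maps. The Contents column name here is hypothetical:

public class NullableFileRecord
{
    public int Id { get; set; }
    public byte[]? FileData { get; set; } // stays null when the column is NULL
}

// alias the (hypothetical) Contents column so Dapper maps it onto FileData
var row = await conn.QuerySingleOrDefaultAsync<NullableFileRecord>(
    "SELECT Id, Contents AS FileData FROM Files WHERE Id = @Id",
    new { Id = 123 });

if (row?.FileData is { } data)
{
    await File.WriteAllBytesAsync("download.bin", data);
}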

Bonus: simple streaming approach

If files are very large, one trick is to stream from SQL to disk without keeping the entire thing in memory. Dapper focuses on mapping, so here we use a plain SqlCommand reader for the blob, but still keep Dapper for other queries.

using var conn = new SqlConnection(connString);
await conn.OpenAsync();

using var cmd = new SqlCommand(
    "SELECT FileName, FileData FROM Files WHERE Id = @Id", conn);
cmd.Parameters.AddWithValue("@Id", 123);

using var reader = await cmd.ExecuteReaderAsync(System.Data.CommandBehavior.SequentialAccess);
if (await reader.ReadAsync())
{
    // SequentialAccess means columns must be read in SELECT order:
    // FileName (0) first, then FileData (1)
    var fileName = Path.GetFileName(reader.GetString(0)); // strip any path segments
    var outputPath = Path.Combine(Environment.CurrentDirectory, fileName);

    await using var fs = new FileStream(outputPath, FileMode.Create, FileAccess.Write, FileShare.None);

    const int chunkSize = 81920; // 80 KB
    var buffer = new byte[chunkSize];
    long fieldOffset = 0;
    long bytesRead;

    // GetBytes copies the next chunk into the buffer and returns 0 once the blob is exhausted
    while ((bytesRead = reader.GetBytes(1, fieldOffset, buffer, 0, buffer.Length)) > 0)
    {
        await fs.WriteAsync(buffer.AsMemory(0, (int)bytesRead));
        fieldOffset += bytesRead;
    }
}
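
Not in the original example, but the manual GetBytes loop has a shorter equivalent: SqlDataReader exposes the column as a Stream via GetStream, which with SequentialAccess streams straight off the wire. A sketch:

using var conn = new SqlConnection(connString);
await conn.OpenAsync();

using var cmd = new SqlCommand("SELECT FileData FROM Files WHERE Id = @Id", conn);
cmd.Parameters.AddWithValue("@Id", 123);

using var reader = await cmd.ExecuteReaderAsync(System.Data.CommandBehavior.SequentialAccess);
if (await reader.ReadAsync() && !await reader.IsDBNullAsync(0))
{
    await using var fs = File.Create("download.bin");
    using var blob = reader.GetStream(0);
    await blob.CopyToAsync(fs); // copies in chunks; the whole blob never sits in memory
}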

Use this only when needed; for normal-sized files, the Dapper byte[] approach is simpler.

Quick checklist

  • Map blob column to byte[].
  • Use parameterized queries (@Id) to avoid SQL injection.
  • Save with File.WriteAllBytesAsync (sanitize database-supplied file names first).
  • Consider streaming for very large files.
  • Prefer varbinary(max) over image for new work.