
March 26, 2026 · 10 min read

Database Configuration Done Right — Security, Scalability, and Performance from Day One

The database is the foundation every other layer of your application depends on. Get the configuration wrong and no amount of clean code compensates for it. Here is the complete Do's and Don'ts guide.

Why Database Configuration Is the Most Important Decision You Make Early

Most architectural decisions in a new project can be changed later — at some cost. The database configuration is different. The connection pooling strategy, the security model, the indexing approach, the schema design — these decisions compound over time. A misconfigured database in week one becomes a production incident in month six.

This post covers the Do's and Don'ts across three areas: security, scalability, and performance. These apply whether you are using PostgreSQL, MySQL, or SQL Server, and whether your backend is .NET, Node.js, or anything else.

Security — Protecting the Most Sensitive Layer of Your Stack

The database holds your users' personal data, your application's state, and often your business's most sensitive information. Security failures at this layer tend to be catastrophic and public.

Do: Store Connection Strings Outside Source Code

A connection string in a config file that gets committed to git is a credential leak waiting to happen. Anyone who has ever had access to the repository — current team members, former employees, contractors — can read it.

// Wrong — connection string hardcoded
var connectionString = "Host=localhost;Database=myapp;Username=postgres;Password=supersecret";

// Correct — read from environment or secrets manager
var connectionString = builder.Configuration.GetConnectionString("DefaultConnection");
// In appsettings.json (non-sensitive local default only):
// "DefaultConnection": "Host=localhost;Database=myapp_dev;Username=appuser;Password="
// In production, the full string (password included) is overridden by the
// environment variable ConnectionStrings__DefaultConnection

In production, use a dedicated secrets manager (a wiring sketch for the Key Vault option follows the list):

  • Azure Key Vault — integrates directly with ASP.NET Core configuration
  • AWS Secrets Manager — works with ECS, Lambda, and EC2 via IAM roles
  • HashiCorp Vault — infrastructure-agnostic, widely used in Kubernetes deployments
  • Environment variables — minimum viable approach, acceptable for small deployments
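
For the Key Vault option, the wiring is a few lines. A minimal sketch, assuming the Azure.Extensions.AspNetCore.Configuration.Secrets and Azure.Identity packages; the KeyVault:Url configuration key is illustrative:

using Azure.Identity;

var builder = WebApplication.CreateBuilder(args);

if (builder.Environment.IsProduction())
{
    // DefaultAzureCredential resolves a managed identity when running in Azure,
    // so no credential is needed to fetch the secrets themselves
    builder.Configuration.AddAzureKeyVault(
        new Uri(builder.Configuration["KeyVault:Url"]!),
        new DefaultAzureCredential());
}

// A Key Vault secret named "ConnectionStrings--DefaultConnection" now surfaces
// as builder.Configuration.GetConnectionString("DefaultConnection")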

Do: Create a Least-Privilege Database User

Your application almost never needs to create tables, drop indexes, or manage users at runtime. Create a dedicated database user with only the permissions the application actually uses:

-- PostgreSQL: create a restricted application user
CREATE USER appuser WITH PASSWORD 'strong_random_password';

-- Grant only what is needed
GRANT CONNECT ON DATABASE myapp TO appuser;
GRANT USAGE ON SCHEMA public TO appuser;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO appuser;
GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO appuser;
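
-- Note: "ON ALL TABLES" covers only tables that exist right now. So that
-- tables created by future migrations are covered too, set default
-- privileges as well (run this as the user that runs migrations):
ALTER DEFAULT PRIVILEGES IN SCHEMA public
    GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO appuser;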

-- Do NOT grant:
-- GRANT ALL PRIVILEGES ON DATABASE myapp TO appuser;
-- This allows schema changes, dropping tables, and creating new users

-- MySQL equivalent
CREATE USER 'appuser'@'%' IDENTIFIED BY 'strong_random_password';
GRANT SELECT, INSERT, UPDATE, DELETE ON myapp.* TO 'appuser'@'%';
FLUSH PRIVILEGES;

Run migrations with a separate, more privileged migration user — not the runtime application user. This way, a compromised application account cannot alter the schema.
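
One way to wire this up in .NET, as a sketch: run the app with a --migrate flag in the deploy pipeline, using a separate privileged connection string. The "MigrationConnection" name and the AppDbContext options constructor are assumptions here:

// Sketch: apply migrations under the privileged user, then exit; normal runs
// use the least-privilege runtime user
if (args.Contains("--migrate"))
{
    var options = new DbContextOptionsBuilder<AppDbContext>()
        .UseNpgsql(builder.Configuration.GetConnectionString("MigrationConnection"))
        .Options;

    await using var db = new AppDbContext(options);
    await db.Database.MigrateAsync(); // DDL runs here, under the privileged account
    return;
}

// Normal startup registers the restricted appuser connection
builder.Services.AddDbContext<AppDbContext>(options =>
    options.UseNpgsql(builder.Configuration.GetConnectionString("DefaultConnection")));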

Do: Enforce SSL for All Database Connections

Even on internal networks, database traffic should be encrypted. Unencrypted connections expose credentials and data to anyone with network access — a real risk in shared cloud environments.

// PostgreSQL with SSL enforced in the connection string (Npgsql)
// VerifyFull encrypts the connection and validates the server certificate;
// older Npgsql versions used SSL Mode=Require with Trust Server Certificate=false
"Host=db.example.com;Database=myapp;Username=appuser;Password=xxx;SSL Mode=VerifyFull"

// Entity Framework Core: register the DbContext with that connection string;
// retry-on-failure also helps ride out transient network faults
services.AddDbContext<AppDbContext>(options =>
    options.UseNpgsql(connectionString, npgsqlOptions =>
        npgsqlOptions.EnableRetryOnFailure(3)
    )
);

Don't: Use Default Ports Exposed to the Internet

PostgreSQL runs on 5432, MySQL on 3306 by default. These are the first ports automated scanners probe. Your database should never be directly accessible from the public internet — it should only accept connections from your application server's private IP range or VPC.

If you need to access the database remotely for administration, use an SSH tunnel or a VPN — not an open inbound rule on the database port.
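
If you run PostgreSQL yourself, the same restriction can be enforced in pg_hba.conf. A minimal sketch, assuming the application servers live in a 10.0.0.0/16 private range (the subnet is illustrative):

# pg_hba.conf: accept SSL-only connections for appuser from the private subnet
hostssl  myapp  appuser  10.0.0.0/16  scram-sha-256

# reject anything arriving from anywhere else (rules match top-down)
host     all    all      0.0.0.0/0    reject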

Don't: Log Full Query Strings in Production

Query logs are useful for debugging slow queries. Full query logs in production often capture parameterised values — email addresses, names, financial figures — that should never appear in log files. Use slow query logging with a sensible threshold instead:

# postgresql.conf: log only queries slower than 500ms
log_min_duration_statement = 500
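
The same concern exists at the ORM level. EF Core leaves parameter values out of its logs unless sensitive data logging is switched on; make sure that switch cannot reach production. A minimal sketch:

services.AddDbContext<AppDbContext>(options =>
{
    options.UseNpgsql(connectionString);

    // EnableSensitiveDataLogging puts real parameter values (emails, amounts)
    // into logs and exception messages; restrict it to local development
    if (builder.Environment.IsDevelopment())
        options.EnableSensitiveDataLogging();
});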

Scalability — Building for Growth from the Start

A database that handles 100 concurrent users fine will often collapse under 1,000 if connection management and query design were not considered early. The fixes are not difficult, but they are much harder to apply to a running production system than to get right at the start.

Do: Configure Connection Pooling

Databases have a hard limit on the number of simultaneous connections they can handle — PostgreSQL defaults to 100, and each connection consumes memory on the database server. Without pooling, an application under load opens a new connection per request and exhausts this limit quickly.

Connection pooling maintains a pool of pre-established connections that requests borrow and return:

// Entity Framework Core — AddDbContextPool pools DbContext instances;
// the underlying database connection pool is configured via the connection string
services.AddDbContextPool<AppDbContext>(
    options => options.UseNpgsql(connectionString),
    poolSize: 128
);

// Or configure via connection string
"Host=localhost;Database=myapp;Username=appuser;Password=xxx;Maximum Pool Size=50;Minimum Pool Size=5"

For PostgreSQL specifically, PgBouncer is the industry-standard connection pooler. It sits between your application and PostgreSQL, managing thousands of client connections against a smaller number of actual database connections:

# PgBouncer configuration (pgbouncer.ini)
[databases]
myapp = host=localhost port=5432 dbname=myapp

[pgbouncer]
pool_mode = transaction       # connection released after each transaction
                              # (session state such as prepared statements is not preserved)
max_client_conn = 1000        # max total clients
default_pool_size = 25        # connections to PostgreSQL per database/user pair
min_pool_size = 5

Do: Index Foreign Keys on the Referencing Table

PostgreSQL automatically creates an index to back primary keys and unique constraints. It does not automatically index foreign key columns on the referencing table. That means looking up child rows by their parent, and every cascading delete or update on the parent, scans the whole child table unless you add the index manually:

-- orders.user_id references users.id
-- PostgreSQL does NOT automatically index orders.user_id

CREATE INDEX idx_orders_user_id ON orders (user_id);
CREATE INDEX idx_order_items_order_id ON order_items (order_id);
CREATE INDEX idx_order_items_product_id ON order_items (product_id);

Entity Framework Core migrations create indexes for foreign key properties by convention; define them explicitly in your model configuration when you manage the schema yourself or want deterministic index names:

modelBuilder.Entity<Order>()
    .HasIndex(o => o.UserId)
    .HasDatabaseName("idx_orders_user_id");

modelBuilder.Entity<OrderItem>()
    .HasIndex(i => i.OrderId)
    .HasDatabaseName("idx_order_items_order_id");

Do: Separate Read and Write Operations for Heavy Workloads

Reporting queries — dashboards, exports, analytics — are often full-table scans that run for seconds. Running these against your primary database blocks writes and degrades response times for all other operations. Read replicas handle this without affecting the primary. Note that replicas lag the primary slightly; this is usually fine for reporting, but not for read-your-own-writes flows:

// Register separate DbContext instances for read and write
services.AddDbContext<WriteDbContext>(options =>
    options.UseNpgsql(configuration["ConnectionStrings:Primary"]));

// No tracking on reads — Entity Framework won't watch these entities for changes
services.AddDbContext<ReadDbContext>(options =>
    options.UseNpgsql(configuration["ConnectionStrings:Replica"])
           .UseQueryTrackingBehavior(QueryTrackingBehavior.NoTracking));

Don't: Open Unlimited Connections

Setting Maximum Pool Size to an unlimited or very large number does not make your application scale better — it shifts the bottleneck to the database server, which runs out of memory managing thousands of simultaneous connections. Set a pool size that reflects what the database can handle, monitor pool saturation, and scale by adding application instances rather than connection count. A widely cited starting point for the total across all instances is roughly twice the database server's CPU core count, tuned from there with measurement.

Don't: Filter Large Result Sets in Application Code

This is one of the most common scalability problems in backend applications:

// Wrong — loads all orders into memory, filters in C#
var pendingOrders = await _db.Orders.ToListAsync();
var filtered = pendingOrders.Where(o => o.Status == "pending").ToList();

// Correct — filter in the database, only transfer matching rows
var pendingOrders = await _db.Orders
    .Where(o => o.Status == "pending")
    .ToListAsync();

The first version transfers every row across the database connection and allocates memory for all of them before filtering. On a table with 500,000 orders, this is the difference between a fast query and an out-of-memory exception.

Performance — Queries That Stay Fast as Data Grows

Performance problems are rarely noticed in development. Tables have hundreds of rows, queries complete in milliseconds regardless of how they are written. In production, with hundreds of thousands of rows and dozens of concurrent users, poorly written queries surface as timeouts, high CPU, and slow page loads.

Do: Verify Queries Use Indexes With EXPLAIN

An index only helps if the query planner actually uses it. Do not assume; check the plan:

-- PostgreSQL: check if an index is used
EXPLAIN ANALYZE
SELECT * FROM orders WHERE user_id = 5 AND status = 'pending';

-- What you want to see:
-- Index Scan using idx_orders_user_id on orders
-- Index Cond: (user_id = 5)
-- Filter: (status = 'pending')
-- Actual time: 0.084 ms

-- What you don't want to see:
-- Seq Scan on orders
-- Filter: ((user_id = 5) AND (status = 'pending'))
-- Rows Removed by Filter: 498,743
-- Actual time: 873.412 ms

-- If the Filter step removes many rows, a composite index on
-- (user_id, status) would cover the whole predicate

-- MySQL: same check
EXPLAIN SELECT * FROM orders WHERE user_id = 5 AND status = 'pending';
-- Check the "key" column — NULL means no index was used
-- Check the "rows" column — a high number relative to the result means a scan

Do: Select Only the Columns You Need

// Wrong — SELECT * transfers all columns including large text fields and blobs
var users = await _db.Users.ToListAsync();

// Correct — project only what the caller needs
var users = await _db.Users
    .Select(u => new { u.Id, u.Email, u.DisplayName })
    .ToListAsync();

This is especially important for tables with TEXT, JSONB, or BYTEA columns — selecting these when you only need the user's name transfers significant data across the connection unnecessarily.

Do: Avoid N+1 Queries

An N+1 query is one query to get a list, then one additional query per item in the list to fetch related data. It is one of the most common performance problems in ORM-based backends:

// Wrong — 1 query for orders + 1 query per order for the user = N+1
var orders = await _db.Orders.ToListAsync();
foreach (var order in orders)
{
    var user = await _db.Users.FindAsync(order.UserId); // separate query per order
    Console.WriteLine($"{order.Id}: {user.Email}");
}

// Correct — 1 query with a JOIN
var orders = await _db.Orders
    .Include(o => o.User)
    .ToListAsync();

For read-only scenarios where you only need specific fields from the join, a projection is even better:

var orders = await _db.Orders
    .Select(o => new OrderSummary
    {
        OrderId   = o.Id,
        UserEmail = o.User.Email,
        Status    = o.Status,
        Total     = o.Total
    })
    .ToListAsync();

Do: Keep Transactions Short

A database transaction holds locks on the rows it has modified until the transaction commits or rolls back. Long-running transactions block other operations from accessing those rows:

// Wrong — transaction stays open during external HTTP call
await using var transaction = await _db.Database.BeginTransactionAsync();
var order = await _db.Orders.FindAsync(orderId);
order.Status = "processing";
await _db.SaveChangesAsync();

// This HTTP call to a payment provider can take 3-5 seconds
// The lock on the order row is held that entire time
var paymentResult = await _paymentService.ChargeAsync(order.Total);

await transaction.CommitAsync();

// Correct — do the work outside the transaction, wrap only the database writes
var amount = await _db.Orders
    .Where(o => o.Id == orderId)
    .Select(o => o.Total)
    .SingleAsync();                                        // plain read, no transaction
var paymentResult = await _paymentService.ChargeAsync(amount); // outside transaction

await using var transaction = await _db.Database.BeginTransactionAsync();
var order = await _db.Orders.FindAsync(orderId);
order.Status = paymentResult.Success ? "completed" : "failed";
order.PaymentReference = paymentResult.Reference;
await _db.SaveChangesAsync();
await transaction.CommitAsync();

Do: Cache Frequently Read, Rarely Changed Data

Reference data — country lists, product categories, configuration values — is read thousands of times and almost never changes. Hitting the database for every request is wasteful. Cache it:

// Register Redis cache
builder.Services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = builder.Configuration["Redis:ConnectionString"];
});

// In your service
public async Task<IEnumerable<Category>> GetCategoriesAsync()
{
    const string cacheKey = "categories:all";

    var cached = await _cache.GetStringAsync(cacheKey);
    if (cached is not null)
        return JsonSerializer.Deserialize<IEnumerable<Category>>(cached)!;

    var categories = await _db.Categories.OrderBy(c => c.Name).ToListAsync();

    await _cache.SetStringAsync(
        cacheKey,
        JsonSerializer.Serialize(categories),
        new DistributedCacheEntryOptions { AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(1) }
    );

    return categories;
}
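
The flip side is invalidation: when categories change, remove the cached entry so the next read repopulates it. With IDistributedCache that is one call; the update method here is illustrative:

public async Task UpdateCategoryAsync(Category category)
{
    _db.Categories.Update(category);
    await _db.SaveChangesAsync();

    // Drop the cached list; the next GetCategoriesAsync call rebuilds it
    await _cache.RemoveAsync("categories:all");
}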

Don't: Use OFFSET Pagination on Large Tables

OFFSET-based pagination is standard but has a performance problem: to return page 100 at 20 records per page, the database still reads and discards the first 1,980 rows. This gets progressively slower as users page deeper:

-- Slow on large tables — database must scan and skip 10,000 rows
SELECT * FROM orders ORDER BY created_at DESC LIMIT 20 OFFSET 10000;

-- Faster — cursor-based pagination keyed on the sort column (indexed)
-- Client tracks the created_at value of the last row from the previous page
SELECT * FROM orders
WHERE created_at < '2026-03-01T10:00:00'
ORDER BY created_at DESC
LIMIT 20;

Cursor-based pagination is more complex to implement, but with an index on the sort column it stays fast no matter how deep into the results the user has navigated.
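
In EF Core the same pattern looks like the sketch below. It uses Id as a tie-breaker, since created_at values can collide; the method and parameter names are illustrative:

public async Task<List<Order>> GetOrdersPageAsync(
    DateTime? afterCreatedAt, int? afterId, int pageSize = 20)
{
    IQueryable<Order> query = _db.Orders.AsNoTracking();

    // Apply the cursor from the previous page, if the client supplied one
    if (afterCreatedAt is not null && afterId is not null)
        query = query.Where(o =>
            o.CreatedAt < afterCreatedAt ||
            (o.CreatedAt == afterCreatedAt && o.Id < afterId));

    return await query
        .OrderByDescending(o => o.CreatedAt)
        .ThenByDescending(o => o.Id)       // deterministic order for the tie-breaker
        .Take(pageSize)
        .ToListAsync();
}

The API then returns the last row's CreatedAt and Id to the client as the cursor for the next request.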

Putting It Together — The Configuration Checklist

  • Connection strings stored in environment variables or a secrets manager — never in source code
  • Dedicated, least-privilege database user for the application runtime
  • Separate, more privileged user for running migrations only
  • SSL enforced on all database connections
  • Database port not exposed on the public internet
  • Connection pooling configured with a sensible maximum
  • Foreign keys indexed on the referencing table
  • EXPLAIN run on all queries that touch tables expected to grow beyond 50,000 rows
  • No SELECT * on tables with wide schemas — project only needed columns
  • No N+1 query patterns — use Include or projections
  • Transactions kept short — no external calls inside a transaction block
  • Frequently read reference data cached in Redis or in-memory cache

The Foundation Everything Else Depends On

Every layer of a web application — the API, the business logic, the UI — ultimately depends on the database being fast, available, and correct. The decisions covered in this post are not advanced optimisation techniques. They are baseline configuration that prevents the most common, most painful production problems: credential leaks, connection exhaustion under load, and query timeouts as data volume grows.

Getting these right at the start of a project is a fraction of the effort it takes to fix them later, when data is live, users are affected, and the system is under pressure.

If you are setting up a new backend and want the database layer configured correctly from the start, see my .NET backend development services or full-stack web development services. Get in touch to discuss your project.