Why Structure Matters on a Shared RDP Server
A Windows Server running multiple applications via IIS is a shared environment. Every application on that server competes for the same resources, shares the same IIS instance, and — if managed carelessly — shares the same risk surface. One poorly timed deployment, one accidental iisreset, or one developer testing against the live database can affect every application running on the machine simultaneously.
The solution is not more careful developers. It is a structure that makes the risky actions impossible by default and the safe actions the path of least resistance. This post covers the four pillars of that structure: separate GitHub repositories per service, a pull-test-deploy workflow, correct IIS app pool management, and a staging environment that mirrors production.
One Repository Per Service
The most common mistake on multi-application servers is keeping multiple services in a single monorepo or — worse — deploying directly from a developer's local machine with no repository involved at all. A monorepo blurs which change belongs to which service; deploying outside version control makes it impossible to know what code is actually running on the server.
The rule is simple: each application or service gets its own GitHub repository. This means:
- A change to the authentication API cannot accidentally include an unreviewed change to the booking service
- Each service has its own deployment history — you can see exactly what changed and when
- Rolling back one service does not affect any other service on the same server
- Access control is per-repository — a contractor working on the frontend does not have access to the backend API repository
A clean server structure follows the repository structure:
C:\inetpub\wwwroot\
├── client-portal-api\ ← from github.com/org/client-portal-api
├── booking-service\ ← from github.com/org/booking-service
├── admin-dashboard\ ← from github.com/org/admin-dashboard
└── public-website\ ← from github.com/org/public-website
Each folder maps directly to one IIS site or application, and each maps directly to one GitHub repository. There is no ambiguity about what is deployed where.
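Setting up a new service on the server follows the same one-to-one mapping. Here is a minimal PowerShell sketch, assuming the repository has already been cloned into its folder and that the IIS site serves the publish output inside it; the names, port, and paths are the booking-service examples from the tree above, not fixed values.
# Create a dedicated app pool and IIS site for one service
# Names, port, and physical path are illustrative; adjust to your setup
Import-Module WebAdministration

New-WebAppPool -Name "BookingServicePool"

New-Website -Name "booking-service" `
    -PhysicalPath "C:\inetpub\wwwroot\booking-service\publish" `
    -ApplicationPool "BookingServicePool" `
    -Port 8081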
The Pull, Test, Deploy Workflow
Direct deployment from a local machine to a live server — copying files via FTP, dragging folders over RDP — removes version control from the process entirely. The server ends up in an unknown state, and the only way to know what is running is to inspect the files directly.
The correct workflow is:
# 1. RDP into the server
# 2. Navigate to the service directory
cd C:\inetpub\wwwroot\booking-service
# 3. Pull the latest from the main branch
git pull origin main
# 4. Build and verify — do not skip this step
dotnet build --configuration Release
# 5. Run any automated tests
dotnet test
# 6. Publish the release build
dotnet publish --configuration Release --output .\publish
# 7. Recycle only this application's pool (not iisreset — covered below)
# Done via IIS Manager or PowerShell
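The steps lend themselves to a small script kept on the server, so the order is never improvised under pressure. This is a sketch rather than a prescribed tool: the path and pool name are the booking-service examples used above, and the stop-publish-start sequence is one way to avoid file locks while keeping the outage limited to this one application.
# deploy.ps1 - a sketch of the pull, test, publish, recycle sequence for one service
param(
    [string]$ServicePath = "C:\inetpub\wwwroot\booking-service",
    [string]$PoolName    = "BookingServicePool"
)

Set-Location $ServicePath

git pull origin main

dotnet build --configuration Release
if ($LASTEXITCODE -ne 0) { throw "Build failed - deployment aborted" }

dotnet test
if ($LASTEXITCODE -ne 0) { throw "Tests failed - deployment aborted" }

# Stopping the pool releases file locks on the publish folder;
# only this application is offline while the new build is copied in
Import-Module WebAdministration
Stop-WebAppPool -Name $PoolName
Start-Sleep -Seconds 5   # give the worker process a moment to shut down

dotnet publish --configuration Release --output .\publish

Start-WebAppPool -Name $PoolName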
This workflow means every deployment is traceable. The git log on the server tells you exactly when each version was deployed and what changed. If something breaks, you know the last commit that was pulled, and rolling back is a single command:
# Roll back to the previous commit if the deployment caused issues
git revert HEAD
# Push the revert (or repeat it on GitHub) so main matches what is running on the server
dotnet publish --configuration Release --output .\publish
# Recycle the app pool
App Pool Recycle — Never iisreset
This is the most operationally important habit on a shared IIS server, and the one most often skipped in favour of the easier command.
iisreset stops and restarts the entire IIS service. On a server running five applications, iisreset takes all five applications offline simultaneously — including applications that had nothing to do with the deployment you just made. Any requests in flight at that moment are dropped. Any connected users lose their sessions. Scheduled tasks that were mid-execution get terminated.
The correct tool is app pool recycling. Each application in IIS has its own application pool. Recycling one pool restarts only that application, while every other application on the server continues running without interruption:
# PowerShell — recycle a specific app pool by name
Import-Module WebAdministration
Restart-WebAppPool -Name "BookingServicePool"
# Or stop and start if a full restart is needed
Stop-WebAppPool -Name "BookingServicePool"
Start-WebAppPool -Name "BookingServicePool"
# Check the pool state
Get-WebAppPoolState -Name "BookingServicePool"
You can also recycle from IIS Manager without touching the command line: expand Application Pools in the left panel, right-click the target pool, and click Recycle. The other pools are not affected.
Name your app pools explicitly after the application they serve. A pool named DefaultAppPool shared between three applications means recycling it affects all three. One application, one pool, one name:
Application Pools:
├── ClientPortalApiPool → Client Portal API site
├── BookingServicePool → Booking Service site
├── AdminDashboardPool → Admin Dashboard site
└── PublicWebsitePool → Public Website site
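A quick way to verify that mapping on an existing server is to list every site alongside the pool it runs in; any pool name that appears more than once is shared and worth splitting out:
# List each IIS site and its application pool; duplicate pool names mean a shared pool
Import-Module WebAdministration
Get-Website | Select-Object Name, ApplicationPool | Sort-Object ApplicationPool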
When should you use iisreset? Almost never in production. The only legitimate case is a server-level IIS configuration change that cannot be applied without a full service restart — a scenario that should go through a planned maintenance window, not a routine deployment.
Staging Server — Test Before It Reaches Live
A staging server is a second environment that mirrors the production server in configuration but serves no real users. Every deployment goes to staging first. It is tested there. Only then does it go to production.
The minimum viable staging setup mirrors the production directory structure on a separate machine (or a separate IIS site on the same machine, if budget does not allow a second server):
Production server (live):
C:\inetpub\wwwroot\booking-service\ ← git branch: main
Staging server (test):
C:\inetpub\wwwroot\booking-service\ ← git branch: staging
The workflow with staging in place:
- Developer pushes changes to a feature branch on GitHub
- Feature branch is merged into the staging branch via pull request
- The staging server pulls from the staging branch and deploys
- QA or the developer tests on the staging environment against the staging database
- If tests pass, the staging branch is merged into main
- The production server pulls from main and deploys
# On the staging server
cd C:\inetpub\wwwroot\booking-service
git pull origin staging
dotnet publish --configuration Release --output .\publish
Restart-WebAppPool -Name "BookingServicePool-Staging"
# On the production server — only after staging is verified
cd C:\inetpub\wwwroot\booking-service
git pull origin main
dotnet publish --configuration Release --output .\publish
Restart-WebAppPool -Name "BookingServicePool"
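"Verified" should mean more than loading the site once in a browser. Even a small scripted check after the staging deployment catches an application that published cleanly but fails to start. The hostname and health endpoint below are assumptions for illustration, not part of the setup described above:
# Minimal smoke test against the staging site before merging staging into main
# The URL and /health endpoint are illustrative; use your staging binding
$response = Invoke-WebRequest -Uri "http://staging.internal:8081/health" -UseBasicParsing -TimeoutSec 30
if ($response.StatusCode -ne 200) {
    throw "Staging smoke test failed with status $($response.StatusCode)"
}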
Staging Database vs Live Database
A staging server connected to the live database is not a staging server. It is a production server with a different application version — and it is one bad test away from corrupting live user data.
The staging database must be a separate database instance. It can run on the same database server but must be a distinct database with its own connection string:
-- Production database
Server: db.internal
Database: bookingapp_production
User: appuser_prod
-- Staging database
Server: db.internal
Database: bookingapp_staging
User: appuser_staging
Connection strings are managed via environment variables or a secrets manager — never hardcoded. The staging and production application pools run under different service accounts, and those accounts only have access to their respective databases.
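Assigning those service accounts can be done directly on the pool. A minimal sketch, assuming a dedicated staging account already exists (the account name and password placeholder are illustrative):
# Run the staging pool under its own service account
# identityType 3 = SpecificUser; the account name below is an example
Import-Module WebAdministration
Set-ItemProperty "IIS:\AppPools\BookingServicePool-Staging" -Name processModel -Value @{
    userName     = "DOMAIN\svc_booking_staging"
    password     = "<from your secrets manager>"
    identityType = 3
}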
In ASP.NET Core, the environment determines which connection string is used:
// appsettings.json — production values (empty, read from environment)
{
  "ConnectionStrings": {
    "DefaultConnection": ""
  }
}
// appsettings.Staging.json — staging-specific overrides
{
  "ConnectionStrings": {
    "DefaultConnection": "Host=db.internal;Database=bookingapp_staging;Username=appuser_staging;Password=..."
  }
}
// Set the environment variable on the staging server
ASPNETCORE_ENVIRONMENT=Staging
On the production server, ASPNETCORE_ENVIRONMENT is set to Production and the connection string is injected from the secrets manager or environment variable — never from a file checked into the repository.
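On IIS, one way to keep that setting scoped correctly is to define ASPNETCORE_ENVIRONMENT per application pool rather than machine-wide, so a staging and a production site on the same box can never pick up each other's value. A sketch using the per-pool environment variables available in IIS 10 and later, with the pool name from the examples above:
# Set ASPNETCORE_ENVIRONMENT on one app pool only (IIS 10 / Windows Server 2016 and later)
Import-Module WebAdministration
Add-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' `
    -Filter "system.applicationHost/applicationPools/add[@name='BookingServicePool-Staging']/environmentVariables" `
    -Name "." `
    -Value @{ name = 'ASPNETCORE_ENVIRONMENT'; value = 'Staging' }

# Recycle the pool so the worker process starts with the new variable
Restart-WebAppPool -Name "BookingServicePool-Staging"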
Working With Live Databases Safely
There are legitimate reasons to access the live database directly: investigating a bug that only reproduces in production, running a one-time data migration, or inspecting data referenced in a support ticket. The access itself is not inherently dangerous; a careless approach to it is.
Rules for live database access:
- Read-only by default. Connect with a read-only database user for any investigative work. Only switch to a write-capable connection when you have a specific, planned operation to execute.
- No schema changes on live without a migration. Any column addition, index creation, or table modification on the live database must go through the application's migration tooling — not applied manually via a query window. Manual changes are untraceable and unrepeatable.
- Test destructive queries on staging first. Any UPDATE or DELETE with a WHERE clause should be run on the staging database first. Confirm the row count affected matches expectations before running it on production.
- Wrap data changes in a transaction. When running manual data corrections on the live database, wrap them in a transaction and verify the results before committing:
-- Always use a transaction for manual data changes on live
BEGIN TRANSACTION;
UPDATE orders
SET status = 'refunded'
WHERE payment_reference = 'PAY-2026-00142'
AND status = 'completed';
-- Verify before committing
SELECT * FROM orders WHERE payment_reference = 'PAY-2026-00142';
-- If the result looks correct
COMMIT;
-- If anything looks wrong
ROLLBACK;
- Keep a staging copy current. Refresh the staging database from a production backup periodically. This ensures staging tests run against realistic data volumes, and any performance issues surface in staging before they hit live.
# Restore a production backup to staging (PostgreSQL example)
# --clean --if-exists drops existing staging objects first, so repeat refreshes work
pg_dump -h db.internal -U appuser_prod --clean --if-exists bookingapp_production > prod_backup.sql
psql -h db.internal -U appuser_staging bookingapp_staging < prod_backup.sql
The Complete Structure at a Glance
GitHub:
org/service-a (main, staging, feature/* branches)
org/service-b (main, staging, feature/* branches)
Staging Server:
IIS Sites: service-a-staging → ServiceAPool-Staging
service-b-staging → ServiceBPool-Staging
Environment: ASPNETCORE_ENVIRONMENT=Staging
Database: bookingapp_staging (separate DB, staging credentials)
Production Server:
IIS Sites: service-a → ServiceAPool
service-b → ServiceBPool
Environment: ASPNETCORE_ENVIRONMENT=Production
Database: bookingapp_production (live DB, production credentials)
Deployment flow:
feature/* → staging branch → staging server → verified → main → production server
What This Structure Prevents
Every rule in this structure exists because someone, somewhere, learned it the hard way:
- iisreset during business hours — takes every application offline. App pool recycle affects only the one being deployed.
- Testing against the live database — a test that inserts or deletes records corrupts real user data. Staging database prevents this entirely.
- Deploying untested code directly to production — a broken build that compiled locally may fail to start on the server. The staging deployment catches this before users see it.
- Not knowing what is running on the server — without git on the server, there is no deployment history. With git, git log --oneline -5 tells you the last five changes deployed and when.
- One repository for everything — a change to service A that accidentally includes a half-finished change to service B deploys both. Separate repositories make this structurally impossible.
These are not advanced DevOps practices. They are the baseline habits that keep a shared server predictable, deployments reversible, and live user data protected. If you are setting up a new server environment or inheriting one that lacks this structure, the git-per-service rule and the app pool recycle habit are the two highest-priority changes to make first.
If you are building or managing a .NET backend that needs a proper deployment setup, see the .NET backend development services page or get in touch.