Series: SharePoint 2019 → Subscription Edition Migration | Post 5 of 12
Reading time: ~10 minutes
It is 11:47 PM on cutover night. The migration scripts ran clean. Central Administration opened without errors. You navigate to the first service application — and it hangs. You check the ULS logs. A cascade of System.Net.Sockets.SocketException errors. Port 1433 between the App Server and the SQL cluster. Blocked. Your network team opened a firewall ticket six weeks ago. The ticket was closed as resolved. It was not.
You spend the next four hours on a conference call with a network engineer who is reading firewall logs in one window and a ticket system in another, while your migration window bleeds out.
This scenario is not hypothetical. It happens on SharePoint migrations regularly, and almost always to teams who assumed the firewall was handled. The ports you need open are not always the ones that get opened.
This post gives you the test to run before the window — a complete, automated, role-aware port connectivity check for a SPSE farm, covering every inter-server path, a remediation loop for anything that fails, and an HTML report your infrastructure team can sign off on. Run it before the migration. Run it the day before cutover. Run it two hours before go-live.
Why Port Connectivity Testing Is Non-Negotiable Before Migration

The Most Common Firewall-Related Migration Failures
SharePoint is not a single service. It is a distributed application where every component role — Web Front End, Application Server, SQL Server, Search — communicates across specific TCP and UDP ports. Block one of those channels and the failure mode depends entirely on which port was blocked.
The most common firewall-related failures during a SPSE migration follow a predictable pattern:
SQL unreachable from SharePoint servers. Farm configuration runs, Central Administration opens, and then everything that requires a database write fails. Service applications refuse to provision. Timer jobs queue but do not execute. The root cause is port 1433 blocked between one or more SharePoint servers and the SQL cluster. This is the most frequently missed port because network teams often open it only for the server that requests it — and SharePoint requires every App Server and WFE to reach SQL directly.
WCF service bus blocked (port 808). The SharePoint workflow service and several internal service application channels use WCF transport on port 808. When this port is blocked between farm servers, specific service applications fail in ways that do not clearly point to a network cause. The User Profile Service and the Managed Metadata Service are the first to exhibit symptoms.
Search component communication blocked (ports 16500–16519). The search service application provisions correctly in Central Administration. The topology looks right. Crawls start and return no results. No error in the crawl log — just silence. The search index component, query component, and analytics processing component communicate across the 16500–16519 range. These ports are almost never opened by default in enterprise firewall standards, because most network engineers have never seen them.
Why “We Checked the Firewall” Is Not Enough
Enterprise firewall standards are built for common services — HTTP, HTTPS, SQL, LDAP. They are not built for the specific topology of a SharePoint farm. The ports that get missed are the non-standard ones:
- 808 — WCF transport between farm servers. Not in any default ruleset.
- 32843–32846 — Service application communication between WFE and App Server. Unknown to most network engineers unless specifically requested.
- 16500–16519 — Search inter-component traffic. Rarely documented outside Microsoft’s own port reference.
- 9556 — SharePoint State Service. Small surface area, but required for session state across WFEs.
Getting verbal confirmation that “the firewall is open” is not a test. An automated connectivity check from the actual source server to the actual destination server on the actual port is the only test that counts. That is what this post covers.
There is also a meaningful difference between SP2019 and SPSE port requirements: SPSE ships with modern authentication (OIDC) as the default, which adds outbound HTTPS paths to your ADFS or Azure AD endpoints. If your SP2019 farm ran on classic Windows authentication, these outbound paths did not exist in your previous firewall policy. They need to be explicitly opened and tested for the SPSE target farm.
SharePoint Subscription Edition Port Requirements
Use this table as the reference you hand to your network team. Every port listed here needs to be confirmed open before a migration window starts.
| Port | Protocol | Purpose | Servers Involved |
|---|---|---|---|
| 80 | TCP | HTTP — client browser access, redirects to HTTPS | Client → WFE |
| 443 | TCP | HTTPS — primary web traffic, modern auth token exchange | Client → WFE, All SP → ADFS/IdP |
| 1433 | TCP | SQL Server — default instance connections | All SP servers → SQL |
| 1434 | UDP | SQL Server Browser — named instance resolution | All SP servers → SQL |
| 808 | TCP | WCF channel — internal service communication bus | All SP servers (internal) |
| 32843 | TCP | Service Application communication — HTTP | WFE ↔ App Server |
| 32844 | TCP | Service Application communication — HTTPS | WFE ↔ App Server |
| 32845 | TCP | Service Application communication — net.tcp | WFE ↔ App Server |
| 32846 | TCP | User Profile Replication service | WFE ↔ App Server |
| 16500–16519 | TCP | Search inter-component traffic (index, query, crawler, analytics) | Search ↔ App Server, Search internal |
| 25 | TCP | SMTP — outbound email alerts and notifications | App Server → SMTP relay |
| 9556 | TCP | SharePoint State Service — session state across WFEs | All SP servers (internal) |
| 445 | TCP | SMB/CIFS — UNC path access, file shares, backup targets | SP servers → file servers |
| 135 | TCP | RPC Endpoint Mapper — used by certain service app comms | SP servers (internal) |
| 389 | TCP | LDAP — Active Directory queries | All SP servers → Domain Controllers |
| 636 | TCP | LDAPS — AD queries over SSL | All SP servers → Domain Controllers |
Port reference: The ports in this table align with SharePoint Server farm communication requirements documented in Microsoft’s architecture and firewall planning guidance. Ports 80, 443, 1433, and 1434 are universally well-established. Ports 808, 32843–32846, 16500–16519, and 9556 are SharePoint-specific and referenced across Microsoft support documentation and TechNet. Before your migration window, cross-reference this list against the current Microsoft firewall planning documentation and your SPSE version’s release notes to confirm nothing has changed in your specific build.
Note on named SQL instances: If your SPSE farm connects to a named SQL instance rather than the default instance, port 1434 UDP (SQL Server Browser) is required for instance name resolution. Missing this port produces intermittent connection failures that are difficult to diagnose because they only occur when the SQL connection pool is rebuilt — which can appear random.
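Because a TCP connect test cannot exercise UDP 1434, the toolkit checks it separately. The idea can be sketched in a few lines — shown here in Python for illustration (the toolkit itself is PowerShell, and `probe_sql_browser` is a hypothetical name). The SQL Server Browser speaks SSRP: a single `CLNT_UCAST_EX` request byte (`0x03`) sent to UDP 1434 is answered with the server's instance list.

```python
import socket

def probe_sql_browser(host, timeout=3.0):
    """Send an SSRP CLNT_UCAST_EX request (0x03) to UDP 1434.
    Returns the instance-list string on success, or None on no response."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(b"\x03", (host, 1434))   # CLNT_UCAST_EX: list all instances
        data, _ = sock.recvfrom(65535)
        # Per the MS-SQLR wire format: 0x05 header, 2-byte length, then
        # a ';'-delimited instance list
        return data[3:].decode("ascii", errors="replace")
    except (socket.timeout, OSError):
        return None                          # blocked, filtered, or Browser stopped
    finally:
        sock.close()
```

Note that a `None` result cannot distinguish a blocked port from a stopped Browser service — which is exactly why the production symptom looks random.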
Note on SPSE and modern authentication: Outbound port 443 from all SharePoint servers to your ADFS or Azure AD endpoints is required for OIDC token exchange. If your SP2019 farm used Windows integrated auth exclusively, this outbound path may not exist in your current firewall policy. Add and test it explicitly.

Config-Driven Port Mapping with config.json
Before looking at how to run the tests, it is worth understanding the design decision behind the toolkit. Most port-testing scripts either hardcode a fixed port list or accept a flat list of servers and ports as parameters. Neither approach works well for a multi-role SharePoint farm, because the correct port set is different for each server role.
The toolkit uses a config.json file that maps server roles to their expected port sets. At runtime, Test-SPPortConnectivity.ps1 calls Get-PortsForServer to resolve the correct port list for each server before running any tests. The test scope is entirely configuration-driven.
```json
{
  "servers": [
    { "name": "SPSE-WFE01", "role": "WFE", "ports": [80, 443, 32843, 32844] },
    { "name": "SPSE-APP01", "role": "AppServer", "ports": [808, 32843, 32844, 32845, 32846] },
    { "name": "SPSE-SQL01", "role": "SQL", "ports": [1433, 1434] },
    { "name": "SPSE-SRH01", "role": "Search", "ports": [16500, 16501, 16502, 16519] }
  ]
}
```
To adapt this for your farm topology, update the name values to match your actual server hostnames and add or remove entries to reflect your server count. If you are running a co-located search server on an App Server, merge the port sets for both roles on that server entry.
For named SQL instances, remove port 1434 from the SQL server entry only if you are using the default instance and the SQL Server Browser service is disabled. If you have any named instances in your farm, leave 1434 UDP in the config and confirm UDP connectivity explicitly — the toolkit handles UDP separately from the TCP tests.
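The role-resolution step that Get-PortsForServer performs can be sketched as follows — an illustrative Python rendering (the toolkit is PowerShell, and the role-merge behaviour shown for co-located servers is an assumption about its implementation). The default port sets mirror the example config and the port table above.

```python
# Illustrative role defaults, taken from the port table and example config above.
ROLE_DEFAULTS = {
    "WFE":       [80, 443, 32843, 32844],
    "AppServer": [808, 32843, 32844, 32845, 32846],
    "SQL":       [1433, 1434],
    "Search":    list(range(16500, 16520)),   # full 16500-16519 range
}

def get_ports_for_server(config, name):
    """Resolve the port set for one server: explicit 'ports' from config,
    merged with role defaults; multiple roles merge and de-duplicate."""
    for server in config["servers"]:
        if server["name"] == name:
            roles = server.get("roles") or [server["role"]]
            ports = set(server.get("ports", []))
            for role in roles:
                ports.update(ROLE_DEFAULTS.get(role, []))
            return sorted(ports)
    raise KeyError(f"Server {name!r} not found in config")
```

A co-located search server would simply list `"roles": ["AppServer", "Search"]` and receive the union of both port sets.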
Running the Three-Phase Connectivity Test
Test-SPSE-InterServer.ps1 orchestrates the full connectivity test in three phases. Run it from any server in the farm that has network access to all other farm servers — typically an App Server.
Phase 1 — Test All Connections
The first phase iterates every server and port combination defined in config.json, calls Test-PortConnectivity for each TCP port and the equivalent UDP test for port 1434, and writes a colour-coded result to the console: green for open, red for blocked or timeout, yellow/warning for reachable but high latency.
The core of the TCP test is Test-PortConnectivity in functions.ps1. The function uses an async connect pattern with a configurable timeout — the default is 3,000 ms, long enough for a legitimately slow path (a longer route, or firewall inspection latency) to complete, while treating anything slower as blocked. A port that rejects with an instant RST fails immediately either way; a silently dropped connection costs at most the full timeout.
```powershell
function Test-PortConnectivity {
    param(
        [string]$Server,
        [int]$Port,
        [int]$TimeoutMs = 3000
    )
    $client    = New-Object System.Net.Sockets.TcpClient
    $stopwatch = [System.Diagnostics.Stopwatch]::StartNew()
    try {
        $connect = $client.BeginConnect($Server, $Port, $null, $null)
        $wait    = $connect.AsyncWaitHandle.WaitOne($TimeoutMs, $false)
        if (-not $wait) {
            # No response within the timeout window — treat as blocked
            return [PSCustomObject]@{
                Server  = $Server
                Port    = $Port
                Status  = "Blocked"
                Latency = "Timeout after ${TimeoutMs}ms"
            }
        }
        $client.EndConnect($connect)
        $stopwatch.Stop()
        return [PSCustomObject]@{
            Server  = $Server
            Port    = $Port
            Status  = "Open"
            Latency = "$($stopwatch.ElapsedMilliseconds)ms"
        }
    }
    catch {
        # Typically an immediate RST (connection actively refused)
        return [PSCustomObject]@{
            Server  = $Server
            Port    = $Port
            Status  = "Error"
            Latency = $_.Exception.Message
        }
    }
    finally {
        $client.Close()
    }
}
```
The async connect pattern matters here. A synchronous TcpClient.Connect() will block the entire script for the full TCP timeout if a port is unreachable — which on an enterprise network can be 30–60 seconds per port. With the async pattern and a 3-second timeout, the full sweep of a four-server farm completes in under two minutes even when blocked ports are present.
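The same pattern translates directly to other languages. A minimal sketch in Python for readers working outside PowerShell — `check_port` and `sweep` are illustrative names, not part of the toolkit — showing how a bounded connect timeout plus a thread pool keeps the full sweep fast even with many blocked ports:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def check_port(host, port, timeout=3.0):
    """Return (status, detail) for one TCP host:port pair."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return ("Open", f"< {int(timeout * 1000)}ms")
    except ConnectionRefusedError:
        return ("Blocked", "RST received")  # host up, port actively rejected
    except socket.timeout:
        return ("Blocked", f"Timeout after {int(timeout * 1000)}ms")  # silently dropped
    except OSError as exc:
        return ("Error", str(exc))          # DNS failure, unreachable network, etc.

def sweep(targets, timeout=3.0, workers=16):
    """targets: iterable of (host, port). Runs all checks in parallel."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {(h, p): pool.submit(check_port, h, p, timeout)
                   for h, p in targets}
    return {key: fut.result() for key, fut in futures.items()}
```

With sixteen workers, even a farm whose blocked paths all burn the full 3-second timeout finishes a few hundred checks in well under the time a single synchronous sweep would take.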
After Phase 1, you have a complete map of every open and blocked port across the farm. At this point, the correct next step depends on your environment.
Phase 2 — Firewall Remediation (Optional)
If Phase 1 finds blocked ports and you are working in an environment where the SharePoint servers manage their own Windows Firewall rules — a common setup for lab environments, freshly built SPSE target farms, or environments where Windows Firewall is not GPO-managed — Test-SPSE-InterServer.ps1 can optionally invoke firewall-rules.ps1 to add the missing inbound and outbound rules on the affected server.
The remediation step is automatic for any port marked as blocked in Phase 1 results. It adds named Windows Firewall rules using New-NetFirewallRule with the exact port, protocol, and direction required.
Important: Do not use the auto-remediation step in environments where Windows Firewall policy is managed by Group Policy or a centralised security team. In those environments, adding local rules will either be overwritten on the next GPO refresh or create a policy conflict. Instead, export the Phase 1 results and hand them to the appropriate team with the port table from this post as context. The HTML report described in the next section is the right artefact for that handoff.
Phase 3 — Re-Verify After Remediation
After firewall rules are added, Test-SPSE-InterServer.ps1 re-runs the same connectivity tests against every previously failed path. This is not optional and should not be skipped.
Creating a firewall rule and confirming that traffic now passes are two different things. A rule can be created with a typo in the port number, applied to the wrong network profile (domain vs. private vs. public), or targeted at the wrong interface. Phase 3 catches all of those errors. The only acceptable exit state is all green on the re-verify pass.
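The Phase 3 contract is small enough to state in code — an illustrative Python sketch (names are hypothetical) that re-runs only the previously failed paths and gates success on every one of them now passing:

```python
def reverify(failed, check):
    """failed: list of (host, port) pairs that were red in Phase 1.
    check: a function (host, port) -> status string, e.g. 'Open'.
    Returns (all_green, still_failed)."""
    still_failed = [(h, p) for h, p in failed if check(h, p) != "Open"]
    return (len(still_failed) == 0, still_failed)
```

The first element of the tuple is the only go/no-go signal that matters: anything other than an empty `still_failed` list means the remediation did not actually take effect.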
Certificate Validity — The Hidden Connectivity Killer
Port tests confirm that TCP channels are open. They do not confirm that HTTPS traffic will succeed end-to-end. For a SPSE farm, that distinction matters more than it did for SP2019.
SPSE ships with OIDC (OpenID Connect) as the default authentication model. Every user authentication, every service-to-service token exchange, and every connection between SharePoint and ADFS or Azure AD travels over HTTPS and depends on valid, trusted TLS certificates at both ends. An expired or misconfigured certificate does not produce a “certificate error” in the SharePoint UI — it produces an authentication failure, which shows up in ULS logs as a generic token validation error, and in the user experience as a login redirect loop.
Test-CertificateValidity in functions.ps1 checks each configured HTTPS endpoint for three conditions:
- Expiry date — flags any certificate expiring within 30 days as a warning, and expired certificates as a failure.
- Subject / SAN match — confirms that the certificate’s Subject Alternative Names include the hostname being used. A certificate issued to `sharepoint.contoso.com` that is serving traffic for `intranet.contoso.com` will cause TLS failures that look like network problems.
- Chain trust — validates that the full certificate chain is trusted on the server running the test. An internal CA certificate that is not distributed to the SPSE servers will cause chain validation failures regardless of whether the certificate itself is valid.
Farms migrating from SP2019 classic Windows authentication to SPSE OIDC are particularly exposed here. The ADFS token-signing certificate is a common problem: it may have been renewed between the time SPSE was configured and the migration date, without the SharePoint relying party trust being updated to match. Test-CertificateValidity will catch this before it becomes a 2 AM authentication outage.
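The three checks separate cleanly from the network fetch, which makes them easy to unit-test. An illustrative Python sketch of the evaluation logic (the toolkit's Test-CertificateValidity is PowerShell; the function name, simplified wildcard matching, and the 30-day threshold here are assumptions):

```python
from datetime import datetime, timedelta

def evaluate_certificate(not_after, sans, hostname, chain_trusted, warn_days=30):
    """Return a list of (severity, message) findings for one HTTPS endpoint.
    not_after: certificate expiry as a datetime; sans: list of SAN hostnames;
    chain_trusted: result of a chain validation performed elsewhere."""
    findings = []
    now = datetime.utcnow()
    if not_after <= now:
        findings.append(("FAIL", f"Certificate expired {not_after:%Y-%m-%d}"))
    elif not_after <= now + timedelta(days=warn_days):
        findings.append(("WARN", f"Certificate expires within {warn_days} days"))

    def san_matches(san):
        # Simplified match: exact hostname, or single-label wildcard (*.contoso.com)
        if san.startswith("*."):
            return "." in hostname and hostname.split(".", 1)[-1] == san[2:]
        return hostname.lower() == san.lower()

    if not any(san_matches(s) for s in sans):
        findings.append(("FAIL", f"No SAN matches {hostname}"))
    if not chain_trusted:
        findings.append(("FAIL", "Certificate chain not trusted on this server"))
    return findings
```

An empty findings list is the only pass state — the same all-green rule that applies to the port sweep.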
Reading the HTML Connectivity Report
After tests run — Phase 1 alone, or after the full three-phase cycle — report-generator.ps1 calls New-HTMLReport to produce a self-contained HTML file with the complete results.
The report uses three colour states:
Green — port open, latency acceptable. The connection completed within the timeout threshold and latency is within the normal range for the network segment. This is the only acceptable state for every port before a migration window starts.
Red — port blocked or timeout. The TCP connect attempt either received an explicit RST (port rejected by the host) or timed out (port blocked by a firewall with no response). Every red result needs to be resolved before migration begins. The report includes the source server, destination server, and port for each failure — exactly what the network team needs to identify and update the relevant firewall rule.
Yellow / Warning — reachable but latency elevated. The port is open, but the connection time is higher than expected. Common causes include a firewall performing deep packet inspection on service application traffic, a misconfigured route that adds unnecessary hops between farm servers, or a network interface running at reduced capacity. Warning states should be investigated before cutover even if they do not block farm operation — elevated latency on service application ports compounds under load.
The report is exportable. Export-TestResultsToCSV in functions.ps1 writes the same results to a CSV file that can be attached to a change request, included in a migration runbook, or shared with a security team for pre-cutover sign-off. An all-green HTML report is the sign-off artefact you want before a change management board approves a migration maintenance window.
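The shape of the report generator is simple enough to sketch. An illustrative Python rendering of the three-state output (the actual New-HTMLReport is PowerShell; the colours, markup, and blocked-first sort order here are assumptions):

```python
# Illustrative colour mapping for the three report states
COLOURS = {"Open": "green", "Blocked": "red", "Warning": "orange"}

def _row(r):
    colour = COLOURS.get(r["status"], "black")
    return (f'<tr style="color:{colour}"><td>{r["source"]}</td>'
            f'<td>{r["dest"]}</td><td>{r["port"]}</td>'
            f'<td>{r["status"]}</td><td>{r["latency"]}</td></tr>')

def render_report(results):
    """results: list of dicts with source, dest, port, status, latency.
    Returns a self-contained HTML string; blocked rows sort to the top."""
    rows = sorted(results, key=lambda r: r["status"] != "Blocked")
    blocked = sum(1 for r in results if r["status"] == "Blocked")
    body = "\n".join(_row(r) for r in rows)
    return (
        "<html><body>"
        f"<h1>Port Connectivity Report: {blocked} blocked</h1>"
        "<table><tr><th>Source</th><th>Destination</th>"
        "<th>Port</th><th>Status</th><th>Latency</th></tr>"
        f"{body}</table></body></html>"
    )
```

Sorting failures to the top is a small design choice that matters at 2 AM: the person opening the report sees the paths that need a firewall change before anything else.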
Don’t Discover Firewall Blocks at 2 AM on Cutover Night
The snippets in this post illustrate the approach: an async TCP test function, a config-driven role-to-port mapping, a three-phase test → fix → re-verify loop, and an HTML report. If you are building this for a single migration on a straightforward topology, you have enough here to assemble a working test.
If you want the production-ready toolkit — all six scripts pre-wired and tested, the config.json pre-configured for common SPSE topologies (WFE, App Server, SQL, dedicated Search), the HTML report your infrastructure team can sign off on, and a usage guide that walks through the full three-phase execution — it is available as a standalone download.
What is in the toolkit:
- `Test-SPPortConnectivity.ps1` — the main per-server port sweep
- `Test-SPSE-InterServer.ps1` — the three-phase orchestrator (test → remediate → re-verify)
- `functions.ps1` — shared functions: `Test-PortConnectivity`, `Test-CertificateValidity`, `Get-FirewallRules`, `Test-FirewallRule`, `Export-TestResultsToCSV`
- `firewall-rules.ps1` — Windows Firewall rule creation for any blocked port
- `config.json` — pre-configured for standard SPSE four-server topology, ready to customise
- `report-generator.ps1` — `New-HTMLReport` with green/red/warning output
- Usage guide — step-by-step execution instructions with expected output at each phase
The difference between a confident cutover and a 2 AM debugging session is usually whether this test ran before the window.
Interested in the complete Port & Firewall Toolkit? Contact sudharsan_1985@live.in to get all six scripts pre-wired and tested, with the full usage guide.
Conclusion
Port and firewall readiness is not a checkbox — it is a gate. Every blocked port between SharePoint server roles is a potential outage waiting to surface at the worst possible moment. The test is straightforward, the remediation loop is automatable, and the HTML report gives your infrastructure and security teams the evidence they need to approve the migration window with confidence.
Run the test early. Run it again before cutover. Resolve every red result. Then move to the next phase.
In Post #6, the focus shifts to SQL log shipping — the mechanism that keeps source and target databases in sync through a live migration window. That work depends entirely on uninterrupted, bidirectional connectivity between source SQL and target SQL on port 1433. Once your port tests are all green, you are ready to start that configuration.
Series Navigation
← Previous: Post #4 — How to Audit SharePoint Workflows Before Migration (SP 2013 & Nintex)
→ Next: Post #6 — Setting Up SQL Log Shipping for Zero-Downtime SharePoint Migration
↑ Series Home: Post #1 — The Complete Guide to SharePoint 2019 to Subscription Edition Migration
Post #5 of 12 — SharePoint 2019 → Subscription Edition Migration Series