TL;DR: Before any migration work begins, run three PowerShell scripts against your SharePoint 2019 farm: 1.DB_List.ps1 to enumerate every content database, 2.DB_Health.ps1 to check database health (PASS/WARN/FAIL), and Get-SPInventoryReport.ps1 to produce a full HTML report and CSV exports covering web applications, site collections, lists, and permissions. This inventory is your migration baseline. Everything else — wave planning, cutover sizing, risk triage — runs off this data.
If you have read Post #1 in this series, you know the migration from SharePoint 2019 to Subscription Edition is a database-attach upgrade — not a lift-and-shift, not a cloud sync, not a content migration tool exercise. That means everything depends on knowing exactly what databases you have, what is inside them, and whether they are healthy enough to move. Running a complete SharePoint farm inventory before migration is how you build that picture — systematically, automatically, and before anything can go wrong.
Most migrations that hit a wall during cutover share one root cause: someone started moving databases before they had a complete, accurate picture of the farm. A content database attached to SQL Server but not registered in Central Admin. A site collection with broken permission inheritance on 12,000 items. Classic-mode authentication on a web application that does not exist in Subscription Edition. These are not edge cases — they are the norm on farms that have been running for five or more years.
This post gives you a systematic, automated approach to answering every question that will come up in a migration kickoff meeting before it gets asked. Run the inventory first. Everything else moves faster because of it.
What a Farm Inventory Actually Covers
Before running any script, understand what you are collecting and why each category matters. Skipping a category is not a time-saver — it is a deferred problem.
Web Applications and Site Collections
Your inventory needs a complete map of every web application: URL, port, authentication type (Windows Claims vs. Classic vs. Kerberos), host header vs. path-based configuration, and managed paths. In Subscription Edition, Classic-mode authentication does not exist. If any of your web applications are still running Classic auth, you need to know now — converting them is a pre-migration task, not a post-migration cleanup.
Missing a web application in your inventory means missing every site collection, every database, and every permission set underneath it. That is not a gap you can fill during a cutover window.
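A minimal sketch of this check, assuming the Microsoft.SharePoint.PowerShell snap-in is loaded on a farm server — illustrative only, not the shipped inventory script:

```powershell
# List every web application's URL and authentication mode, flagging Classic.
# UseClaimsAuthentication is $false on Classic-mode web applications.
Get-SPWebApplication | ForEach-Object {
    [PSCustomObject]@{
        Url      = $_.Url
        AuthMode = $(if ($_.UseClaimsAuthentication) { "Claims" } else { "Classic" })
    }
} | Format-Table -AutoSize
```

Any row showing `Classic` is a pre-migration conversion task.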
Content Databases
Every content database must be accounted for — including any that exist in SQL Server but are not visible in Central Admin. This happens more often than you would expect: a DBA restores an old backup for a one-off request, it never gets cleaned up, and it sits in SQL for years. When you run the database-attach migration, that database is either a liability (if you accidentally pick it up) or a mystery (if no one can explain what is in it).
1.DB_List.ps1 addresses this directly. It enumerates every content database attached to the farm and produces a table with four columns: database name, associated web application, size in GB, and status. A sample output looks like this:
| Database Name | Web Application | Size (GB) | Status |
|---|---|---|---|
| WSS_Content_Intranet | https://intranet.contoso.com | 42.3 | Online |
| WSS_Content_Archive | (none) | 3.1 | Online |
| WSS_Content_Portal | https://portal.contoso.com | 118.7 | Online |
That second row — WSS_Content_Archive with no web application association — is the row that causes problems during cutover. The script catches it. A manual Central Admin review does not.
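The farm-side half of that enumeration can be sketched like this — note this is a simplified illustration, not the shipped 1.DB_List.ps1, and `Get-SPContentDatabase` only sees databases the farm knows about; catching SQL-only orphans additionally requires comparing against SQL Server, which the production script presumably handles:

```powershell
# Enumerate attached content databases with web application and size columns.
Get-SPContentDatabase | Select-Object Name,
    @{ n = 'WebApplication'; e = { if ($_.WebApplication) { $_.WebApplication.Url } else { '(none)' } } },
    @{ n = 'SizeGB';         e = { [math]::Round($_.DiskSizeRequired / 1GB, 1) } },
    Status |
    Format-Table -AutoSize
```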
Lists and Libraries
List and library counts matter for two reasons: migration wave sizing, and the 5,000-item threshold. Any list with more than 5,000 items needs to be noted before migration. These lists may have view thresholds, indexing issues, or query-based web parts that will not behave the same way after a database attach if column types or indexed fields change between versions.
The inventory does not fix these problems — it surfaces them so you can decide how to handle each one before the cutover window.
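Surfacing the large lists can be sketched with the server object model — the URL below is a placeholder; substitute one from your inventory:

```powershell
# Flag lists over the 5,000-item view threshold in one site collection.
$site = Get-SPSite "https://intranet.contoso.com"
foreach ($web in $site.AllWebs) {
    $web.Lists | Where-Object { -not $_.Hidden -and $_.ItemCount -gt 5000 } |
        ForEach-Object { "{0} | {1} | {2} items" -f $web.Url, $_.Title, $_.ItemCount }
    $web.Dispose()   # SPWeb objects must be disposed explicitly
}
$site.Dispose()
```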
Permissions and Groups
Permissions are the hardest thing to recreate from memory, and the most expensive thing to get wrong. The inventory captures: site collection administrators, SharePoint groups and their members, permission levels, and — critically — where inheritance has been broken.
Broken permission inheritance on hundreds or thousands of items is a migration signal, not just a documentation note. When that many unique permissions exist, post-migration validation becomes significantly more complex. Flag these sites in your inventory and plan extra validation time for them.
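A sketch of the inheritance check at the web and list level — the production Get-SPGroupsAndPermissions presumably goes deeper (folder and item level), but the pattern is the same `HasUniqueRoleAssignments` property:

```powershell
# Report webs and lists with broken permission inheritance in one site collection.
$site = Get-SPSite "https://intranet.contoso.com"   # placeholder URL
foreach ($web in $site.AllWebs) {
    if ($web.HasUniqueRoleAssignments) { "Unique permissions (web):  $($web.Url)" }
    $web.Lists | Where-Object { $_.HasUniqueRoleAssignments } |
        ForEach-Object { "Unique permissions (list): $($web.Url)/$($_.Title)" }
    $web.Dispose()
}
$site.Dispose()
```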
Service Applications
Managed Metadata, User Profile, and Search service applications each have migration considerations of their own. This post does not go deep on service applications — Post #6 (SQL Log Shipping Setup) and later posts in the series cover the service application database layer specifically. For the inventory, note which service applications are running, their database names, and their health status. That is enough context for now.
Database Health — The Pre-Migration Gate Nobody Talks About
Most migration guides tell you to assess your environment. None of them tell you what to do when you find a content database in a degraded state. The answer is: you stop, you remediate, and you do not proceed until the database is clean.
Why Database Health Matters Before You Touch Anything
A consistency error on a content database does not fix itself during a database-attach upgrade. It carries forward. If DBCC CHECKDB reports allocation errors on WSS_Content_Portal today, it will report the same errors on the Subscription Edition farm after the attach. The migration succeeds, the database is attached, and then your users start hitting errors that look like SPSE bugs — but they are not. They are pre-existing issues that the migration surfaced.
The same applies to log shipping, which is covered in Post #6. You cannot configure SQL log shipping on a database that has open consistency issues. The secondary replica will reflect the same corruption the moment it finishes seeding.
Orphaned databases are a different problem. They waste cutover time, raise questions in post-migration reviews, and in some cases contain content that someone will eventually ask about. Identify them in the inventory, make a documented decision about each one — migrate, archive, or delete — and do not let them be an unresolved question on cutover day.
What 2.DB_Health.ps1 Checks
2.DB_Health.ps1 is a targeted pre-migration health check. It examines every content database and produces a status output per database:
- PASS — database is Online, no consistency errors detected, size within expected range
- WARN — database is Online but has a condition worth reviewing (zero site collections, large size outlier, recent backup age)
- FAIL — database is in Suspect, Offline, or Recovery Pending state, or has open consistency errors
A clean farm should produce all PASS results. If you see any WARN or FAIL entries, document them and remediate before any migration work begins. A FAIL on a database is a hard stop — do not proceed until it is resolved.
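The classification logic can be sketched as follows — an assumed simplification of what 2.DB_Health.ps1 does; the shipped script also layers in consistency-check and backup-age findings, and a SQL-side Suspect or Recovery Pending state surfaces here as a non-Online status:

```powershell
# Map each content database's status to the PASS/WARN/FAIL model.
foreach ($db in Get-SPContentDatabase) {
    $health = switch ($db.Status) {
        "Online" {
            if ($db.CurrentSiteCount -eq 0) { "WARN" }   # zero site collections: review
            else { "PASS" }
        }
        default { "FAIL" }   # anything not Online is a hard stop
    }
    "{0} | {1} | {2}" -f $db.Name, $db.Status, $health
}
```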

How to Run the SharePoint Farm Inventory Scripts
Prerequisites
Before running any of these scripts, confirm:
- You are running from the SharePoint Management Shell, not Windows PowerShell or PowerShell 7
- The account running the script is a SharePoint Farm Administrator
- The script is executed on a SharePoint server in the farm — not from a workstation or jump box
- Read access to SQL Server is sufficient — the scripts make no schema changes and write nothing to any database
- No third-party PowerShell modules are required
The scripts use the Microsoft.SharePoint.PowerShell snap-in. If it is not already loaded, Get-SPInventoryReport.ps1 loads it automatically via Add-PSSnapin at the start of execution.
Running 1.DB_List.ps1
Run this first. It is the fastest of the three scripts and gives you the database enumeration baseline.
```powershell
.\1.DB_List.ps1
```
Output is written to the console and to a CSV file in the current directory. Execution time on most farms: under two minutes.
Running 2.DB_Health.ps1
Run this second, immediately after you have the database list. Having the list in hand makes the health results easier to correlate.
```powershell
.\2.DB_Health.ps1
```
If you see FAIL status on any database, stop. Do not run the full inventory until those issues are documented and a remediation path is identified. The full inventory will still complete, but the health findings need to be treated as blockers before any migration phase begins.
Running Get-SPInventoryReport.ps1
This is the comprehensive inventory. Plan for it to run between 10 and 50 minutes depending on farm size — a farm with 50 site collections will finish quickly; a farm with 500 site collections and heavy permission structures will take longer. Run it during off-peak hours if you are on a production farm.
```powershell
.\Get-SPInventoryReport.ps1 -OutputPath "C:\MigrationAudit"
```
The script calls these key functions internally:
- `Test-SPEnvironment` — pre-flight checks that validate the snap-in is loaded and the executing account has Farm Administrator rights. If pre-flight fails, the script exits with a clear error before touching anything.
- `Get-TargetWebApplications` — scopes the inventory to the web applications you want to include. By default it collects all web applications; you can pass a `-WebApplication` parameter to scope to a single URL.
- `Get-SPDatabases` — enumerates all content and configuration databases attached to the farm.
- `Get-SPSiteCollections` — iterates every site collection in scope and collects URL, owner, template, storage used, and item counts.
- `Get-SPListsAndLibraries` — for each site collection, collects list and library names, item counts, and whether any list exceeds the 5,000-item view threshold.
- `Get-SPGroupsAndPermissions` — snapshots site collection administrators, SharePoint groups, group members, and permission inheritance status.
- `New-HTMLReport` — takes all collected data and generates the HTML report. This runs last and is proportionally the longest step on large farms.
What You Get as Output
HTML report — a single HTML file named with the farm name and run timestamp (e.g., SPInventory_CONTOSO-FARM_20250601_143022.html). The report opens in any browser, is section-based with a navigation sidebar, and does not require an internet connection. This is what you share with stakeholders and project managers.
CSV exports — one file per major inventory category: web applications, site collections, databases, lists, and permissions. These feed directly into Excel for wave planning, project tools for task tracking, and documentation templates for migration plans.

Reading the Inventory Report — What to Look For
Generating the report is the easy part. Knowing what to act on is where the work begins.
Red Flags in the Web Application Section
- Classic-mode authentication — Subscription Edition does not support Classic auth. Any web application still configured for Classic mode requires a pre-migration conversion to Claims-based authentication. This is not optional and it is not quick. Find these early.
- Non-standard ports — Web applications running on ports other than 80 and 443 need to be explicitly planned for in the new farm. Verify these ports will be available and that DNS and load balancer configurations will carry forward.
- Host header sites — These require DNS changes in the new environment. Map every host header site to its DNS record and confirm the DNS change is in the cutover plan.
Real example: “Classic mode authentication detected on https://intranet.contoso.com — conversion required before SPSE mount.” If this appears in your report and you are two weeks from cutover, your timeline just changed.
Red Flags in the Database Section
- Zero site collections on a database — Almost always an orphaned database. Document it, confirm with the DBA whether it is safe to detach, and make a decision before migration begins.
- Databases not associated with any web application — Same as above. These will appear in `1.DB_List.ps1` output with a blank web application column.
- Large size outliers — A single content database at 400 GB will drive your backup and restore window. Know which databases are outliers before you plan the cutover timeline.
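One way to catch databases that exist in SQL Server but are unknown to the farm is to diff the two inventories. This sketch assumes `Invoke-Sqlcmd` is available (SqlServer module) and uses a placeholder server instance name:

```powershell
# Databases matching the content-DB naming pattern in SQL but not registered in the farm.
$farmDbs = (Get-SPDatabase).Name
$sqlDbs  = Invoke-Sqlcmd -ServerInstance "SQLSP01" `
               -Query "SELECT name FROM sys.databases WHERE name LIKE 'WSS_Content%'" |
           Select-Object -ExpandProperty name
$sqlDbs | Where-Object { $_ -notin $farmDbs }   # candidates for the orphan decision list
```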
Red Flags in the Permissions Section
- Broken inheritance on thousands of items — This is a performance signal as much as a permissions complexity signal. Document the count per site collection. Sites with extremely high unique permission counts will need extra post-migration validation time.
- External users or claims identities — Verify these identities will resolve in the new farm’s claims provider configuration. Claims mappings that work on your 2019 farm may behave differently depending on how the new farm’s authentication is configured.
- Missing site collection administrators — The inventory will list who the current site collection admins are. If any of those accounts are service accounts or disabled accounts, re-establish them before migration.
Using the Report for Wave Planning
The wave planning model is simple: small, low-risk databases first; large, business-critical databases last. Use the database size and site collection count data from the inventory to group databases into waves:
| Wave | Criteria | Example |
|---|---|---|
| Wave 1 | < 10 GB total, low permission complexity, no Classic auth | 3 databases, 8 site collections |
| Wave 2 | 10–100 GB total, standard permissions, Claims auth | 5 databases, 45 site collections |
| Wave 3 | > 100 GB total, high complexity, business-critical | 2 databases, 12 site collections |
Wave 1 is where you validate your migration process end-to-end. By the time you reach Wave 3, your team has run the playbook twice and the cutover execution is known and documented. Post #7 (Parallel Database Backup and Restore) covers how to execute the wave migration efficiently across multiple databases simultaneously.
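A first-cut wave assignment can be generated straight from the farm, using the per-database size thresholds from the table above — note the table's criteria are per-wave totals, so treat this sketch as a starting grouping to refine by hand:

```powershell
# Bucket content databases into candidate waves by size.
Get-SPContentDatabase | ForEach-Object {
    $gb = [math]::Round($_.DiskSizeRequired / 1GB, 1)
    $wave = if ($gb -lt 10) { "Wave 1" } elseif ($gb -le 100) { "Wave 2" } else { "Wave 3" }
    [PSCustomObject]@{ Database = $_.Name; SizeGB = $gb; Wave = $wave }
} | Sort-Object Wave, SizeGB
```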
The PowerShell Pattern Behind the Farm Inventory Script
The snippet below shows the core enumeration pattern that powers the inventory. This is a simplified version — the production script adds error handling, retry logic, logging via Write-Log, output formatting, and the HTML report generation layer. But the enumeration structure is the same.
```powershell
# Load SharePoint snap-in if not already loaded
if ((Get-PSSnapin -Name Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue) -eq $null) {
    Add-PSSnapin Microsoft.SharePoint.PowerShell
}

# Collect web applications
$webApps = Get-SPWebApplication
foreach ($wa in $webApps) {
    Write-Log "Processing Web Application: $($wa.Url)"
    $siteCols = Get-SPSite -WebApplication $wa -Limit All
    foreach ($site in $siteCols) {
        $web = $site.RootWeb
        $lists = $web.Lists | Where-Object { -not $_.Hidden }
        # ... log and export
    }
}

# Database health check
$contentDBs = Get-SPContentDatabase
foreach ($db in $contentDBs) {
    $healthStatus = if ($db.Status -eq "Online") { "PASS" } else { "WARN" }
    Write-Log "DB: $($db.Name) | Status: $($db.Status) | Health: $healthStatus"
}
```
- Scoped iteration — starting from `Get-SPWebApplication` ensures you capture every site collection within each web application's boundary, not just those visible from Central Admin
- `-Limit All` on `Get-SPSite` — without this flag, `Get-SPSite` returns at most 200 site collections by default; this is the most common cause of incomplete inventories
- Hidden list exclusion — `Where-Object { -not $_.Hidden }` filters out system-generated lists that would inflate counts and clutter the report
- Status-based health classification — the `$db.Status` check maps directly to the PASS/WARN/FAIL model `2.DB_Health.ps1` uses in its full output
The production script wraps every foreach block in a try/catch, logs errors per item rather than halting the entire run, and collects results into structured objects that feed the CSV exports and HTML report generator. A single permission-denied error on one site collection does not stop the inventory of the remaining 499.
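That per-item resilience pattern can be sketched like this — an illustration of the approach described above, not the production implementation:

```powershell
# Collect per-site results; one failure is recorded, not fatal to the run.
$results = foreach ($site in Get-SPSite -Limit All) {
    try {
        [PSCustomObject]@{ Url = $site.Url; StorageBytes = $site.Usage.Storage; Error = $null }
    }
    catch {
        # Log the error against this item and keep going
        [PSCustomObject]@{ Url = $site.Url; StorageBytes = $null; Error = $_.Exception.Message }
    }
    finally { $site.Dispose() }
}
$results | Export-Csv -Path ".\SiteCollections.csv" -NoTypeInformation
```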
What to Do With Your Inventory Findings
Build Your Migration Inventory Baseline
Save the HTML report and every CSV file as your SharePoint farm inventory baseline snapshot. Date-stamp the run in your project documentation. Every later decision — which wave a database goes into, whether a cutover risk is new or pre-existing, whether a permission issue existed before migration — gets compared against this baseline.
If your migration spans several months, re-run the inventory 30 days before cutover to capture any changes. Content databases grow. New site collections appear. Running a stale inventory is almost as bad as running no inventory.
Flag Issues Before Migration Design Begins
Do not wait until the migration plan is written to address what the inventory surfaces. The time to deal with Classic-mode auth is before you have scheduled a cutover window, not after. Triage inventory findings into three buckets:
- Must fix before migration — Classic auth conversion, FAIL-status databases, orphaned databases with unknown content
- Must plan around — Large size outliers, high unique permission counts, host header sites requiring DNS changes
- Monitor during migration — WARN-status databases, large lists at the 5,000-item threshold
Size and Duration Estimation
The database sizes from 1.DB_List.ps1 are your primary input for cutover window estimation. Use them to calculate:
- Backup duration — rule of thumb on typical SANs is 1 GB per minute for a full SQL backup, adjusted for your storage performance
- Log shipping sync time — covered in Post #6; the initial seed of a log shipping secondary is driven by database size
- Restore duration — on the new farm, restoring 400 GB from backup takes longer than restoring 40 GB; plan accordingly
Do not commit to a cutover date until you have run this math against actual database sizes from the inventory. Estimating from memory or from Central Admin storage quotas is not accurate enough for a production migration window.
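The math itself is simple enough to sanity-check in a few lines — the throughput figure is the rule-of-thumb assumption stated above; replace it with your measured rate, and the sizes with the actual values from the 1.DB_List.ps1 CSV:

```powershell
# Rough cutover-window estimate from inventory sizes.
$throughputGBperMin = 1.0            # assumed 1 GB/min; adjust to measured SAN throughput
$sizesGB = 42.3, 3.1, 118.7          # placeholder values; use the real CSV data
$backupMin = ($sizesGB | Measure-Object -Sum).Sum / $throughputGBperMin
"Estimated full-backup duration: {0:N0} minutes" -f $backupMin
```

Run the same calculation separately for the restore on the new farm, which is typically slower than the backup.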
Get the SP Farm Inventory Scripts
The snippet above shows the inventory collection pattern. Writing it yourself is possible — but the production version includes error handling per site collection, retry logic for transient farm connectivity issues, a structured logging layer via Write-Log, the full HTML report generator, and parameter handling for scoped runs by web application.
The SP Farm Inventory scripts include:
- `Get-SPInventoryReport.ps1` — full inventory with HTML report and per-category CSV exports
- `1.DB_List.ps1` — database enumeration with web application association and size data
- `2.DB_Health.ps1` — pre-migration health checks with PASS/WARN/FAIL classification
- README with prerequisites, parameter reference, and output descriptions
Interested in the scripts? Contact sudharsan_1985@live.in to get access.
If you are building your own, the pattern above is a solid starting point. If you want to skip the build time and start with scripts that have been run on real farms, reach out.
Up Next
Post #3 in this series covers large file scanning — identifying documents that exceed SharePoint’s file size limits or that will cause issues during migration due to file type restrictions in Subscription Edition. The inventory you ran here gives you the list of document libraries to scan. Post #3 gives you the scanner.
Pre-Migration Inventory Checklist
Use this as a sign-off checklist before moving to the migration design phase.
Database enumeration
- ☐ `1.DB_List.ps1` executed; CSV exported and saved
- ☐ All databases with no web application association identified and documented
- ☐ Decision made on each orphaned database (migrate / archive / delete)
Database health
- ☐ `2.DB_Health.ps1` executed; no FAIL status remains unresolved
- ☐ All WARN entries reviewed, documented, and assigned to an owner
- ☐ PASS status confirmed on all databases planned for migration
Full inventory
- ☐ `Get-SPInventoryReport.ps1` executed with `-OutputPath` set to a documented location
- ☐ HTML report saved and shared with the project team
- ☐ CSV exports saved to the migration project folder
Web application review
- ☐ No Classic-mode authentication web applications remain unconverted
- ☐ All host header sites documented with DNS change requirements
- ☐ Non-standard port configurations noted in the migration design
Database and permissions review
- ☐ Size outliers identified; backup and restore durations estimated
- ☐ High unique permission count sites flagged for extended post-migration validation
- ☐ Site collection administrators verified — no disabled or decommissioned accounts
Wave planning
- ☐ Databases grouped into migration waves by size and complexity
- ☐ Wave 1 (low-risk) identified and confirmed with project stakeholders
- ☐ Cutover window estimate based on actual database sizes from inventory
This is Post #2 in the SharePoint 2019 to Subscription Edition Migration series. Start with Post #1 for the full series overview and migration approach.