Start with a completed URL map and a simple redirect plan before switching assets. This concrete step reduces risk, protects traffic, and preserves search presence across pages and file types. Create a visual diagram that highlights top-performing landing pages and critical conversion paths to keep your studio team aligned and maintain stakeholder trust.
Before launching, assess all duplicate content and canonical signals. Run a pre-launch audit to detect duplicate titles, duplicate meta descriptions, and inconsistent header structure. Clean up duplicates, ensure canonical tags are correct, and keep a simple index of updated assets to avoid confusion during the switch.
Review the code layer, including any significant JavaScript changes, to prevent broken features. Confirm that crucial files, sitemaps, and robots.txt are updated, and that the new file structure maps cleanly onto old URLs so search engines can crawl the site and growth in traffic and leads is preserved.
Maintain a checklist of 88 points, ordered by criticality: site structure, 301 redirects, crawl budget, and internal linking. Track completion status as you verify that traffic signals and user journeys stay intact, and monitor JavaScript behavior on key pages to catch regressions.
Post-launch, monitor traffic, conversions, and presence in search results. The plan should help you compare pre- and post-move performance, recapture lost leads, and guide quick optimizations. Keep dashboards up to date, track meaningful metrics, and maintain a duplicate-free record of changes to support ongoing improvements.
Website Migration SEO: 88-Step Free Template

Recommendation: Validate all current URLs and implement an agreed-upon 301-redirect map across the entire site within a defined time window; verify data consistency in analytics before going live.
This 88-step blueprint has worked across many projects. Apply a combination of audits, testing, and real-user insights to minimize risk and downtime across your properties and launches.
- Phase 1 – Discovery and inventory
- Identify high-value pages and prioritize them by traffic, conversions, and branding impact.
- Catalog existing URLs, redirects, and canonical signals across the entire domain family.
- Assess affected assets when merging domains or subfolders; align with agreed-upon timelines.
- Verify current data accuracy in analytics and search-console; note discrepancies for remediation.
- Document challenges and risks to inform the rest of the blueprint.
- Phase 2 – Redirect architecture and linking
- Develop a correct redirect map that preserves link equity; prefer 1:1 mappings where possible.
- Plan how merged pages fold into a clean hierarchy; decide between 2-tier and 3-tier structures.
- Establish internal linking strategy to surface the most important pages post-move.
- Define error handling for missing pages; build a helpful fallback path such as a custom 404 with navigation.
- Ensure all new URLs are canonicalized where appropriate.
- Phase 3 – Data, analytics, and measurement
- Enable event-tracking, conversions, and micro-metrics; store data consistently across tools.
- Link data sources to branding and proposition improvements you expect after the move.
- Set up a cross-domain tracking plan if you merge multiple domains; test for accuracy.
- Keep a record of data discrepancies and resolve them before launch; validate with a third party if needed.
- Prepare dashboards to monitor traffic, visibility signals, and user behavior after go-live; use best-practice hygiene.
- Phase 4 – Content, branding, and optimization
- Refresh page titles, meta descriptions, and structured data to reflect the new proposition.
- Align branding across pages; ensure consistent imagery, tone, and messaging.
- Update content to reflect the new site structure while preserving the best-performing assets.
- Mark up important product or service schemas and ensure existing meta signals remain accurate.
- Tag and categorize content consistently to support discovery in the new structure; this helps preserve ranking signals.
- Phase 5 – Technical readiness and security
- Check robots.txt, sitemap.xml, and hreflang; fix crawl errors before launch (a validation sketch follows this checklist).
- Validate server performance, caching rules, and compression for the entire switch.
- Conduct a threat assessment and implement security safeguards; monitor for suspicious activity and mitigate threats promptly.
- Document backout options and ensure you can revert quickly if needed.
- Verify stored credentials and access controls; limit exposure during the migration window.
- Phase 6 – Staging, testing, and validation
- Crawl the staging site to confirm correct linking and no orphaned assets.
- Simulate traffic patterns; verify time-to-interaction signals and user flows with Hotjar.
- Test form submissions, checkout paths, and conversion funnels; fix blockers.
- Validate that 301 mappings remain intact after the merge; audit for redirect loops.
- Coordinate a final cross-team review; ensure all agreed-upon changes are implemented.
- Phase 7 – Launch, monitoring, and optimization
- Launch during a low-traffic window if possible; monitor network latency and error rates.
- Observe user behavior with Hotjar; capture friction points and adjust quickly.
- Track organic visibility and crawl rate; watch for indexing delays and spikes in 404s.
- Validate branding and propositions appear consistently across pages and channels.
- Document learnings; refine the blueprint for future migration projects across the portfolio.
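To make the Phase 5 robots.txt and sitemap checks actionable, here is a minimal Python sketch. It assumes the third-party requests library, a hypothetical example.com hostname, and a flat sitemap.xml rather than a sitemap index; treat it as a starting point, not a substitute for a full technical audit.

```python
import xml.etree.ElementTree as ET

import requests  # third-party; pip install requests

BASE = "https://example.com"  # hypothetical hostname; replace with your own

def check_robots(base: str) -> None:
    # robots.txt should be reachable before launch
    resp = requests.get(f"{base}/robots.txt", timeout=10)
    print(f"robots.txt -> {resp.status_code}")

def check_sitemap(base: str) -> None:
    # Fetch sitemap.xml and verify every <loc> URL returns a clean 200
    resp = requests.get(f"{base}/sitemap.xml", timeout=10)
    resp.raise_for_status()
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    root = ET.fromstring(resp.content)
    for loc in root.findall(".//sm:loc", ns):
        url = loc.text.strip()
        # HEAD keeps the check lightweight; some servers may require GET instead
        status = requests.head(url, allow_redirects=False, timeout=10).status_code
        if status != 200:
            print(f"WARN {status} {url}")

if __name__ == "__main__":
    check_robots(BASE)
    check_sitemap(BASE)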
Tips: follow internal-linking best practices, keep the entire team aligned, and maintain a running log of agreed-upon decisions and changes. Expect challenges and plan contingencies; regularly review risks and adjust timing to minimize downtime and data loss. After completion, run a comprehensive validation pass to confirm data integrity, links, and branding align with the original proposition.
Perform a Site Crawl
Run a site-wide crawl with a trusted tool to map all URLs, assets, and redirects. Export the dataset as CSV/JSON and store it in the resources folder. The crawl reveals broken assets, 404s, and redirect chains, plus how many pages search engines have indexed, giving you a baseline for review and for preserving link equity. Inspect bounce indicators and time-to-first-byte on critical paths, and prepare a concise evidence set for teammates. Use a distinct user-agent string for the crawler and log it clearly in server access logs. Track the crawl depth of each URL to identify clusters and blind spots and widen coverage.
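If you want to script the baseline rather than rely solely on a commercial crawler, the sketch below records URL, status, redirect target, and crawl depth to CSV with a distinct user-agent, as recommended above. It assumes the third-party requests library and a hypothetical start URL; the regex-based link extraction is deliberately naive and illustrative only.

```python
import csv
import re
from collections import deque
from urllib.parse import urljoin, urlparse

import requests  # third-party; pip install requests

START = "https://example.com/"  # hypothetical start URL
UA = "migration-audit-bot/1.0"  # distinct UA, easy to filter in access logs
MAX_PAGES = 500

host = urlparse(START).netloc
seen, rows = {START}, []
queue = deque([(START, 0)])  # (url, crawl depth)

while queue and len(rows) < MAX_PAGES:
    url, depth = queue.popleft()
    resp = requests.get(url, headers={"User-Agent": UA},
                        allow_redirects=False, timeout=10)
    location = urljoin(url, resp.headers["Location"]) if "Location" in resp.headers else ""
    rows.append({"url": url, "status": resp.status_code,
                 "redirect": location, "depth": depth})
    # Regex link extraction is naive (no JS rendering) but fine for a baseline sketch
    links = (re.findall(r'href="([^"#]+)"', resp.text)
             if "text/html" in resp.headers.get("Content-Type", "") else [])
    if location:
        links.append(location)  # keep following redirect targets on the same host
    for href in links:
        link = urljoin(url, href)
        if urlparse(link).netloc == host and link not in seen:
            seen.add(link)
            queue.append((link, depth + 1))

with open("crawl_baseline.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=["url", "status", "redirect", "depth"])
    writer.writeheader()
    writer.writerows(rows)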
Here is a structured approach to practical preparation and validation. For each phase, capture the values and indicators that feed decision-making throughout the change cycle.
Discovery & Mapping focuses on completeness: map every URL, asset, and redirect; verify canonical tags; identify malicious patterns; confirm TLS certificates are valid for current hostnames. Collect the wider set of resources such as images, scripts, and fonts; record processing times and depth levels to understand crawl efficiency. The result is a breakdown of coverage, with preserving internal connections and avoiding orphaned assets as central goals. This phase establishes the foundation for everything that follows.
Validation & Remediation targets issues surfaced by the crawl: fix broken links, eliminate duplicate signals, normalize redirects, and align metadata. Ensure that the Apache server configuration and .htaccess rules do not create loops. Update robots.txt as needed and preserve critical pages while changes are processed. Validate that indexable content remains accessible and certificates remain valid. This phase yields a concrete change plan and a status of what was addressed.
Monitoring & Reassessment establishes ongoing checks: schedule automated re-crawls, set alerts for new 404s or 5xx errors (a minimal alert sketch follows the table below), and track bounce improvements over time. Useful indicators include a higher indexed ratio, stable or improved values for page load and time to first byte, and a lower conversion drop across key paths. Report results through a centralized dashboard and review trends with the wider team to keep objectives aligned.
| Phase | Focus | Key Indicators | Artifacts & Actions |
|---|---|---|---|
| Discovery & Mapping | URL inventory, asset mapping, redirects, canonical tags, TLS certificates | indexed count, 200/301/404/5xx, bounce, TTFB, redirect chains, crawl depth | CSV/JSON export, crawl log, coverage breakdown, flagged malicious patterns, user-agent logs, updated Apache access rules |
| Validation & Remediation | Fix redirects, repair broken links, preserve internal linking, adjust metadata | reduced errors, improved crawl coverage, increased indexability, valid certificates | Updated links and metadata, revised .htaccess where needed, revised robots.txt, evidence of fixes |
| Monitoring & Reassessment | Ongoing checks, alerts, performance stability | stable 200s, few new 404s/5xx, bounce improvement, consistent indexed growth | Monitoring dashboard, scheduled re-crawls, alert thresholds, reporting for wider teams |
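For the Monitoring & Reassessment row, a scheduled script can diff the latest crawl export against the baseline and flag new 404/5xx responses. Below is a minimal sketch, assuming both files follow the url/status CSV layout produced by the crawl sketch above:

```python
import csv

def load_statuses(path: str) -> dict[str, int]:
    # Map each crawled URL to its last observed status code
    with open(path, newline="") as fh:
        return {row["url"]: int(row["status"]) for row in csv.DictReader(fh)}

baseline = load_statuses("crawl_baseline.csv")
latest = load_statuses("crawl_latest.csv")

# Alert on URLs that were healthy at baseline but now error out
for url, status in latest.items():
    if status >= 400 and baseline.get(url, 0) < 400:
        print(f"ALERT new {status}: {url}")
```

Wire the output into whatever alerting channel the team already uses, such as email or a chat webhook, so deviations surface without anyone re-reading raw exports.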
Define crawl scope and critical URLs
Start with a planned crawl scope focused on the critical URLs: the live homepage, top category pages, flagship product pages, and high-traffic landing posts. Use include rules to cover public paths and exclude paths that require authentication, such as admin, account, checkout, and staging areas. Define where crawlers should stop and what they should ignore, based on business impact and the user behavior observed in historical logs. Keep firewalls and access controls in place to protect sensitive sections while still allowing verification of key assets during live crawl checks.
Divide the site into three segments: critical, high-traffic, and supporting content. For each, assign a clear priority and a practical crawl depth to minimize scanning churn. Where possible, use a lightweight crawler profile to stay within crawl budget and avoid slowing the live environment. Advanced tools help verify status codes, redirects, canonical signals, and backlink health. Ensure the crawl path keeps public pages reachable and that internal linking remains consistent across a live pass. Previously dynamic pages may need adjusted rules to avoid noise while preserving intent; the sketch below illustrates the segment-and-rule setup.
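Here is a minimal sketch of that segment-and-rule setup; the path patterns and crawl depths are hypothetical placeholders to replace with your own business rules.

```python
import re

# Hypothetical rules: tune patterns and depths to your own site
EXCLUDE = [r"^/admin", r"^/account", r"^/checkout", r"^/staging"]
SEGMENTS = [
    ("critical",     [r"^/$", r"^/products/"],      5),  # (name, patterns, max depth)
    ("high-traffic", [r"^/category/", r"^/blog/"],  3),
    ("supporting",   [r".*"],                       2),  # catch-all
]

def classify(path: str):
    # Returns (segment, max_depth), or None when the path is out of scope
    if any(re.match(p, path) for p in EXCLUDE):
        return None
    for name, patterns, depth in SEGMENTS:
        if any(re.match(p, path) for p in patterns):
            return name, depth
    return None

print(classify("/products/widget"))  # ('critical', 5)
print(classify("/checkout/step-1"))  # None: excluded, requires authentication
```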
Output from this phase includes a prioritized URL list with statuses, an exclude list for private areas, and a plan to preserve linking and URL structures to maintain credibility. Provide metrics on coverage, found versus missing critical URLs, and a risk map for potential 404s. After the initial pass, monitor live performance and adjust the scope if any issues arise, then deliver a refreshed crawl map for subsequent scans and stakeholder review.
Load current crawl data from your tools
Recommendation: Export crawl data from each tool for a single date window (today's) and store it in central storage as CSV or JSON. Use a consistent naming scheme like crawldata_YYYYMMDD_TOOL.csv so every record aligns and review is streamlined across roles in your agency.
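A short consolidation sketch, assuming the crawldata_YYYYMMDD_TOOL.csv naming scheme above and only standard-library modules:

```python
import csv
import glob
import re

rows = []
for path in glob.glob("crawldata_*.csv"):
    # Pull date and tool name out of crawldata_YYYYMMDD_TOOL.csv
    match = re.match(r"crawldata_(\d{8})_(\w+)\.csv", path)
    if not match:
        continue
    date, tool = match.groups()
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            row.update({"export_date": date, "tool": tool})
            rows.append(row)

print(f"{len(rows)} records consolidated from "
      f"{len({r['tool'] for r in rows})} tools")
```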
This consolidated, in-depth view yields insights that help you verify how pages map to the new structure. If a URL is missing or misrouted, note it for early action; rely on this record to prevent surprises during launch and to stay on track for conversion targets.
Identify threats: 404s, 5xx errors, blocked URLs, noindex signals, and duplicated or case-sensitive URLs. Keep a file of resolved redirects to ensure every landing path remains valid. This lets you reorganize the crawl map and avoid broken paths in every scenario.
Engage stakeholders: assign roles (content owner, dev lead, analytics), confirm who records notes, and double-check access permissions for the storage. Early alignment reduces back-and-forth and speeds the launch. This data-driven approach helps the agency hit conversion targets and reduce risk.
Operational tips: keep separate storage for export history; schedule a daily refresh during the rollout; set alerts for significant changes; use this file as the basis for the technical audit. A compact glossary of field definitions helps teammates interpret each column, while leaving room to reorganize the data flow for subsequent iterations.
Identify redirects, canonical issues, and response codes
Begin by compiling a precise redirect map for legacy URLs, linking each old path to its intended new destination and tagging the move as permanent (301) where appropriate. This single source of truth minimizes risk and preserves credibility across channels.
Run a crawl of the source and target structures to reveal broken or misdirected routes; ensure each legacy URL resolves accurately to the expected page and that non-essential pages do not trap crawlers.
Prevent chain redirects: cap hops at two, remove loops, and prune non-essential redirects that slow crawling and dilute indexing signals.
Prefer server-side redirects (.htaccess rules on Apache, rewrite rules on nginx) over client-side methods or meta refresh; confirm the destination serves a clean 200 status and that redirects apply host-wide where required.
Canonical discipline: audit core pages to guarantee a single canonical URL; if you use canonical tags, ensure consistency across www versus non-www and http versus https; remove conflicting markup that erodes accuracy. On WordPress, align the theme, plugins, and server rules to keep the scope tight.
Response-code mapping: designate 301 for permanent moves; 302 or 307 for temporary adjustments; use 404 for not-found pages and 410 Gone for content intentionally removed; avoid 500 errors caused by misconfigurations.
Detection and verification: run a post-cutover detector, compare status codes against the plan, and review server logs to catch anomalies early; address issues in rapid iterations to minimize risk.
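One way to implement that post-cutover detector is to replay the redirect map and compare each hop against the plan. The sketch below assumes the third-party requests library and a hypothetical redirect_map.csv with old_url and new_url columns:

```python
import csv
from urllib.parse import urljoin

import requests  # third-party; pip install requests

MAX_HOPS = 2  # per the plan above: cap redirect chains at two hops

with open("redirect_map.csv", newline="") as fh:
    for row in csv.DictReader(fh):  # columns: old_url, new_url
        url, hops = row["old_url"], 0
        while hops < MAX_HOPS:
            resp = requests.get(url, allow_redirects=False, timeout=10)
            if resp.status_code not in (301, 302, 307, 308):
                break
            if resp.status_code != 301:
                print(f"WARN non-permanent {resp.status_code}: {url}")
            url = urljoin(url, resp.headers["Location"])  # follow one hop
            hops += 1
        if url != row["new_url"]:
            print(f"MISMATCH {row['old_url']} -> {url} (expected {row['new_url']})")
        elif requests.get(url, timeout=10).status_code != 200:
            print(f"DESTINATION NOT 200: {url}")
```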
Testing discipline: appoint an expert tester to verify navigation, internal links, and sitemap entries drive users to the intended destinations; preserve session integrity where needed and ensure URL parameters do not create duplicate content.
Non-essential legacy pages: decide whether to prune them by returning 410 Gone or removing them from the sitemap; keep robots.txt aligned and remove non-indexable paths to optimize crawl budget.
WordPress-specific guidance: implement a robust redirection framework; review for conflicts between the plugin layer, theme files, and server rules; maintain a resource where operators can access the current mapping and decisions. This drives operational efficiency.
Monitoring and refinement: keep a living log of redirects, canonical decisions, and response codes; use detection metrics to improve handling in future relocations and to sustain authority over time.
Spot orphan pages and non-indexable URLs
Run a targeted crawl to pinpoint orphan pages and URLs blocked from indexing. Export the results and validate them against server logs for accuracy. This data meets the need for precise indexing across websites and supports a plan grounded in factual signals. The goal is a clear view for merging and adjusting structures with controlled changes; the sketch after this list shows the core comparison.
- Identify candidates: perform an in-depth crawl; evaluate inbound links, sitemap coverage, and 404s. Flag pages with zero internal links or signals indicating non-indexability, and confirm the crawl itself is running reliably.
- Validate non-indexability: inspect robots.txt rules, meta noindex tags, x-robots-tag headers, and canonical tags. Note pages with 404/410 statuses that should be removed or redirected.
- Compare against planned structure: this analysis, based on crawl results, yields actionable targets. Build a map showing which pages belong to which sections; ensure brand relevance is preserved. This comparison helps you see gaps and set a baseline for improvements.
- Assess impact and plan: categorize pages by potential gain, prioritizing those with high user value and brand alignment. Create a backlog with action items, owners, and timelines; involve consulting and partnering teams as needed.
- Fix strategy: for valuable orphaned pages, re-connect through internal links and navigation. For duplicates, merge content into a single authoritative page and implement a 301 redirect to the chosen URL. Update the sitemap and store of records accordingly.
- Redirects and canonical hygiene: apply redirects to a controlled set of targets; set canonical to the preferred page; adjust settings for indexable URL patterns. Ensure access to assets and storefront pages remains stable.
- Verification: run a second crawl after changes; compare results with the initial list; confirm previously non-indexable URLs now index or redirect correctly. Review server logs to verify indexing signals.
- Documentation and governance: capture decisions, owners, and timelines; store artifacts in the consulting partner’s repository; maintain accurate records for future audits, including subpoena-ready evidence if required.
- Continuous monitoring: schedule checks throughout the migration; as planned changes roll out, watch for re-emerging orphan pages and adjust internal linking accordingly.
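As referenced above, the core orphan check is a set comparison between URLs reachable by following internal links and URLs declared in the sitemap. A minimal sketch, assuming hypothetical one-URL-per-line text exports:

```python
def load_urls(path: str) -> set[str]:
    # One normalized URL per line; strip trailing slashes for comparison
    with open(path) as fh:
        return {line.strip().rstrip("/") for line in fh if line.strip()}

crawled = load_urls("crawl_reachable_urls.txt")  # found by following internal links
sitemap = load_urls("sitemap_urls.txt")          # declared in sitemap.xml

# In the sitemap but unreachable by internal links: orphan candidates
for url in sorted(sitemap - crawled):
    print(f"ORPHAN CANDIDATE: {url}")

# Crawlable but absent from the sitemap: coverage gaps worth reviewing
for url in sorted(crawled - sitemap):
    print(f"NOT IN SITEMAP: {url}")
```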
Plan crawl exports for migration cutover and QA
Export a full crawl snapshot every 2 hours during the cutover and again 24 hours after launch; compare each against the pre-move baseline to spot errors before visitors encounter them. Tune the cadence to the highest-risk window and focus on moving pages and their redirects. Track pages moving between sections, canonical changes, and redirect chains; verify that the major search engines can index the updated paths quickly. Where gaps appear, adjust crawl settings and re-run the snapshot until alignment is achieved.
Plan across phases: discovery, cutover, stabilization. In each phase, activate exports around the key window and run QA checks alongside staging. Build a catalog of affected URLs and a map of engines handling them. The requirements define criteria for success, such as 95% of critical pages returning 200, and no gaps in sitemaps.
Assign owners, such as djokic and mihajlo, to guide the moving assets; use side-by-side diffs to validate that changes land as intended and that exported data remains consistent. If a process goes off track, execute a rollback and re-run the export.
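Those side-by-side diffs can reuse the snapshot exports: compare the pre-cutover CSV with the latest one and surface every URL whose status or redirect target changed. A minimal sketch, assuming the url, status, and redirect columns from the crawl sketch earlier:

```python
import csv

def snapshot(path: str) -> dict[str, tuple[str, str]]:
    # Map each URL to its (status, redirect target) pair for comparison
    with open(path, newline="") as fh:
        return {r["url"]: (r["status"], r["redirect"]) for r in csv.DictReader(fh)}

before = snapshot("crawl_precut.csv")
after = snapshot("crawl_postcut.csv")

for url in sorted(before.keys() | after.keys()):
    old, new = before.get(url), after.get(url)
    if old != new:
        print(f"{url}: {old} -> {new}")  # None on either side means added/removed
```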
Keep a subpoena-compliant log of changes; store artifacts in a catalog with timestamps to satisfy governance. Maintain access controls and document retention alongside legal hold considerations, ensuring that the data lineage is traceable.
KPIs to watch include crawl success rate, detected errors, time to reindex, and the proportion of pages successfully crawled post-cutover; monitor visitor flow and engagement on rebranded pages to confirm launch momentum. Track metrics per phase and alert on any deviation from expected trajectories within 24 hours of changes.
Supporting details: run exports alongside the live environment around the rebrand window; ensure catalog entries mirror the actual footprint across engines and pages, and that indexing-visibility requirements align with go-forward plans. Use snapshot-based checks to confirm the system returns to baseline after the initial surge, and document the phases followed so the process remains reproducible.