Help! My developer won't implement / wants to remove / objects to my redirects.

A client recently asked me to chip in on a debate they were having with an internal tech team about retiring old 301 redirects. Concerns had been raised about ageing and legacy CMS functionality which needed to be replaced, and as part of this process the IT folks pushed to remove (and not replace) the thousands of redirects which the system was powering. The only hint as to why they’d think this would be a good/necessary idea was a quote from the proponents stating that, as Google had seen, visited and clearly indexed the destination URLs for some time, the redirects were no longer necessary and their removal shouldn’t cause any harm. At best, they’d concede that the SEO guys needed to maintain a smaller list of redirects which were specifically set up ‘for SEO purposes’ (such as forcing http/s and lowercase, and a handful of specific page-level redirects), but that otherwise they’d need to ‘trim the fat’.

Of course, this instantly set alarm bells ringing – the business dominates their market, largely because of the performance of the destination URLs for the redirects in question, and so any negative impact whatsoever on the equity flowing to those pages could have huge real-world, commercial implications.

This is a type of scenario which I see frequently. There’s always huge reluctance from the IT/developer community to implement, support or maintain ‘bulky’ redirect lists, either on a day-to-day basis or as part of website launches or significant overhauls, and it’s invariably a painful experience fighting to justify why “let’s just redirect the top 100 most visited URLs on the website” isn’t a viable option. Because it’s hard to quantify the tangible opportunity cost of failing to implement redirects, it’s an incredibly challenging argument to have, and it often ends up as pleading for people to take leaps of faith.

Given how many times I’ve fought this corner in my own experience, I thought I’d go all out, and attempt to construct a comprehensive argument which weighed indisputably in favour of maintaining legacy redirects. Here’s the (slightly edited to preserve anonymity) transcript.


TLDR:
Don’t ever remove redirects. There’s no reason to, and no benefit in doing so. The performance gloom and doom your developers preach is nonsense.

Don’t ever remove redirects. URLs still get hit/crawled/found/requested years after they’re gone. This can impact equity flow, quality signals, and crawl efficiency.

Don’t ever remove redirects. It gives a bad user experience, which impacts the bottom line.

There are a few core considerations from my perspective (drawing on my agency background, technical SEO knowledge, and experience at Linkdex):

  • Tech teams are always highly sceptical of redirects, as I’m sure you’re well aware. They’re often hesitant to implement even foundational numbers of global redirects (such as case-forcing, http/s, etc.) due to either:
    • A lack of understanding/consideration of the user experience and/or the way in which Google discovers, crawls and indexes (and the way in which equity flows in this process)
    • Concern over server performance resulting from high levels of redirect lookups on each request
    • A desire to ‘trim the fat’ as part of general housekeeping, because it’s good practice to do that kind of thing in other scenarios and dev processes
  • In each of these cases, I’ve found that focusing on the user experience component is a good solution, where setting an expectation that any scenario in which a user hits a 404 error page has a negative commercial impact on the business removes a lot of the ‘SEO politics’ from the equation. You can even go so far as to quantify this through surveying (e.g., “would seeing an error page on our site affect how much you trust the brand?”) and ascribe a £ loss value to each resultant 404 hit, based on estimated impact on conversion rates, average order values, etc. (there’s a rough worked example of this kind of sum after this list). This gives you lots of ammunition to justify the continued existence of the redirects in a context which they can understand and buy into.
  • This also goes some way towards resolving the performance concerns: the actual overhead of serving large volumes of redirects is marginal at most, and vastly less than they anticipate, so any small investment made into optimising how they’re handled goes a long way. I should note that I’ve personally handled and argued for implementing/retaining large redirect sets on a number of huge site launches/relaunches, and in each case experienced these kinds of reactions; I’ve never once seen any actual performance impact following the successful deployment of redirects numbering in the low to mid thousands (formed of mixes of static and pattern matching/regex rules). London Stock Exchange were particularly hesitant to implement ~1k redirects as part of a large site relaunch, but were surprised to see no measurable performance impact following their implementation.
  • If performance concerns are still a barrier, I typically resort to identifying equivalent performance opportunities, load speed improvements and technical optimisations which can offset the perceived cost; there’s never any shortage of opportunities for simple things like improved resource caching, client-side rendering efficiencies, etc. If it’s helpful, I can assist/support on reeling off some suggestions in this arena, from hardcore back-end stuff to softer front-end stuff. If they can quantify the ‘damage’, it’s easy enough to offset.
  • It’s also worth considering that almost all of the major CMS platforms operate by looking up the requested URL against a database to find a pattern-matched result; as such, depending on the approach to implementation, having a large list of redirects to check against needn’t necessarily represent an extra lookup, process, or performance hit (the second sketch after this list illustrates the idea). If your team are particularly sophisticated, emulating the load-balancer style approach with layers of high-level caching can further mitigate performance concerns.
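To illustrate the kind of quantification described a couple of bullets up, here’s a minimal, back-of-envelope sketch. Every figure in it is a hypothetical placeholder; you’d substitute numbers from your own analytics, surveying and sales data.

```python
# Back-of-envelope estimate of the commercial cost of 404s.
# Every figure below is a hypothetical placeholder - swap in your own
# analytics, survey and sales data.

def estimated_monthly_404_cost(
    monthly_404_hits: int,        # sessions hitting a 404 (from server logs / analytics)
    abandonment_rate: float,      # share of those sessions assumed lost after the error
    conversion_rate: float,       # conversion rate of a 'healthy' session
    average_order_value: float,   # average order value, in GBP
) -> float:
    """Estimated revenue lost per month to sessions that hit a 404."""
    lost_sessions = monthly_404_hits * abandonment_rate
    return lost_sessions * conversion_rate * average_order_value


# e.g. 5,000 404 hits/month, 60% of those sessions abandon, 2% conversion, £80 AOV
print(estimated_monthly_404_cost(5_000, 0.60, 0.02, 80.0))  # -> 4800.0 (£ per month)
```

Crude, but it turns “redirects matter” into a monthly £ figure that a dev or finance team can argue with on their own terms.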
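And to illustrate the last point above, here’s a rough, CMS-agnostic sketch (not how any particular platform actually implements this) of why a large redirect table needn’t mean extra per-request work: exact matches cost a single dictionary lookup regardless of table size, a handful of pattern rules cover whole legacy sections, and an in-memory cache means repeat hits on the same URL never re-run the rules at all.

```python
# Illustrative, CMS-agnostic sketch: however many rules the redirect table
# holds, resolving a request is one dictionary lookup plus (at worst) a short
# pass over a handful of pattern rules - and a small in-memory cache means
# repeat hits on the same legacy URL never even re-run those.
import re
from functools import lru_cache
from typing import Optional

# Exact-match redirects: O(1) lookup whether there are 10 entries or 10,000.
EXACT_REDIRECTS = {
    "/old-page": "/new-page",
    "/2009/widgets.html": "/products/widgets/",
}

# A few pattern rules covering whole legacy sections of the site.
PATTERN_REDIRECTS = [
    (re.compile(r"^/archive/(\d{4})/(.+)$"), r"/blog/\1/\2"),
]


@lru_cache(maxsize=10_000)
def resolve_redirect(path: str) -> Optional[str]:
    """Return the redirect target for a requested path, or None if there isn't one."""
    if path in EXACT_REDIRECTS:
        return EXACT_REDIRECTS[path]
    for pattern, replacement in PATTERN_REDIRECTS:
        if pattern.match(path):
            return pattern.sub(replacement, path)
    return None


print(resolve_redirect("/archive/2012/some-post"))  # -> /blog/2012/some-post
```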
[Image: example redirect management]

In an ideal scenario, you’d have a solution in place which monitors how many times a redirect has been hit, and the last time/date at which this occurred; I use a system like this on a number of my own sites, and periodically review for and retire redirects which have been stale for over 6 months and have a low overall hit count [I use the Redirection plugin for WordPress; it’s one of my favourite tools].
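As a rough sketch of what that review looks like, assuming you can export each redirect’s hit count and last-hit date (the field names and thresholds below are illustrative, not the Redirection plugin’s actual schema):

```python
# A rough sketch of the periodic review described above, assuming you can
# export each redirect's hit count and last-hit date (field names and
# thresholds are illustrative, not the Redirection plugin's actual schema).
from datetime import datetime, timedelta


def stale_redirects(redirects, max_age_days=180, max_hits=5):
    """Flag redirects that haven't been hit for `max_age_days` days AND have a
    low lifetime hit count - candidates for manual review, not auto-deletion."""
    cutoff = datetime.now() - timedelta(days=max_age_days)
    return [
        r for r in redirects
        if r["last_hit"] < cutoff and r["hit_count"] <= max_hits
    ]


redirects = [
    {"source": "/old-page", "hit_count": 412, "last_hit": datetime(2016, 5, 1)},
    {"source": "/2009/press.html", "hit_count": 2, "last_hit": datetime(2014, 1, 3)},
]
for r in stale_redirects(redirects):
    print(r["source"])  # only the genuinely dormant rule gets flagged
```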

What I’ve found interesting from running this system for a number of years over multiple sites is that:

  • Any URLs which used to have any real levels of equity, links and/or performance keep getting hit for years.
  • Redirects which have been removed and now return a 404 keep getting hit by search engines, indefinitely.
  • I run ~4k redirects on one of my own sites at present, with no measurable performance difference between turning them all on or off.
  • Removing redirects in these scenarios has a definite impact on SEO, even if only an indirect one, because a bot entering the site at a dead URL reduces discovery, crawlability, and crawl equity distribution.
  • People still link to dead content; much of the web is stale. We also know that URLs which used to be linked to still maintain some degree of equity, so removing any of these is liable to negatively impact the destination page’s equity.
  • Any marginal performance overhead for running this kind of system (the performance fears resurface here in the context of maintaining these kinds of logs) is vastly offset by the value retention and improved user experience.

I think that crawl quota is a particularly relevant point for you guys; with a site of your size and scale, any activity which is likely to result in Google allocating lower crawl resource is liable to have enormous consequences for indexation levels and speed, which is obviously to be avoided at all costs.

I’d expect a site of your scale to want to maintain as many of these legacy redirects as possible, for as long as possible, until the impact of them is measurably and demonstrably detrimental and this cannot be offset elsewhere. Upwards of 6,000 sounds like a lot, but I think that it’s a reasonable and realistic volume given your site, legacy, etc. Having said that, I’d definitely want to be putting processes in place in the future to minimise the likelihood of URL structures expiring, and planning for more future-proofed patterning (obviously easier said than done!). Good practice suggests that no URL should ever change or die, which is a card you might be able to play, but I suspect that may lead to blame games around who chose/designed which legacy URL structure, which may cause more harm than good at this stage!

If there’s continued pressure to ‘trim the fat’, I’d personally want to investigate, query and quantify every single rule which is proposed for deletion, to understand whether there’s still anything linking to it, whether the URL has/had any social shares, and whether server logs indicate that it’s been hit recently. This is something Linkdex could definitely help with; however, it’s likely to be a resource-intensive process and may not provide any value – even if the vast majority of URLs turn out to be ‘duds’, all of the above rationale around user experience and equity management still applies.
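The ‘has anything hit this recently?’ part of that audit is easy enough to sketch. The below assumes a fairly standard access log format; the file name, paths and parsing regex are illustrative, so treat it as a starting point rather than a finished tool.

```python
# Sketch of the 'has anything hit this recently?' check, assuming a fairly
# standard combined/common access log format; file names, paths and the
# parsing regex are illustrative and would need adjusting to your own setup.
import re
from collections import Counter

REQUEST_LINE = re.compile(r'"(?:GET|HEAD) (?P<path>\S+) HTTP/[\d.]+"')


def hits_per_candidate(log_file, candidate_paths):
    """Count requests in a log file for each redirect source proposed for deletion."""
    candidates = set(candidate_paths)
    hits = Counter()
    with open(log_file, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            match = REQUEST_LINE.search(line)
            if match and match.group("path") in candidates:
                hits[match.group("path")] += 1
    return hits


# e.g. hits_per_candidate("access.log", ["/old-page", "/2009/press.html"])
```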

I wonder – as part of the migration process, is there any opportunity to combine any of the redirects into pattern-matching rules? E.g., if there are 100 URLs with a shared folder root, it may be prudent to craft a single rule based on matching that pattern. This should reduce quantities significantly.
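As a rough illustration of how you might spot those consolidation candidates programmatically (the grouping heuristic and threshold here are assumptions, not a prescription), something like this would do as a first pass:

```python
# Rough illustration of spotting consolidation candidates: group the exact
# redirect sources by their top-level folder, and flag folders where one
# pattern rule might replace many individual entries. The threshold is an
# arbitrary example, and you'd still need to check that the targets follow a
# consistent pattern before collapsing anything.
from collections import defaultdict


def consolidation_candidates(sources, min_group_size=20):
    """Group redirect source URLs by first path segment; large groups are
    candidates for a single pattern-matching rule."""
    groups = defaultdict(list)
    for source in sources:
        root = "/" + source.strip("/").split("/")[0]
        groups[root].append(source)
    return {root: urls for root, urls in groups.items() if len(urls) >= min_group_size}


# e.g. 100 exact rules under /2012/... might collapse into a single rule along
# the lines of ^/2012/(.+)$ -> /archive/2012/$1, if the destinations line up.
```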

From a completely alternate perspective, if deletion is to proceed, I’d potentially consider returning a 410 rather than a 404 status on the URLs in question. This may help send a stronger signal that a deliberate decision has been made to remove those pages, rather than a large section of your website having broken or vanished (with the negative connotations associated with that). I’m not convinced that this would make any realistic difference, but it feels like a low-risk, low-effort change which may help out.
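In application terms the distinction is tiny. A minimal sketch (the retired paths listed are purely illustrative, and how you’d wire this into your own stack will vary):

```python
# Minimal sketch of the distinction: URLs that have been deliberately retired
# return 410 Gone, anything else that's missing falls through to a plain 404.
# The retired paths listed here are purely illustrative.
RETIRED_PATHS = {
    "/discontinued-range/",
    "/old-campaign-2012/",
}


def status_for_missing_url(path: str) -> int:
    """410 says 'we removed this on purpose'; 404 just says 'not found'."""
    return 410 if path in RETIRED_PATHS else 404


assert status_for_missing_url("/discontinued-range/") == 410
assert status_for_missing_url("/never-existed/") == 404
```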


In summary…

Hopefully some of this can help bolster your arguments.

As a predominantly technical person, I’ve often argued in favour of redirects from an equity-preservation perspective; it’s only in writing the email that I realised that the two biggest points here are the tangible impact on user experience, and the expected (and somewhat measurable) impact on crawl quotas. I think that next time I run into this scenario, I might not talk about ‘link juice’ or the difference between different HTTP status codes at all, and just focus on these elements. Who’d have thought?

Greg

You’re missing the easy sell, which is case studies with lines going down. Sure, there are a ton of factors that can cause a drop in visibility following a site migration/launch, but an incomplete/compromised redirect solution will be a LARGE part of it. You’ll also be able to keep the discussion in topline/sitewide terms, which will further mitigate having to do individual URL-level value risk assessments. Historically though, even if the Deployment Day screaming matches have been had with devs, they’ve all been caused by failures in project management (agency and/or client side) in the months leading up to it, and/or…