“You no longer need to do anything special for hash-bang URLs,” Google’s John Mueller said on the September 17 edition of #AskGoogleWebmasters, “we’ll just try to render them directly.”

The question. “What is the current status of #! AJAX crawling? How do I set up redirects?” asked user @olegko via Twitter.

The answer. As stated above, webmasters do not need to take any special action for Google to crawl their AJAX applications.

“The AJAX crawling scheme was something we proposed in the early days of JavaScript sites, way back in 2009,” Mueller explained. “This worked great for a number of years, but over time, it became kind of redundant. Search engines — or at least, Google — had learned how to render most pages like a browser would. And, in the meantime, we’re even using a special version of Chrome for crawling and rendering.”

“In order to move to a different URL structure, you need to use JavaScript on these pages to create the redirects,” he added. “It’s not possible to use server-side redirects since everything after a hash — so, the number symbol — is not sent to the server, but rather processed in the browser. Once you’ve set up those redirects, as Googlebot reprocesses the hash-bang URLs on the site, it’ll spot the redirect and follow it appropriately.”
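As Mueller notes, everything after the “#” lives only in the browser, so the redirect has to run in JavaScript on the page itself. A minimal sketch of that idea, assuming (hypothetically) that your new clean URLs mirror the old hash-bang fragment paths — e.g. `https://example.com/#!/products/42` becoming `https://example.com/products/42` — might look like this:

```javascript
// Hypothetical sketch of a client-side hash-bang redirect.
// The server never sees the fragment, so this must run in the browser.

// Given a location hash, return the clean path for a legacy "#!" URL,
// or null if the hash is not a hash-bang fragment.
function hashBangTarget(hash) {
  return hash.startsWith('#!') ? hash.slice(2) : null;
}

// On page load, perform the redirect. location.replace() avoids adding
// the old hash-bang URL to the browser history, and Googlebot will
// follow this JavaScript redirect when it reprocesses the page.
// const target = hashBangTarget(window.location.hash);
// if (target) window.location.replace(target);
```

The mapping function is kept separate from the `window.location` call purely for illustration; in practice the whole snippet would run inline in a script tag on the old pages.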

Why we should care. If you inherited a codebase that uses AJAX, or are using AJAX for routing URLs, you’ll be glad to know that Google should have no problem indexing your pages without any special setup.

Learn more about JavaScript and the evergreen Googlebot. Here are some additional resources to give you a better idea of how Googlebot handles your interactive pages and applications.

About The Author

George Nguyen is an Associate Editor at Third Door Media. His background is in content marketing, journalism, and storytelling.