Archive | Marketing

Google introduced Instant Previews

10 Nov

Today Google introduced Instant Previews, a new search feature that helps people find information faster by showing a visual preview of each result. Traditionally, elements of the search results like the title, URL, and snippet (the text description in each result) help people determine which results are best for them. Instant Previews achieves the same goal with a visual representation of each page that shows where the relevant content is, instead of a text description. For our webmaster community, this presents an opportunity to show off the design of your site and make clear why your page is relevant for a particular query. We’d like to offer some thoughts on how to take advantage of the feature.

First of all, it’s important to understand what the new feature does. When someone clicks the magnifying glass on any result, a zoomed-out snapshot of the underlying page appears to the right of the results. Orange highlights indicate where the highly relevant content is on the page, and text callouts show search terms in context.

Many of you have put a lot of thought and effort into the structure of your sites, the layout of your pages, and the information you provide to visitors. Instant Previews gives people a glimpse into that design and indicates why your pages are relevant to their query. Here are some details about how to make good use of the feature.

  • Keep your pages clearly laid out and structured, with a minimum of distractions or extraneous content. This is always good advice, since it improves the experience for visitors, and the simplicity and clarity of your site will be apparent via Instant Previews.
  • Try to avoid interstitial pages, ad pop-ups, or other elements that interfere with your content. In some cases, these distracting elements may be picked up in the preview of your page, making the screenshots less attractive.
  • Many pages have their previews generated as part of our regular crawl process. Occasionally, we will generate a screenshot on the fly when a user needs one, and in these situations we will retrieve information from web pages using a new “Google Web Preview” user-agent.
  • Instant Previews does not change our search algorithm or ranking in any way. It’s the same results, in the same order. There is also no change to how clicks are tracked. If a user clicks on the title of a result and visits your site, it will count as a normal click, regardless of whether the result was previewed. Previewing a result, however, doesn’t count as a click by itself.
  • Currently, adding the nosnippet meta tag to a page prevents a text snippet from showing for it in our results. Since Instant Previews serves a similar purpose to snippets, pages with the nosnippet tag will also not show previews (see the example markup after this list). However, we encourage you to think carefully before opting out of Instant Previews. Just like regular snippets, previews tend to be helpful to users: in our studies, results that were previewed were more than four times as likely to be clicked on. URLs that have been disallowed in the robots.txt file will also not show Instant Previews.
  • Currently, some videos or Flash content in previews appear as a “puzzle piece” icon or a black square. We’re working on rendering these rich content types accurately.
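
For reference, this is roughly what the two opt-out mechanisms mentioned above look like. The meta tag is the standard robots nosnippet directive; the robots.txt path is only a generic example, not taken from the post:

    <!-- In the page's <head>: suppresses the text snippet and, per the post,
         the Instant Preview as well -->
    <meta name="robots" content="nosnippet">

    # In robots.txt: URLs disallowed here will not show Instant Previews either
    User-agent: *
    Disallow: /example-private-section/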

Source: http://googlewebmastercentral.blogspot.com/2010/11/instant-previews.html

How to Allow Google to Crawl Your AJAX Content

28 Oct

Today we’re excited to propose a new standard for making AJAX-based websites crawlable. This will benefit webmasters and users by making content from rich and interactive AJAX-based websites universally accessible through search results on any search engine that chooses to take part. We believe that making this content available for crawling and indexing could significantly improve the web.

While AJAX-based websites are popular with users, search engines traditionally are not able to access any of the content on them. The last time we checked, almost 70% of the websites we know about use JavaScript in some form or another. Of course, most of that JavaScript is not AJAX, but the better that search engines could crawl and index AJAX, the more that developers could add richer features to their websites and still show up in search engines.

Some of the goals that we wanted to achieve with this proposal were:

  • Minimal changes are required as the website grows
  • Users and search engines see the same content (no cloaking)
  • Search engines can send users directly to the AJAX URL (not to a static copy)
  • Site owners have a way of verifying that their AJAX website is rendered correctly and thus that the crawler has access to all the content

Here’s how search engines would crawl and index AJAX in our initial proposal:

  • Slightly modify the URL fragments for stateful AJAX pages
    Stateful AJAX pages display the same content whenever they are accessed directly. These are pages that could be referred to in search results. Instead of a URL like http://example.com/page?query#state, we would like to propose adding a token that makes it possible to recognize these URLs: http://example.com/page?query#[FRAGMENTTOKEN]state. Based on a review of current URLs on the web, we propose using “!” (an exclamation point) as this token. The URL that could be shown in search results would then be http://example.com/page?query#!state (see the first sketch after this list).
  • Use a headless browser that outputs an HTML snapshot on your web server
    The headless browser is used to access the AJAX page and generates HTML code based on the final state in the browser. Only specially tagged URLs are passed to the headless browser for processing. By doing this on the server side, the website owner is in control of the HTML code that is generated and can easily verify that all JavaScript is executed correctly. An example of such a browser is HtmlUnit, an open-source “GUI-less browser for Java programs.” (A server-side sketch of this step appears second after this list.)
  • Allow search engine crawlers to access these URLs by escaping the state
    As URL fragments are never sent with requests to servers, it’s necessary to slightly modify the URL used to access the page. At the same time, this tells the server to use the headless browser to generate HTML code instead of returning a page with JavaScript. Other, existing URLs – such as those used by the user – would be processed normally, bypassing the headless browser. We propose escaping the state information and adding it to the query parameters with a token. Using the previous example, one such URL would be http://example.com/page?query&[QUERYTOKEN]=state. Based on our analysis of current URLs on the web, we propose using “_escaped_fragment_” as the token. The proposed URL would then become http://example.com/page?query&_escaped_fragment_=state.
  • Show the original URL to users in the search results
    To improve the user experience, it makes sense to refer users directly to the AJAX-based pages. This can be achieved by showing the original URL (such as http://example.com/page?query#!state from our example above) in the search results. Search engines can check that the indexable text returned to Googlebot is the same or a subset of the text that is returned to users.
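
To make the mapping concrete, here is a minimal sketch (in Python, which the post does not use; the function names are ours, not part of the proposal) of how a URL with the “#!” token corresponds to its “_escaped_fragment_” form and back, using the example URLs from above:

    # A minimal sketch of the proposed URL scheme: "#!" marks a stateful
    # AJAX URL, and "_escaped_fragment_" carries the escaped state as a
    # query parameter so crawlers can request it. Function names are
    # illustrative only.
    from urllib.parse import quote, unquote, urlsplit

    FRAGMENT_TOKEN = "#!"
    QUERY_TOKEN = "_escaped_fragment_"

    def to_crawler_url(pretty_url):
        """Map http://example.com/page?query#!state to
        http://example.com/page?query&_escaped_fragment_=state."""
        if FRAGMENT_TOKEN not in pretty_url:
            return pretty_url  # not a stateful AJAX URL under this scheme
        base, state = pretty_url.split(FRAGMENT_TOKEN, 1)
        separator = "&" if "?" in base else "?"
        return base + separator + QUERY_TOKEN + "=" + quote(state, safe="")

    def to_pretty_url(crawler_url):
        """Reverse mapping: rebuild the original "#!" URL that search
        results would show to users."""
        parts = urlsplit(crawler_url)
        state, kept = None, []
        for param in filter(None, parts.query.split("&")):
            if param.startswith(QUERY_TOKEN + "="):
                state = unquote(param[len(QUERY_TOKEN) + 1:])
            else:
                kept.append(param)
        if state is None:
            return crawler_url  # nothing to unescape
        url = parts.scheme + "://" + parts.netloc + parts.path
        if kept:
            url += "?" + "&".join(kept)
        return url + FRAGMENT_TOKEN + state

    print(to_crawler_url("http://example.com/page?query#!state"))
    # -> http://example.com/page?query&_escaped_fragment_=state
    print(to_pretty_url("http://example.com/page?query&_escaped_fragment_=state"))
    # -> http://example.com/page?query#!state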
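
And here is an equally minimal server-side sketch of the headless-browser step: requests carrying _escaped_fragment_ receive a pre-rendered HTML snapshot, while normal requests get the regular JavaScript-driven page. This is our own illustration, not code from the proposal; render_snapshot() is a hypothetical stand-in for driving a headless browser such as HtmlUnit and capturing the final HTML:

    # A minimal WSGI sketch (our own, not from the proposal): requests that
    # carry _escaped_fragment_ are answered with an HTML snapshot, while
    # everything else gets the normal AJAX page.
    from urllib.parse import parse_qs
    from wsgiref.simple_server import make_server

    def render_snapshot(path, state):
        # Hypothetical placeholder: a real deployment would execute the
        # page's JavaScript in a headless browser (the post mentions
        # HtmlUnit) and return the resulting markup.
        return "<html><body>Snapshot of %s in state %r</body></html>" % (path, state)

    def application(environ, start_response):
        params = parse_qs(environ.get("QUERY_STRING", ""), keep_blank_values=True)
        if "_escaped_fragment_" in params:
            # Crawler request: return static HTML for the escaped state.
            body = render_snapshot(environ.get("PATH_INFO", "/"),
                                   params["_escaped_fragment_"][0])
        else:
            # Normal user request: serve the regular AJAX application.
            body = "<html><body><script>/* AJAX application */</script></body></html>"
        start_response("200 OK", [("Content-Type", "text/html; charset=utf-8")])
        return [body.encode("utf-8")]

    if __name__ == "__main__":
        make_server("", 8000, application).serve_forever()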

Source: http://googlewebmastercentral.blogspot.com/2009/10/proposal-for-making-ajax-crawlable.html