OOP and Web – Interview Notes


<!DOCTYPE html>
<title>JSON CRUD Operation</title>
<h2>Create Object from JSON String</h2>
<p id="demo"></p>
<script type="text/javascript" src="jrac/jquery.js"></script>
<script type="text/javascript">
var empStr = '{"employees":[' +
'{"firstName":"suresh","lastName":"kb","age":"25" },' +
'{"firstName":"sachin","lastName":"tendulkar","age":"26" },' +
'{"firstName":"virender","lastName":"sehwag","age":"27" }]}';

var empObj = JSON.parse(empStr);
var emps = empObj.employees;
for (var i = 0, empLen = emps.length; i < empLen; i++) {
  document.write("First Name :: " + emps[i].firstName + " Last Name :: " + emps[i].lastName + " age :: " + emps[i].age + "<br/>");
}
</script>
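Going the other way (a sketch; the extra employee record below is made up for illustration): JSON.stringify serializes the object back into a string, so the parse/stringify pair round-trips the employee data for create and update operations.

```javascript
// Parse the JSON string into an object, mutate it, then serialize it back.
var empStr = '{"employees":[{"firstName":"suresh","lastName":"kb","age":"25"}]}';
var empObj = JSON.parse(empStr);

// "Create": add a record to the parsed object.
empObj.employees.push({ firstName: 'rahul', lastName: 'dravid', age: '28' });

// "Update" back to a string that could be stored or sent to a server.
var updatedStr = JSON.stringify(empObj);
console.log(JSON.parse(updatedStr).employees.length); // 2
```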


Best Practices – Google

1.Compressing your JavaScript with Closure Compiler


The best way to make your web page more responsive is to minimize the number of files and size of files that must be downloaded when the page is loaded. Reducing the number of files loaded avoids hitting maximum simultaneous download limits in browsers. Reducing file size cuts the time needed to send all the bytes over the Internet. Some tools already exist to minimize files and file sizes. Spriting tools and image compression tools let you minimize the number and size of images; gzip and HTML compression tools let you reduce the size of your HTML files.

JavaScript minimization tools give you another way to reduce your overall download size. These tools work on JavaScript source code for your webpage. The tools remove unneeded spaces and comments, and sometimes even change the names of variables in your program to shrink the file size even more. However, the commonly used minimization tools miss additional ways to compress the code even further.
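As a deliberately naive sketch of what the simplest minimizers do (the function name here is made up), stripping comments and collapsing whitespace can be approximated with regular expressions. Real tools parse the code instead, which is why they can also rename variables safely and why the regex approach below would mangle string literals containing `//` or `/*`.

```javascript
// Naive minification sketch: remove comments, collapse runs of whitespace.
// NOT safe for production: it does not understand string literals or regexes.
function naiveMinify(src) {
  return src
    .replace(/\/\*[\s\S]*?\*\//g, '')  // strip /* block */ comments
    .replace(/\/\/[^\n]*/g, '')        // strip // line comments
    .replace(/\s+/g, ' ')              // collapse whitespace runs
    .trim();
}

var source = 'var x = 1;  // counter\n/* temp */ var y = 2;';
console.log(naiveMinify(source)); // "var x = 1; var y = 2;"
```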

The Closure Compiler, a new minimization tool, finds ways to compress your JavaScript code even further than existing minimization tools. It achieves additional compression by using compiler-like technology to rewrite your JavaScript into a much smaller form, while ensuring the code still runs correctly. Closure Compiler can condense several files into one single file, and can easily reduce the size of your JavaScript in half. The Closure Compiler also does syntactic checks and static analysis for your program, so it flags potential syntax and type errors and highlights code patterns that may not work well on all browsers.

Example: What Can Closure Compiler Do For You?

As a starting example, imagine you’re a web developer for a large newspaper, the Esmeralda Times, and you want to try optimizing the code for your home page. Go to the Closure Compiler web service UI and notice that the web application starts off with a simple Hello World example; if you click “Compile”, Closure Compiler minimizes the code and shows it on the right-hand side.

But you don’t care about “Hello World”; you care about speeding up your web page.

Our fictional example here (borrowed from an actual media website) loads six separate JavaScript files when loading the home page, for a total of about 227,000 bytes of code. Two of the files represent the Esmeralda Times’s own code, and the rest comes from third-party JavaScript libraries, including Prototype.

Click the “Reset” link to delete the “Hello World” portion of the code in the Closure Compiler web service UI. Add the URLs for the JavaScript files in your home page, either by typing a URL into the text field and pressing “Add”, or by typing these explicit requests to load specific pages into the input section of the Closure Compiler web service:

// ==ClosureCompiler==
// @output_file_name default.js
// @compilation_level ADVANCED_OPTIMIZATIONS
// @code_url
// @code_url
// @code_url
// @code_url,dragdrop
// @code_url
// @code_url
// ==/ClosureCompiler==

Now, choose a compile mode. You have these choices:

  • Whitespace Only mode simply removes unnecessary whitespace and comments. Selecting “Whitespace Only” mode and pressing compile presents you with a single file of JavaScript with 164K of source code, 28% smaller than the original 227K of source code.
  • Simple mode is a bit more sophisticated. It optimizes JavaScript function bodies in several ways, including renaming local variables, removing unneeded variables and code, and replacing constant expressions with their final value (such as converting “1+3” to “4”). It won’t, however, remove any functions or variables that might be referenced outside your JavaScript. It shrinks the code by 42%, from 227K to 132K.
  • Advanced mode makes even more sophisticated changes to your code. Try selecting “Advanced” optimizations, compile the code, and look at the results. This code looks much less like your original code; it renames all functions to short names, deletes functions it does not believe are used, replaces some function calls with the function body, and performs several other optimizations that shrink the code even further. Typically, you can’t use Advanced mode on existing JavaScript code without providing some additional information about functions in the code that need to be visible elsewhere and code elsewhere that might be called from within your JavaScript. However, it’s worth noting that Advanced mode cut the code size from 227K to 86K – 62% smaller than the original code. If you’d like this file to load in 1/3 the time of the original, you might find it worthwhile to give Advanced mode all the information it needs to make this change correctly.

Here is an example of what a simple optimization of the JavaScript files used by the Esmeralda Times might look like:

Screenshot of Closure Compiler web service

Note that you can download the resulting compressed source from a link in the UI, or copy and paste the optimized JavaScript directly from the Closure Compiler web service UI into your source file.

Best of all, you can still debug this compressed code by generating a Closure Inspector source map. The Firebug debugger uses this source map to point you to the lines of original source code that correspond to the optimized code that you’re debugging. Generating the Closure Inspector source maps requires running the Closure Compiler on your local machine using the source or pre-built jar files.

Now you’ve seen what the Closure Compiler can do for the Esmeralda Times; go look at the JavaScript brought in by your own web application, and see what it can do for your actual code!

What Else Can Closure Compiler Do For You?

The Closure Compiler can also warn you about incorrect code. Because it parses your JavaScript code, it can warn you about compile errors without requiring you to load the code into your browser. You can be sure you have no syntax errors that will mysteriously appear when you click on an infrequently-used button. The Closure Compiler can also identify common code patterns that are not consistently handled in all browsers, such as trailing commas in arrays (a = [1, 2, 3, ];). Some browsers create three elements in the array, others four; such inconsistencies can create hard-to-track-down bugs. Closure Compiler can also identify invalid math operations, code that can never be executed, or function calls where you’re not passing enough arguments.
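The trailing-comma case above can be demonstrated directly. Modern engines agree the array has three elements, but old versions of Internet Explorer created a fourth (undefined) entry, which is why the compiler flags the pattern:

```javascript
// Trailing comma in an array literal: harmless in modern engines,
// historically inconsistent across browsers.
var a = [1, 2, 3, ];
console.log(a.length); // 3 in modern engines; legacy IE reported 4
```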

In Advanced Mode, Closure Compiler can also use type annotations in your code to find logic errors in your program – cases where you intend to pass a Money object, but accidentally pass an AccountNumber object, for example. This gives you the bug-finding advantages of typed languages like Java, but still keeps the fast, fun coding experience of JavaScript.

How Does Closure Compiler Work?

Closure Compiler is actually a JavaScript compiler, but rather than generating machine code like most compilers, it produces valid JavaScript code. It can rewrite JavaScript code in many interesting ways. It can identify constant expressions and replace them with constant values, replacing (15 * 280) + 16 with 4216. By doing this, it cuts 15 characters to 5. More importantly, it gives you the freedom to write your code in a clear and understandable way, and frees you from worrying about the final size of the code. Functions called in only one or two places can be inlined, replacing the function call with the contents of the function body, saving the space needed for the function declaration. Closure Compiler can even tell when two different variables are never used at the same time, letting both share the same name and ensuring that as many variables as possible use very short names for better gzip compression.
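Two of the rewrites described above, applied by hand for illustration (the compiler performs them automatically on the parse tree, and the function name below is made up):

```javascript
// 1. Constant folding: (15 * 280) + 16 becomes a single literal.
var folded = 4216;
console.log(folded === (15 * 280) + 16); // true

// 2. Inlining: a function called in only one place...
function borderWidth() { return (15 * 280) + 16; }
var before = borderWidth();
// ...can be replaced by its body, and the declaration deleted entirely:
var after = (15 * 280) + 16;
console.log(before === after); // true
```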

Closure Compiler is open-source; examine or download the source to build it yourself and write your own JavaScript code optimizations!


Closure Compiler gives JavaScript developers a new and better way to compress JavaScript code, and helps your web pages using JavaScript load faster than ever. Try it out on your JavaScript sources, and see what a difference it can make!

2.Speeding up JavaScript: Working with the DOM

When working with Rich Internet Applications, we write JavaScript that updates the page by changing elements or adding new ones. This is done by working with the DOM, or Document Object Model, and how we do this can affect the speed of our applications.

Working with the DOM can cause browser reflow, which is the browser’s process of determining how things should be displayed. Directly manipulating the DOM, changing CSS styles of elements, and resizing the browser window can all trigger a reflow. Accessing an element’s layout properties such as offsetHeight and offsetWidth can also trigger a reflow. Because each reflow takes time, the more we can minimise browser reflow, the faster our applications will be.

When working with the DOM we either manipulate existing elements on the page or generate new ones. The four patterns below cover both DOM manipulation and DOM generation and help reduce the amount of reflows triggered in the browser.

CSS class switching DOM manipulation

This pattern lets us change multiple style properties of an element and its descendants, triggering a single reflow.

The problem

Let’s make a function that changes the style of an element, making its text bold and black with no underline. We could do this by simply updating each style property individually. The problem is, this can cause a reflow with each style change.

function selectAnchor(element) { = 'bold'; = 'none'; = '#000';
}

The solution

To solve this problem we can create a class that sets all these style properties. Now we just trigger a single browser reflow by adding this class to our element. We also separate presentation from behaviour with this pattern.

.selectedAnchor {
  font-weight: bold;
  text-decoration: none;
  color: #000;
}

function selectAnchor(element) {
  element.className = 'selectedAnchor';
}

Out-of-the-flow DOM Manipulation

This pattern lets us make many changes to an element and its descendants while triggering at most two reflows. We take the element out of the flow by removing it from the DOM, make all of our changes, and then insert it back where it was, so that only the removal and the reinsertion trigger reflows.

The problem

Let’s make a function that changes the className attribute for all anchors within an element. We could do this by simply iterating through each anchor and updating their className attributes. The problem is, this can cause a reflow for each anchor.

function updateAllAnchors(element, anchorClass) {
  var anchors = element.getElementsByTagName('a');
  for (var i = 0, length = anchors.length; i < length; i++) {
    anchors[i].className = anchorClass;
  }
}

The solution

To solve this problem, we can remove the element from the DOM, update all anchors, and then insert the element back where it was. To help achieve this, we can write a reusable function that not only removes an element from the DOM, but also returns a function that will insert the element back into its original position.

/**
 * Remove an element and provide a function that inserts it into its original position
 * @param element {Element} The element to be temporarily removed
 * @return {Function} A function that inserts the element into its original position
 */
function removeToInsertLater(element) {
  var parentNode = element.parentNode;
  var nextSibling = element.nextSibling;
  parentNode.removeChild(element);
  return function() {
    if (nextSibling) {
      parentNode.insertBefore(element, nextSibling);
    } else {
      parentNode.appendChild(element);
    }
  };
}
Now we can use this function to update the anchors within an element that is out-of-the-flow, and only trigger a reflow when we remove the element and when we insert the element.

function updateAllAnchors(element, anchorClass) {
  var insertFunction = removeToInsertLater(element);
  var anchors = element.getElementsByTagName('a');
  for (var i = 0, length = anchors.length; i < length; i++) {
    anchors[i].className = anchorClass;
  }
  insertFunction();
}

Single Element DOM Generation

This pattern lets us create and add a single element to the DOM triggering a single reflow. After creating the element, make all changes to the new element before adding it to the DOM.

The problem

Let’s make a function that adds a new anchor element to a parent element. The function lets you provide the class and text for the anchor. We could do this by creating the element, adding it to the DOM, and then setting these properties. This can trigger 3 reflows.

function addAnchor(parentElement, anchorText, anchorClass) {
  var element = document.createElement('a');
  parentElement.appendChild(element);
  element.innerHTML = anchorText;
  element.className = anchorClass;
}

The solution

To solve this, we insert the child into the DOM last. This triggers one reflow.

function addAnchor(parentElement, anchorText, anchorClass) {
  var element = document.createElement('a');
  element.innerHTML = anchorText;
  element.className = anchorClass;
  parentElement.appendChild(element);
}

However, we have a problem if we decide we want to add a large number of anchors to an element. With this approach, each time we add an anchor, it could trigger a reflow. The next pattern resolves this problem.

DocumentFragment DOM Generation

This pattern lets us create multiple elements and insert them into the DOM while triggering a single reflow. It uses something called a DocumentFragment. We create a DocumentFragment outside of the DOM (so it is out-of-the-flow). We then create and add multiple elements to this. Finally, we move all elements in the DocumentFragment to the DOM, triggering a single reflow.

The problem

Let’s make a function that adds 10 anchors to an element. If we simply appended each new anchor directly to the element, we could trigger 10 reflows.

function addAnchors(element) {
  var anchor;
  for (var i = 0; i < 10; i++) {
    anchor = document.createElement('a');
    anchor.innerHTML = 'test';
    element.appendChild(anchor);
  }
}

The solution

To solve this problem, we create a DocumentFragment and append each new anchor to this. When we append the DocumentFragment to the element using appendChild, all the children of the DocumentFragment are actually appended to the element. This triggers a single reflow.

function addAnchors(element) {
  var anchor, fragment = document.createDocumentFragment();
  for (var i = 0; i < 10; i++) {
    anchor = document.createElement('a');
    anchor.innerHTML = 'test';
    fragment.appendChild(anchor);
  }
  element.appendChild(fragment);
}

3.CSS: Using every declaration just once

A logical way to make your website faster is to make the client code you send to the browser smaller. When looking to optimize your CSS files, one of the most powerful measures you can employ is to use every declaration just once. Using every declaration just once means making strict use of selector grouping.

For example, you can combine these rules:

h1 { color: black; }
p { color: black; }
into a single rule:

h1, p { color: black; }

While this simple example appears obvious, things get more interesting and harder to quantify when talking about complex style sheets. In our experience, using every declaration just once can reduce the CSS file size by 20-40% on average.

Let’s have a look at another example:

h1, h2, h3 { font-weight: normal; }
a strong { font-weight: normal !important; }
strong { font-style: italic; font-weight: normal; }
#nav { font-style: italic; }
.note { font-style: italic; }
Applying the “every declaration just once” rule here results in:

h1, h2, h3, strong { font-weight: normal; }
a strong { font-weight: normal !important; }
strong, #nav, .note { font-style: italic; }
Note that the !important declaration makes a difference.

There are some things to keep in mind when applying this method:

  • First, overly long selectors can render this method useless. Repeating selectors like html body table tbody tr td p span.example in order to have unique declarations doesn’t save much file size. In fact, since “using every declaration just once” might mean a higher number of selectors, this could even result in a bigger style sheet. Using more compact selectors would help, and would enhance the readability of your stylesheet.
  • Second, be aware of CSS regulations. When a user agent can’t parse the selector, it must ignore the declaration block as well. If you run into trouble with this, just bend the “declaration just once” rule – and use it more than once.
  • Third, and most importantly, keep the cascade in mind. No matter if you’re sorting your style sheets in a certain way or are very relaxed about the order in which rules appear in your style sheets, using every declaration once will make you change the order of the rules in one way or another. This order, however, can be decisive for a browser to decide which rule to apply. The easiest solution if you’re running into any issues with this is to make an exception as well and use the declaration in question more than once.

Alas, this is not always trivial to implement – this may change the cascading order and require a different workflow.


“Using every declaration just once” requires more attention when maintaining stylesheets. You will benefit from finding a way to track changed and added declarations to get them in line again. This is not hard when using a more or less reasonable editor (showing line changes, for example), but needs to be incorporated into the workflow.

One way, for instance, is to mark rules you edited or added by indenting them. Once you’re done updating your stylesheet, you can check for the indented rules to see if there are any new duplicate declarations, which you could then move to make sure each one of them is only used once.

4.Optimizing JavaScript code

Client-side scripting can make your application dynamic and active, but the browser’s interpretation of this code can itself introduce inefficiencies, and the performance of different constructs varies from client to client. Here we discuss a few tips and best practices to optimize your JavaScript code.

Working with strings

String concatenation causes major problems with Internet Explorer 6 and 7 garbage collection performance. Although these issues have been addressed in Internet Explorer 8 — concatenating is actually slightly more efficient on IE8 and other non-IE browsers such as Chrome — if a significant portion of your user population uses Internet Explorer 6 or 7, you should pay serious attention to the way you build your strings.

Consider this example:

var veryLongMessage =
'This is a long string that due to our strict line length limit of' +
maxCharsPerLine +
' characters per line must be wrapped. ' +
percentWhoDislike +
'% of engineers dislike this rule. The line length limit is for ' +
' style purposes, but we don\'t want it to have a performance impact.' +
' So the question is how should we do the wrapping?';
Instead of concatenation, try using a join:

var veryLongMessage =
['This is a long string that due to our strict line length limit of',
maxCharsPerLine,
' characters per line must be wrapped. ',
percentWhoDislike,
'% of engineers dislike this rule. The line length limit is for ',
' style purposes, but we don\'t want it to have a performance impact.',
' So the question is how should we do the wrapping?'].join('');
Similarly, building up a string across conditional statements and/or loops by using concatenation can be very inefficient. The wrong way:

var fibonacciStr = 'First 20 Fibonacci Numbers: ';
for (var i = 0; i < 20; i++) {
  fibonacciStr += i + ' = ' + fibonacci(i) + ' ';
}
The right way:

var strBuilder = ['First 20 fibonacci numbers: '];
for (var i = 0; i < 20; i++) {
  strBuilder.push(i, ' = ', fibonacci(i));
}
var fibonacciStr = strBuilder.join('');

Building strings with portions coming from helper functions

Build up long strings by passing string builders (either an array or a helper class) into functions, to avoid temporary result strings.

For example, assuming buildMenuItemHtml_ needs to build up a string from literals and variables and would use a string builder internally, instead of using:

var strBuilder = [];
for (var i = 0, length = menuItems.length; i < length; i++) {
  strBuilder.push(this.buildMenuItemHtml_(menuItems[i]));
}
var menuHtml = strBuilder.join('');

use:

var strBuilder = [];
for (var i = 0, length = menuItems.length; i < length; i++) {
  this.buildMenuItem_(menuItems[i], strBuilder);
}
var menuHtml = strBuilder.join('');

Defining class methods

The following is inefficient, as each time an instance of baz.Bar is constructed, a new function and closure is created for foo:

baz.Bar = function() {
  // constructor body = function() {
    // method body
  };
};
The preferred approach is:

baz.Bar = function() {
  // constructor body
}; = function() {
  // method body
};
With this approach, no matter how many instances of baz.Bar are constructed, only a single function is ever created for foo, and no closures are created.

Initializing instance variables

Place instance variable declaration/initialization on the prototype for instance variables with value type (rather than reference type) initialization values (i.e. values of type number, Boolean, null, undefined, or string). This avoids unnecessarily running the initialization code each time the constructor is called. (This can’t be done for instance variables whose initial value is dependent on arguments to the constructor, or some other state at time of construction.)

For example, instead of:

foo.Bar = function() {
  this.prop1_ = 4;
  this.prop2_ = true;
  this.prop3_ = [];
  this.prop4_ = 'blah';
};

use:

foo.Bar = function() {
  this.prop3_ = [];
};

foo.Bar.prototype.prop1_ = 4;
foo.Bar.prototype.prop2_ = true;
foo.Bar.prototype.prop4_ = 'blah';
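A quick demonstration of why the array prop3_ must stay in the constructor while the primitives can move to the prototype: a prototype value is shared by all instances until an instance assigns its own, and a shared mutable array would be corrupted by every instance that pushes to it.

```javascript
var foo = {};
foo.Bar = function() {
  this.prop3_ = [];  // reference type: each instance needs its own array
};
foo.Bar.prototype.prop1_ = 4;  // value type: safe to share as a default

var a = new foo.Bar();
var b = new foo.Bar();
a.prop1_ = 5;        // shadows the prototype value on "a" only
a.prop3_.push('x');  // safe: mutates a's own array, not a shared one

console.log(b.prop1_, b.prop3_.length); // 4 0
```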

Avoiding pitfalls with closures

Closures are a powerful and useful feature of JavaScript; however, they have several drawbacks, including:

  • They are the most common source of memory leaks.
  • Creating a closure is significantly slower than creating an inner function without a closure, and much slower than reusing a static function. For example:

    function setupAlertTimeout() {
      var msg = 'Message to alert';
      window.setTimeout(function() { alert(msg); }, 100);
    }
    is slower than:

    function setupAlertTimeout() {
      window.setTimeout(function() {
        var msg = 'Message to alert';
        alert(msg);
      }, 100);
    }
    which is slower than:

    function alertMsg() {
      var msg = 'Message to alert';
      alert(msg);
    }

    function setupAlertTimeout() {
      window.setTimeout(alertMsg, 100);
    }

  • They add a level to the scope chain. When the browser resolves properties, each level of the scope chain must be checked. In the following example:

    var a = 'a';

    function createFunctionWithClosure() {
      var b = 'b';
      return function () {
        var c = 'c';
        a;
        b;
        c;
      };
    }

    var f = createFunctionWithClosure();
    f();
    when f is invoked, referencing a is slower than referencing b, which is slower than referencing c.

See IE+JScript Performance Recommendations Part 3: JavaScript Code inefficiencies for information on when to use closures with IE.

Avoiding with

Avoid using with in your code. It has a negative impact on performance, as it modifies the scope chain, making it more expensive to look up variables in other scopes.

Avoiding browser memory leaks

Memory leaks are an all too common problem with web applications, and can result in huge performance hits. As the memory usage of the browser grows, your web application, along with the rest of the user’s system, slows down. The most common memory leaks for web applications involve circular references between the JavaScript engine and the browser’s C++ objects implementing the DOM (e.g. between the JavaScript engine and Internet Explorer’s COM infrastructure, or between the JavaScript engine and Firefox’s XPCOM infrastructure).

Here are some rules of thumb for avoiding memory leaks:

Use an event system for attaching event handlers

The most common circular reference pattern [ DOM element –> event handler –> closure scope –> DOM element ] is discussed in this MSDN blog post. To avoid this problem, use one of the well-tested event systems for attaching event handlers, such as those in Google doctype, Dojo, or jQuery.

In addition, using inline event handlers can lead to another kind of leak in IE. This is not the common circular reference type leak, but rather a leak of an internal temporary anonymous script object. For details, see the section on “DOM Insertion Order Leak Model” in Understanding and Solving Internet Explorer Leak Patterns, and an example in this JavaScript Kit tutorial.

Avoid expando properties

Expando properties are arbitrary JavaScript properties on DOM elements and are a common source of circular references. You can use expando properties without introducing memory leaks, but it is pretty easy to introduce one by accident. The leak pattern here is [ DOM element –> expando property –> intermediary object –> DOM element ]. The best thing to do is to just avoid using them. If you do use them, only use values with primitive types. If you do use non-primitive values, nullify the expando property when it is no longer needed. See the section on “Circular References” in Understanding and Solving Internet Explorer Leak Patterns.

5.Prefetching resources

Web pages that require large files can often benefit from changing the order in which those files are requested. In some cases, it makes sense to download files before they are needed, so that they are instantly available once requested. When the resources required for a page can be loaded in advance, the user-perceived network latency for that page can be significantly reduced or even eliminated. For interactive websites, optimizing speed requires more than just minimizing the initial download size. On any site where user interactions can download additional resources, the speed of those actions depends on how long those resources take to download. The site can be made faster by making the downloads smaller, but additional speedups may be possible by starting the downloads sooner.

“Prefetching” is simply loading a file before it’s needed. It’s common on interactive sites for a user action to trigger the download of additional data, such as feeds or images. If it’s possible to predict the next user action, then it may be possible to start the downloads before the user input is made. For example, when you look at a photo on Picasa Web Albums, we make a guess that you’ll look at the next photo as well, and start downloading it as soon as possible. Sometimes we download a photo that the user won’t actually look at, but that’s worth it when we can make the rest of the photos show up faster.

This technique has the potential to speed up many interactive sites, but won’t work everywhere. For some sites, it’s just too difficult to guess what the user might do next. For others, the data might get stale if it’s fetched too soon. It’s also important to be careful not to prefetch files too soon, or you can slow down the page the user is already looking at. Prefetching too much will also leave a bad impression, as the user won’t appreciate having their network clogged up.

Here are a few things to consider when designing prefetching for your own site:

Study user actions

If your users spend most of their time on one particular page or action, then that’s the one to optimize. You can figure this out by looking through server logs, or just by watching a few volunteers use your site. For Picasa Web Albums, we know that the most common action is navigating from one photo to the next, and that was an excellent candidate for prefetching.

Measure when the page is ready

You shouldn’t start fetching data for a new page before the current page is done. For very dynamic pages, you may have to add JavaScript onload handlers for each external file on the page. Once those resources are safely downloaded, it’s reasonable to start prefetching files for the next user action.
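The "wait for the page, then prefetch" idea can be sketched as follows. All names here are illustrative, not a real API, and the loader function is injected so the logic can be shown outside a browser; in a real page it might be `function(url) { new Image().src = url; }` wired to a window onload handler.

```javascript
// Queue prefetch requests and release them only once the page has loaded.
function createPrefetcher(loadResource) {
  var queue = [];
  var pageReady = false;
  return {
    // Call from the page's onload handler once current resources are done.
    pageLoaded: function() {
      pageReady = true;
      queue.forEach(loadResource);
      queue = [];
    },
    // Queue a URL; fetch immediately only if the page already finished.
    prefetch: function(url) {
      if (pageReady) {
        loadResource(url);
      } else {
        queue.push(url);
      }
    }
  };
}

// Usage sketch: nothing is fetched until pageLoaded() fires.
var fetched = [];
var prefetcher = createPrefetcher(function(url) { fetched.push(url); });
prefetcher.prefetch('/photos/next.jpg');
console.log(fetched.length); // 0: the current page is not done yet
prefetcher.pageLoaded();
console.log(fetched); // [ '/photos/next.jpg' ]
```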

Prefetch the right data

Some data is safer to prefetch than others. Images tend to be long-lived, and cache very well in the browser. Many data feeds are safe to prefetch, while others may be too time sensitive (for example, a feed of recent updates), or too frequently modified by user actions to be good prefetching candidates.

Profile your changes

Use profiling tools like Page Speed to get speed measurements before and after your change. Make sure that you’re not making the rest of your site slower by requesting too much data at once. Use those tools to tune how much and how frequently you prefetch, based on how much user time you save compared to how much network bandwidth you use.

Be a good web citizen

Your users probably have other websites open in different tabs or windows, so don’t hog all of their bandwidth. A modest amount of prefetching will make your site feel fast and make your users happy; too much will bog down the network and make your users sad. Prefetching only works when the extra data is actually used, so don’t use the bandwidth if it’s likely to get wasted.

6.Minimizing browser reflow

Author: Lindsey Simon, UX Developer

Recommended knowledge: Basic HTML, basic JavaScript, working knowledge of CSS

Reflow is the name of the web browser process for re-calculating the positions and geometries of elements in the document, for the purpose of re-rendering part or all of the document. Because reflow is a user-blocking operation in the browser, it is useful for developers to understand how to improve reflow time and also to understand the effects of various document properties (DOM depth, CSS rule efficiency, different types of style changes) on reflow time. Sometimes reflowing a single element in the document may require reflowing its parent elements and also any elements which follow it.

There are a great variety of user actions and possible DHTML changes that can trigger a reflow. Resizing the browser window, using JavaScript methods involving computed styles, adding or removing elements from the DOM, and changing an element’s classes are a few of the things that can trigger reflow. It’s also worth noting that some operations may cause more reflow time than you might have imagined – consider the data from Steve Souders’ talk “Even Faster Web Sites”:

From that data it’s clear that not all style changes made from JavaScript cause a reflow in all browsers, and that the time it takes to reflow varies. It is also somewhat clear that modern browsers are getting better at reflow times.

At Google, we test the speed of our web pages and applications in a variety of ways – and reflow is a key factor we consider when adding features to our UIs. We strive to deliver lively, interactive and delightful user experiences.


Here are some easy guidelines to help you minimize reflow in your web pages:

  1. Reduce unnecessary DOM depth. Changes at one level in the DOM tree can cause changes at every level of the tree – all the way up to the root, and all the way down into the children of the modified node. This leads to more time being spent performing reflow.
  2. Minimize CSS rules, and remove unused CSS rules.
  3. If you make complex rendering changes such as animations, do so out of the flow. Use position: absolute or position: fixed to accomplish this.
  4. Avoid unnecessary complex CSS selectors – descendant selectors in particular – which require more CPU power to do selector matching.


7.SPDY Performance on Mobile Networks

Authors: Matt Welsh, Ben Greenstein, and Michael Piatek, Mobile Web Performance team

April 30, 2012

SPDY is a replacement for HTTP, designed to speed up transfers of web pages, by eliminating much of the overhead associated with HTTP. SPDY supports out-of-order responses, header compression, server-side push, and other optimizations that give it an edge over HTTP when it comes to speed. SPDY is gaining a great deal of traction — it has been implemented in Chrome, Firefox, and Amazon Silk, been deployed widely by Google, and there is now SPDY support for Apache through the mod_spdy module.

SPDY’s design should help performance on mobile networks, which experience high round-trip times and typically lower throughput than wired networks. SPDY includes several features that should improve web page download speeds on mobile networks, including:

  1. Header compression, which eliminates redundant data for HTTP headers;
  2. Out-of-order request processing, avoiding head-of-line blocking for HTTP responses;
  3. Use of a single TCP connection for multiple requests, eliminating overheads for TCP connection establishment (which can be high on mobile networks).

We wondered what the performance of SPDY would be compared to HTTP for popular websites, using a real phone (a Samsung Galaxy Nexus running Android), a modern, SPDY-enabled browser (Chrome for Android), and a variety of pages from real websites (77 pages across 31 popular domains).

The net result is that using SPDY results in a mean page load time improvement of 23% across these sites, compared to HTTP. This is equivalent to a speedup of 1.3x for SPDY over HTTP. Much more work can be done to improve SPDY performance on 3G and 4G cellular networks, but this is a promising start. More details below.
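The relationship between the two figures is simple arithmetic: a 23% load time reduction means pages load in 77% of the original time, and 1 / 0.77 ≈ 1.3. As a sketch:

```javascript
// A reduction of 0.23 (23%) means the new load time is 77% of the old one,
// so the speedup factor is 1 / (1 - 0.23) ≈ 1.3x.
function speedupFromReduction(reduction) {
  return 1 / (1 - reduction);
}
```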


Our goal was to evaluate how SPDY performs on a real browser on a real phone when fetching popular websites. The challenge was in mitigating the variability across experiments.

  • Phone and browser configuration: We ran our experiments on Chrome for Android, because it has an up-to-date draft 2 implementation of SPDY and is the only mobile browser we know of that supports SPDY. We ran the browser on Android 4.0 (Ice Cream Sandwich) on a Samsung Galaxy Nexus phone. We used Chrome’s remote debugging interface with a custom client that starts up the browser on the phone, clears its cache and other state, initiates a web page load, and receives the Chrome developer tools messages to determine the page load times and other performance metrics.
  • Pages measured: We selected 77 URLs from a selection of 31 popular websites, to ensure a broad cross-section of both front pages and article pages across different types of sites. To ensure that the phone retrieved the same content each time it fetched a particular URL, we captured and replayed this content using the Web Page Replay tool, which eliminates the nondeterminism associated with replaying web page loads. All content was cached on our Web Page Replay server.
  • Server configuration: We needed a Web server implementation that supports SPDY. The best SPDY implementation available to us is Google’s internal web server, called the Google Front End (or GFE). The GFE was configured to proxy to the Web Page Replay server hosting the actual site content. The GFE and the Web Page Replay servers ran on separate Linux desktop machines on the same LAN segment. All Web page contents were stored on the Web Page Replay server’s local disk to eliminate additional sources of latency. The phone’s /etc/hosts file was modified to return the GFE machine’s IP address for all domain lookups, essentially isolating the phone and the desktop from the Internet. As a result, our measurements do not include realistic DNS lookup times.
  • Consistent network conditions: In prior experiments, we found page load times to be highly variable over real 3G and 4G cellular networks, making it hard to draw conclusions without running hundreds of experiments per site in order to estimate the statistical distribution. To reduce this variability, the phone was tethered to the desktop machine hosting the server using a USB connection, and traffic shaping was applied to the tethered connection using Dummynet. We emulated a 3G network with uplink bandwidth of 1 Mbps, downlink bandwidth of 2 Mbps, and a round-trip delay of 150 ms. These values were chosen as representative of cellular network performance in the United States. Note that packet loss was not included in the traffic shaping parameters, since cellular networks hide packet loss at the PHY layer, and our previous experiments have shown a TCP-level packet loss of less than 1% over typical cellular networks.
  • Data collection: For each page load, we recorded the page load time reported by the browser, as well as the detailed trace of Chrome remote debugging messages which were used to reconstruct a load time waterfall for each page, including the time to load each individual resource on the page, as well as timings for TCP connections, DNS lookups, and redirects. In addition, tcpdump was run on the phone to capture a trace of all network packets sent and received during the web page load.

Below is a diagram of our testbed.

We ran two sets of experiments:

  1. SPDY: Fetch the 77 URLs through the GFE using SPDY.
  2. HTTP: Fetch the 77 URLs through the GFE using HTTP.

Note that SPDY will use a single SSL connection per domain, whereas HTTP will open multiple parallel connections for fetching resources from the server. The SPDY measurements presented here include the SSL connection setup overhead.

Table 1: Domains included in the Web page measurements


Figure 2 shows the page load time for each of the 77 URLs using both HTTP and SPDY. As the figure shows, in all but one case, SPDY is faster than HTTP, with an average page load time reduction of 23% across all pages. For one of the URLs (an article page), the page loaded 6% slower on SPDY than on HTTP.

Figure 2: Comparison of SPDY vs. HTTP page load times

Figure 3 shows the average SPDY load time reduction for each of the measured pages. The load time reduction is calculated as (HTTP load time - SPDY load time) / (HTTP load time) for each page.

Figure 3: Page load time reduction for SPDY for each of the measured sites

In order to take a closer look at SPDY’s performance, Figures 4 and 5 show the waterfall chart for SPDY and HTTP (respectively) for one of the pages.

Figure 4: SPDY load waterfall for

Figure 5: HTTP load waterfall for

The waterfall diagrams clearly show SPDY’s main advantage over HTTP: the use of out-of-order responses. In the SPDY case, the browser opens a number of SPDY streams (over the same TCP connection) to fetch the various resources on the page, whereas in HTTP, each of the resources is fetched across several (6 in this case) TCP connections, with each connection handling requests in a FIFO fashion. Note that there are several 404 Not Found errors in both traces, owing to the Web Page Replay setup not caching all of the resources on the page.


SPDY shows promise to improve the performance of web page load times over mobile networks. Of course, it’s necessary to look across many more sites and a wider range of network conditions, but in this controlled experiment we find that SPDY yields a mean page load time reduction of 23% over HTTP, yielding a speedup of 1.3x. Website operators should consider using SPDY to speed up access to their sites from mobile devices.

9.UI messaging and perceived latency

To the typical user, speed doesn’t only mean performance. Users’ perception of your site’s speed is heavily influenced by their overall experience, including how efficiently they can get what they want out of your site and how responsive your site feels.

When designing your website or web app, keep in mind that users come to your site with a purpose. The faster (and easier) they can accomplish what they came to do, the better. If users encounter a lot of difficulty in getting to your content, they will leave your site for one that lets them accomplish their goals faster.

While there are many things you can do to save users time and make them feel that things are not as slow as they might be, this tutorial deals only with user messaging.

User messaging: 3 things to think about

1. Is my site simple and intuitive enough for a person who has never seen it before to easily use the first time?

If not, take some time to design some first-run-experience messaging.

Let’s say your site is a powerful web application with lots of features. Given that it’s not an easy task to design a completely intuitive out-of-the-box experience for this type of application, your users may need a bit of help getting started.

A first-run experience that concisely explains or shows the user what the product is and/or how to use it is extremely valuable. A little bit of time spent upfront in learning a few key things about your product can save a user a lot of time in the long run.

Warning: Don’t go overboard! Don’t block the user from getting to actual content by making the first-run experience into a cumbersome multi-step process.

2. Does this message interrupt or add steps to the user’s workflow?

Think carefully about how messages you display may lengthen the user’s workflow. There may be more appropriate times and ways to display a message that won’t keep the user from getting things done.

Consider the case where a user wants to carry out an action that you think is pretty drastic. You think it’s worth double-checking the user’s intention, to save the user who got to this point by accident. So you put up a message that says “Are you sure you want to do this?” You’ve saved the user who was about to make a big mistake, but for the user who actually wanted to commit this action, you’ve now introduced an extra step into the process. Instead, you could allow the action to be committed immediately in either case and give the user the ability to undo it after the fact.
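The “commit immediately, offer undo” pattern can be sketched as follows. `createUndoableList` and its shape are illustrative only; in a real app the undo stack would wrap whatever state the action touches:

```javascript
// Sketch: the action is committed right away (no confirmation dialog), and
// an undo stack lets the user reverse it after the fact.
function createUndoableList(items) {
  var undoStack = [];
  return {
    remove: function (index) {
      var removed = items.splice(index, 1)[0];  // commit immediately
      // Push an inverse operation so the removal can be reversed.
      undoStack.push(function () { items.splice(index, 0, removed); });
    },
    undo: function () {
      var op = undoStack.pop();
      if (op) op();
    },
    items: items
  };
}
```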

3. How can I reassure the user during wait times?

Let’s face it: there are going to be times when the user has to wait. There are, however, a few things you can do to make the inevitable wait time a little more bearable.

If the user has to wait more than a few seconds, show a progress bar. Progress bars not only indicate that the user has to wait, but also roughly how much longer the wait will take. If you want to be more specific, you can even detail how much of the action has completed (e.g. 40kb of 64kb). Try to refrain from including the estimated time to completion because, with fluctuating connection speeds, there’s nothing worse than seeing the estimated time remaining climb upwards.
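As a minimal sketch, a progress message of the “40kb of 64kb” style above can be built from bytes loaded versus total, deliberately leaving out any time estimate:

```javascript
// Sketch: format a progress message with a percentage but no ETA,
// since estimated-time-remaining figures fluctuate with connection speed.
function progressMessage(loadedKb, totalKb) {
  var percent = Math.round((loadedKb / totalKb) * 100);
  return loadedKb + 'kb of ' + totalKb + 'kb (' + percent + '%)';
}
```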

When the user has to wait less than a few seconds, show some kind of loading indicator. Loading indicators are often manifested as variations of a spinning doo-dad of sorts, but they can be something as simple as the text “Loading…”.

You might ask, why even show anything if it’s just less than a few seconds? The loading indicator gives feedback that the user’s action did in fact go through, and the site is working on it. Without any indicator, the user is left with uncertainty about whether or not it worked and may attempt to try again.

Additional resources

These suggestions are only the tip of the iceberg of things you can do to design with the user in mind. For reading material on interaction design and web design principles, here is a list of books that can get you started:


Rules to build and Optimize Web pages – Google

Avoid Landing Page Redirects

This rule triggers when PageSpeed Insights detects that you have more than one redirect from the given URL to the final landing page.


Redirects trigger an additional HTTP request-response cycle and delay page rendering. In the best case, each redirect will add a single roundtrip (HTTP request-response), and in the worst it may result in multiple additional roundtrips to perform the DNS lookup, TCP handshake, and TLS negotiation in addition to the additional HTTP request-response cycle. As a result, you should minimize use of redirects to improve site performance.

Here are some examples of redirect patterns:

  • uses responsive web design, no redirects are needed – fast and optimal!
  • → – multi-roundtrip penalty for mobile users.
  • → → – very slow mobile experience.

Avoid Plugins

This rule triggers when PageSpeed Insights detects the use of plugins on your page.


Plugins help the browser process special types of web content, such as Flash, Silverlight, and Java. Most mobile devices do not support plugins, and plugins are a leading cause of hangs, crashes, and security incidents in browsers that provide support. Due to these concerns, many desktop browsers restrict plugins:

Configure the Viewport

This rule triggers when PageSpeed Insights detects that your page does not specify a viewport, or specifies a viewport that does not adapt to different devices.


A viewport controls how a webpage is displayed on a mobile device. Without a viewport, mobile devices will render the page at a typical desktop screen width, scaled to fit the screen. Setting a viewport gives control over the page’s width and scaling on different devices.

Left: A page without a meta viewport. Right: A page with a viewport matching the device width.


Pages optimized to display well on mobile devices should include a meta viewport in the head of the document specifying width=device-width, initial-scale=1.

<meta name=viewport content="width=device-width, initial-scale=1">

Additional information


  • Hardware pixel: A physical pixel on the display. For example, an iPhone 5 has a screen with 640 horizontal hardware pixels.
  • Device-independent pixel (dip): A scaling of device pixels to match a uniform reference pixel at a normal viewing distance, which should be approximately the same size on all devices. An iPhone 5 is 320 dips wide.
  • CSS pixel: The unit used for page layout controlled by the viewport. Pixel dimensions in styles such as width: 100px are specified in CSS pixels. The ratio of CSS pixels to device independent pixels is the page’s scale factor, or zoom.

Desktop Pages on Mobile Devices

When a page does not specify a viewport, mobile browsers will render that page at a fallback width ranging from 800 to 1024 CSS pixels. The page scale factor is adjusted so that the page fits on the display, forcing users to zoom before they can interact with the page.
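The zoomed-out effect follows directly from the ratio of the device width to the fallback width; as a sketch:

```javascript
// Sketch: the scale factor a mobile browser applies to a page with no
// viewport. With an 800-1024 CSS px fallback width on a 320-dip screen,
// the user starts zoomed far out (scale well below 1).
function initialScale(deviceWidthDips, fallbackWidthCssPx) {
  return deviceWidthDips / fallbackWidthCssPx;
}
```

For example, a 320-dip-wide phone rendering at a 1024 px fallback width starts at a scale of about 0.31, so text and controls are tiny until the user zooms in.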

Meta Viewport Tag

A meta viewport tag gives the browser instructions on how to control the page’s dimensions and scaling, and should be included in the document’s head.

Fixed-Width Viewport

The viewport can be set to a specific width, such as width=320 or width=1024. While discouraged, this can be a useful stopgap to ensure pages with fixed dimensions display as expected.

Responsive Viewport

Using the meta viewport value width=device-width instructs the page to match the screen’s width in device independent pixels. This allows the page to reflow content to match different screen sizes.

Some browsers, including those on iOS and Windows Phone, will keep the page’s width constant when rotating to landscape mode, and zoom rather than reflow to fill the screen. Adding the attribute initial-scale=1 instructs browsers to establish a 1:1 relationship between CSS pixels and device independent pixels regardless of device orientation, and allows the page to take advantage of the full landscape width.

Left: An iPhone 5 rotated to landscape with width=device-width, resulting in a landscape width of 320px. Right: An iPhone 5 rotated to landscape with width=device-width, initial-scale=1, resulting in a landscape width of 568px.

Pages must be designed to work at different widths to use a responsive viewport. See our recommendations for sizing content to the viewport for advice.

Other Considerations

Avoid minimum-scale, maximum-scale, user-scalable

It is possible to set the minimum and maximum zoom, or disable the user’s ability to zoom the viewport entirely. These options negatively impact accessibility and should generally be avoided.


The meta viewport tag, while broadly supported, is not part of a formal standard. This behavior is being included in CSS as part of the CSS Device Adaptation specification. Until this specification is finalized and widely implemented, authors should continue to use the meta viewport tag for compatibility, either alone or with corresponding @viewport styles.


Enable Compression

This rule triggers when PageSpeed Insights detects that compressible resources were served without gzip compression.


All modern browsers support and automatically negotiate gzip compression for all HTTP requests. Enabling gzip compression can reduce the size of the transferred response by up to 90%, which can significantly reduce the amount of time to download the resource, reduce data usage for the client, and improve the time to first render of your pages. See text compression with GZIP to learn more.


Enable and test gzip compression support on your web server. The HTML5 Boilerplate project contains sample configuration files for all the most popular servers with detailed comments for each configuration flag and setting: find your favorite server in the list, look for the gzip section, and confirm that your server is configured with recommended settings. Alternatively, consult the documentation for your web server on how to enable compression:


PageSpeed Insights reports that many of my static content files need to be gzipped, but I have configured my web server to serve these files using gzip compression. Why is PageSpeed Insights not recognizing the compression?
Proxy servers and anti-virus software can disable compression when files are downloaded to a client machine. PageSpeed Insights’ results are based on headers that were actually returned to your client, so if you are running the analysis on a client machine that is using such anti-virus software, or that sits behind an intermediate proxy server (many proxies are transparent, and you may not even be aware of a proxy intervening between your client and web server), they may be the cause of this issue.
To determine if a proxy is the cause, you can use the PageSpeed Insights Chrome extension to examine the headers:

  1. Run PageSpeed on the page in question.
  2. Click the Show Resources tab.
  3. Expand the URL of the resource that is being flagged as uncompressed. The headers that accompanied that resource are displayed. If you see a header called Via, Forwarded, or Proxy-Connection, this indicates that a proxy has served the resource.
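Programmatically, the same check is a simple lookup over the response headers; a sketch (header names lowercased, as Node’s http module returns them):

```javascript
// Sketch: detect headers that indicate a proxy served the resource.
// `headers` is a plain object keyed by lowercased header name.
function servedViaProxy(headers) {
  return ['via', 'forwarded', 'proxy-connection'].some(function (name) {
    return name in headers;
  });
}
```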

Improve Server Response Time

This rule triggers when PageSpeed Insights detects that your server response time is above 200 ms.


Server response time measures how long it takes to load the necessary HTML to begin rendering the page from your server, subtracting out the network latency between Google and your server. There may be variance from one run to the next, but the differences should not be too large. In fact, highly variable server response time may indicate an underlying performance issue.


You should reduce your server response time to under 200 ms. There are dozens of potential factors that may slow down the response of your server: slow application logic, slow database queries, slow routing, frameworks, libraries, CPU starvation, or memory starvation. You need to consider all of these factors to improve your server’s response time. The first step to uncovering why server response time is high is to measure it. Then, with data in hand, consult the appropriate guides for how to address the problem. Once the issues are resolved, you must continue measuring your server response times and address any future performance bottlenecks.

  1. Gather and inspect existing performance and data. If none is available, evaluate using an automated web application monitoring solution (there are hosted and open source versions available for most platforms), or add custom instrumentation.
  2. Identify and fix top performance bottlenecks. If you are using a popular web framework, or content management platform, consult the documentation for performance optimization best practices.
  3. Monitor and alert for any future performance regressions!
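The custom instrumentation in step 1 can be as simple as wrapping a request handler with a timer. A minimal Node.js sketch; the `samples` array and handler shape here are illustrative, not a real framework API:

```javascript
// Sketch: wrap a handler so each invocation's duration (in ms) is recorded,
// giving data to spot response times creeping above the ~200 ms target.
function timed(handler, samples) {
  return function (req) {
    var start = process.hrtime.bigint();
    var res = handler(req);
    var ms = Number(process.hrtime.bigint() - start) / 1e6;
    samples.push(ms);
    return res;
  };
}
```

Feeding `samples` into a monitoring or alerting system then covers step 3 as well.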

Leverage Browser Caching

This rule triggers when PageSpeed Insights detects that the response from your server does not include caching headers or if the resources are specified to be cached for only a short time.


Fetching resources over the network is both slow and expensive: the download may require multiple roundtrips between the client and server, which delays processing and may block rendering of page content, and also incurs data costs for the visitor. All server responses should specify a caching policy to help the client determine if and when it can reuse a previously fetched response.


Each resource should specify an explicit caching policy that answers the following questions: whether the resource can be cached and by whom, for how long, and if applicable, how it can be efficiently revalidated when the caching policy expires. When the server returns a response it must provide the Cache-Control and ETag headers:

  • Cache-Control defines how, and for how long the individual response can be cached by the browser and other intermediate caches. To learn more, see caching with Cache-Control.
  • ETag provides a revalidation token that is automatically sent by the browser to check if the resource has changed since the last time it was requested. To learn more, see validating cached responses with ETags.

To determine the optimal caching policy for your site, please use the following guides:

We recommend a minimum cache time of one week and preferably up to one year for static assets, or assets that change infrequently. If you need precise control over when resources are invalidated we recommend using a URL fingerprinting or versioning technique – see invalidating and updating cached responses link above.

Minify Resources (HTML, CSS, and JavaScript)

This rule triggers when PageSpeed Insights detects that the size of one of your resources could be reduced through minification.


Minification refers to the process of removing unnecessary or redundant data without affecting how the resource is processed by the browser – e.g. code comments and formatting, removing unused code, using shorter variable and function names, and so on. See preprocessing & context-specific optimizations to learn more.


You should minify your HTML, CSS, and JavaScript resources. For minifying HTML, you can use PageSpeed Insights Chrome Extension to generate an optimized version of your HTML code. Run the analysis against your HTML page and browse to the ‘Minify HTML’ rule. Click on ‘See optimized content’ to get the optimized HTML code. For minifying CSS, you can try YUI Compressor and cssmin.js. For minifying JavaScript, try the Closure Compiler, JSMin or the YUI Compressor. You can create a build process that uses these tools to minify and rename the development files and save them to a production directory.
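To illustrate what minification does (and only to illustrate — the real tools above do much more, including safe variable renaming):

```javascript
// Toy sketch of CSS minification: strip comments, collapse whitespace, and
// trim spaces around punctuation. Not production-safe (e.g. it would mangle
// whitespace inside url() or content strings); use a real minifier.
function naiveMinifyCss(css) {
  return css
    .replace(/\/\*[\s\S]*?\*\//g, '')   // drop /* comments */
    .replace(/\s+/g, ' ')               // collapse runs of whitespace
    .replace(/\s*([{}:;,])\s*/g, '$1')  // trim around { } : ; ,
    .trim();
}
```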

Optimize Images

This rule triggers when PageSpeed Insights detects that the images on the page can be optimized to reduce their filesize without significantly impacting their visual quality.


Images often account for most of the downloaded bytes on a page. As a result, optimizing images can often yield some of the largest byte savings and performance improvements: the fewer bytes the browser has to download, the less competition there is for the client’s bandwidth and the faster the browser can download and render content on the screen.


Finding the optimal format and optimization strategy for your image assets requires careful analysis across many dimensions: type of data being encoded, image format capabilities, quality settings, resolution, and more. In addition, you need to consider whether some images are best served in a vector format, if the desired effects can be achieved via CSS, and how to deliver appropriately scaled assets for each type of device. To answer these and other questions please follow the image optimization guide on Web Fundamentals. For a quick overview, see the image optimization checklist.

Optimize CSS Delivery

This rule triggers when PageSpeed Insights detects that a page includes render blocking external stylesheets, which delay the time to first render.


Before the browser can render content it must process all the style and layout information for the current page. As a result, the browser will block rendering until external stylesheets are downloaded and processed, which may require multiple roundtrips and delay the time to first render. See render-tree construction, layout, and paint to learn more about the critical rendering path, and render blocking CSS for tips on how to unblock rendering and improve CSS delivery.


If the external CSS resources are small, you can insert those directly into the HTML document, which is called inlining. Inlining small CSS in this way allows the browser to proceed with rendering the page. Keep in mind if the CSS file is large, completely inlining the CSS may cause PageSpeed Insights to warn that the above-the-fold portion of your page is too large via Prioritize Visible Content. In the case of a large CSS file, you will need to identify and inline the CSS necessary for rendering the above-the-fold content and defer loading the remaining styles until after the above-the-fold content.

Example of inlining a small CSS file

If the HTML document looks like this:

    <link rel="stylesheet" href="small.css">
    <div class="blue">
      Hello, world!
    </div>
And the resource small.css is like this:

  .yellow {background-color: yellow;}
  .blue {color: blue;}
  .big { font-size: 8em; }
  .bold { font-weight: bold; }

Then you can inline critical CSS as follows:

    <style>
      .blue {color: blue;}
    </style>
    <div class="blue">
      Hello, world!
    </div>
    <script>
      var cb = function() {
        var l = document.createElement('link'); l.rel = 'stylesheet';
        l.href = 'small.css';
        var h = document.getElementsByTagName('head')[0]; h.parentNode.insertBefore(l, h);
      };
      var raf = window.requestAnimationFrame || window.mozRequestAnimationFrame ||
          window.webkitRequestAnimationFrame || window.msRequestAnimationFrame;
      if (raf) raf(cb);
      else window.addEventListener('load', cb);
    </script>

The critical styles needed to style the above-the-fold content are inlined and applied to the document immediately. The full small.css is loaded after initial painting of the page. Its styles are applied to the page once it finishes loading, without blocking the initial render of the critical content.

Note that the web platform will soon support loading stylesheets in a non-render-blocking manner, without having to resort to using JavaScript, using HTML Imports.

Don’t inline large data URIs

Be careful when inlining data URIs in CSS files. While selective use of small data URIs in your CSS may make sense, inlining large data URIs can cause the size of your above-the-fold CSS to be larger, which will slow down page render time.

Don’t inline CSS attributes

Inlining CSS attributes on HTML elements (e.g., <p style=...>) should be avoided where possible, as this often leads to unnecessary code duplication. Further, inline CSS on HTML elements is blocked by default with Content Security Policy (CSP).

Reduce the size of the above-the-fold content

This rule triggers when PageSpeed Insights detects that additional network round trips are required to render the above the fold content of the page.


If the amount of data required exceeds the initial congestion window (typically 14.6kB compressed), it will require additional round trips between your server and the user’s browser. For users on networks with high latencies such as mobile networks this can cause significant delays to page loading.
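Under a simplified slow-start model (an initial congestion window of 14.6kB that doubles each round trip), the number of round trips needed to deliver a given payload can be sketched as:

```javascript
// Sketch: estimate round trips for a payload under simplified TCP slow
// start. Content fitting in the initial 14.6kB window needs one round trip;
// each subsequent round trip can carry roughly twice as much.
function roundTripsFor(bytes, initialCwnd) {
  var cwnd = initialCwnd || 14600;
  var sent = 0, trips = 0;
  while (sent < bytes) {
    sent += cwnd;
    cwnd *= 2;
    trips += 1;
  }
  return trips;
}
```

On a mobile network with 150 ms round-trip latency, each extra round trip adds a noticeable delay before above-the-fold content can render, which is why staying under the initial window matters.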


To make pages load faster, limit the size of the data (HTML markup, images, CSS, JavaScript) that is needed to render the above-the-fold content of your page. There are several ways to do this:

Structure your HTML to load the critical, above-the-fold content first

Load the main content of your page first. Structure your page so the initial response from your server sends the data necessary to render the critical part of the page immediately and defer the rest. This may mean that you must split your CSS into two parts: an inline part that is responsible for styling the above-the-fold portion of the content, and the part that can be deferred.

Consider the following examples of how a site could be restructured to load faster:

  • If your HTML loads third-party widgets before it loads the main content, change the order to load the main content first.
  • If your site uses a two-column design with a navigation sidebar and an article, but your HTML loads the sidebar before the article, consider loading the article first.

Reduce the amount of data used by your resources

Once your site has been redesigned to work well across multiple devices and load the critical content first, use the following techniques to reduce the amount of data required to render your page:

Remove Render-Blocking JavaScript

This rule triggers when PageSpeed Insights detects that your HTML references a blocking external JavaScript file in the above-the-fold portion of your page.


Before the browser can render a page it has to build the DOM tree by parsing the HTML markup. During this process, whenever the parser encounters a script it has to stop and execute it before it can continue parsing the HTML. In the case of an external script the parser is also forced to wait for the resource to download, which may incur one or more network roundtrips and delay the time to first render of the page. See Adding Interactivity with JavaScript to learn more about how JavaScript affects the critical rendering path.


You should avoid and minimize the use of blocking JavaScript, especially external scripts that must be fetched before they can be executed. Scripts that are necessary to render page content can be inlined to avoid extra network requests, however the inlined content needs to be small and must execute quickly to deliver good performance. Scripts that are not critical to initial render should be made asynchronous or deferred until after the first render. Please keep in mind that for this to improve your loading time, you must also optimize CSS delivery.

Inline JavaScript

External blocking scripts force the browser to wait for the JavaScript to be fetched, which may add one or more network roundtrips before the page can be rendered. If the external scripts are small, you can inline their contents directly into the HTML document and avoid the network request latency. For example, if the HTML document looks like this:

    <script type="text/javascript" src="small.js"></script>
    <div>
      Hello, world!
    </div>

And the resource small.js is like this:

  /* contents of a small JavaScript file */

Then you can inline the script as follows:

    <script type="text/javascript">
      /* contents of a small JavaScript file */
    </script>
    <div>
      Hello, world!
    </div>

Inlining the script contents eliminates the external request for small.js and allows the browser to deliver a faster time to first render. However, note that inlining also increases the size of the HTML document and that the same script contents may need to be inlined across multiple pages. As a result, you should only inline small scripts to deliver best performance.

Make JavaScript Asynchronous

By default JavaScript blocks DOM construction and thus delays the time to first render. To prevent JavaScript from blocking the parser we recommend using the HTML async attribute on external scripts. For example:

<script async src="my.js"></script>

See Parser Blocking vs. Asynchronous JavaScript to learn more about asynchronous scripts. Note that asynchronous scripts are not guaranteed to execute in specified order and should not use document.write. Scripts that depend on execution order or need to access or modify the DOM or CSSOM of the page may need to be rewritten to account for these constraints.

Defer loading of JavaScript

The loading and execution of scripts that are not necessary for the initial page render may be deferred until after the initial render or other critical parts of the page have finished loading. Doing so can help reduce resource contention and improve performance.
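For example, a script that only powers below-the-fold behavior can be marked with the defer attribute (the filename here is illustrative):

```html
<!-- Deferred: downloads in parallel, executes in document order after parsing completes -->
<script defer src="widgets.js"></script>
```

Unlike async, deferred scripts preserve their relative execution order, which makes defer a safer choice when scripts depend on each other.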


What if I am using a JavaScript library such as jQuery?
Many JavaScript libraries, such as jQuery, are used to enhance the page with additional interactivity, animations, and other effects. However, many of these behaviors can be safely added after the above-the-fold content is rendered. Investigate making such JavaScript asynchronous or deferring its loading.
What if I’m using a JavaScript framework to construct the page?
If the content of the page is constructed by client-side JavaScript, then you should investigate inlining the relevant JavaScript modules to avoid extra network roundtrips. Similarly, leveraging server-side rendering can significantly improve first page load performance: render JavaScript templates on the server to deliver a fast first render, and then use client-side templating once the page is loaded.

Size Content to Viewport

This rule triggers when PageSpeed Insights detects that the page content does not fit horizontally within the specified viewport size, thus forcing the user to pan horizontally to view all the content.


On both desktop and mobile devices, users are used to scrolling websites vertically but not horizontally, and forcing the user to scroll horizontally or to zoom out in order to see the whole page results in a poor user experience.

When developing a mobile site with a meta viewport tag, it is easy to accidentally create page content that doesn’t quite fit within the specified viewport. For example, an image that is displayed at a width wider than the viewport can cause the viewport to scroll horizontally. You should adjust this content to fit within the width of the viewport, so that the user does not need to scroll horizontally.


Since screen dimensions vary widely between devices (e.g. between phones and tablets, and even between different phones), you should configure the viewport so that your pages render correctly on many different devices. However, since the width (in CSS pixels) of the viewport may vary, your page content should not rely on a particular viewport width to render well.

  • Avoid setting large absolute CSS widths for page elements (such as div{width:360px;}), since this may cause the element to be too wide for the viewport on a narrower device (e.g. a device with a width of 320 CSS pixels, such as an iPhone). Instead, consider using relative width values, such as width:100%. Similarly, beware of using large absolute positioning values that may cause the element to fall outside the viewport on small screens.
  • If necessary, CSS media queries can be used to apply different styling for small and large screens. This Web Fundamentals article provides further recommendations on how to go about this.
  • For images, this article provides a nice overview on how to serve responsively-sized images without incurring unnecessary page reflows during rendering.
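The recommendations above can be sketched in CSS as follows (class names and the 480px breakpoint are illustrative):

```css
/* Relative width: the element adapts to any viewport instead of forcing
   horizontal scrolling on narrow devices */
.container {
  width: 100%;
}

/* Media query: adjust styling for small screens */
@media (max-width: 480px) {
  .container {
    padding: 8px;
  }
}

/* Images never overflow their container */
img {
  max-width: 100%;
  height: auto;
}
```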

Size Tap Targets Appropriately

This rule triggers when PageSpeed Insights detects that certain tap targets (e.g. buttons, links, or form fields) may be too small or too close together for a user to easily tap on a touchscreen.


Small or tightly packed links or buttons are more difficult for users to accurately press on a touchscreen than with a traditional mouse cursor. To prevent users from being frustrated by accidentally hitting the wrong ones, tap targets should be made sufficiently large and far from other tap targets that a user can press them without their finger pad overlapping any other tap targets. The average adult finger pad size is about 10mm wide (a bit less than half an inch), and the Android UI guidelines recommend a minimum tap target size of roughly 7mm, or 48 CSS pixels on a site with a properly-set mobile viewport.


You should ensure that the most important tap targets on your site—the ones users will be using the most often—are large enough to be easy to press, at least 48 CSS pixels tall/wide (assuming you have configured your viewport properly). Less frequently-used links can be smaller, but should still have spacing between them and other links, so that a 10mm finger pad would not accidentally press both links at once. Users should not have to pinch zoom (or rely on other browser UI features for disambiguating finger taps, such as Chrome’s popup magnifying glass) in order to easily and reliably press the desired button or link.

Make important tap targets large enough to be easy to press

This applies to the tap targets your users will use the most, such as buttons for frequently-used actions, search bars and other important form fields, and primary navigational links. These tap targets should be at least 7mm (48 CSS pixels if you have configured your viewport properly), and should have additional spacing around them if they are any smaller than 7mm.

Ensure there is extra spacing between smaller tap targets

It is reasonable for infrequently-used links or buttons to be smaller than the recommended size of 7mm, but there should still be no other tap targets within 5mm (32 CSS pixels), both horizontally and vertically, so that a user’s finger pressing on one tap target will not inadvertently touch another tap target.
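A minimal sketch of these size and spacing rules (the selector names are illustrative):

```css
/* Primary tap targets: at least 48x48 CSS pixels (~7mm) */
nav a,
button.primary {
  min-width: 48px;
  min-height: 48px;
}

/* Smaller secondary links: keep roughly 32 CSS pixels (~5mm) of
   clearance so a finger pad cannot hit two targets at once */
ul.footer-links li {
  margin: 32px 0;
}
```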

Use Legible Font Sizes

This rule triggers when PageSpeed Insights detects that text in the page is too small to be legible.


Web font size can be specified via four common units: pixels (px), points (pt), EMs (em), and percent (%).

  • Pixels are “CSS pixels” and vary based on device size and density.
  • Points are defined in relation to pixels. A single pixel is 0.75 points*.
  • EMs and percent are “relative” units: they are relative to the inherited size and properties of the font being used. 1 EM is equivalent to 100%.

* See additional notes.

Additionally, the viewport impacts how fonts are scaled on mobile devices. A page without a properly configured viewport is scaled down on mobile devices, often resulting in the text on the page being illegible due to its small size.


First, configure a viewport to make sure fonts will be scaled as expected across various devices. Once you’ve configured a viewport, implement the additional recommendations below.

  1. Use a base font size of 16 CSS pixels. Adjust the size as needed based on properties of the font being used.
  2. Use sizes relative to the base size to define the typographic scale.
  3. Text needs vertical space between lines and may need to be adjusted for each font. The general recommendation is to use the browser default line-height of 1.2.
  4. Restrict the number of fonts used and the typographic scale: too many fonts and font sizes lead to messy and overly complex page layouts.

For example, the following CSS snippet defines a baseline font size of 16 CSS pixels, with CSS class ‘small’ that has a font size of 75% the baseline font (12 CSS pixels), and CSS class ‘large’ that has a font size of 125% the baseline font (20 CSS pixels):

body {
  font-size: 16px;
}

.small {
  font-size: 12px; /* 75% of the baseline */
}

.large {
  font-size: 20px; /* 125% of the baseline */
}
For additional font recommendations applicable for mobile devices, consult the Android typography guidelines.

Additional information

The CSS 2.1 specification requires careful reading to understand how length units are defined. It contains further units not mentioned here: inches, centimeters, millimeters, and picas. What is easy to miss is that a CSS inch is not always equal to a physical inch.

All absolute units are defined in relation to each other. 1 pixel is .75 points; 1 point is 1/72nd of an inch; 1 inch is 2.54 centimeters; etc. However, it is up to the device to decide how to “anchor” these units. For instance, when printing on paper, 1 inch is anchored at 1 physical inch, and all other units should be relative to the physical inch. When displayed on a mobile phone, however, devices anchor with what is known as the “reference pixel”. 1 CSS pixel is defined according to this reference pixel, and all other units – inches, centimeters, etc – are adjusted relative to the CSS pixel. Therefore, on a mobile phone, 1 CSS inch is typically displayed at a size smaller than 1 real, physical inch.

We recommend defining your font sizes in pixels for this reason. Inches and points can mislead designers and developers: they look like physical dimensions, but on screen they do not necessarily correspond to real-world sizes. A pixel, by contrast, is understood to change physical size according to the device it is displayed on.
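The fixed ratios between CSS absolute units can be expressed directly. This hypothetical helper (not part of any CSS API) illustrates the arithmetic:

```javascript
// CSS absolute-unit ratios from the spec: 1px = 0.75pt, 1in = 72pt, 1in = 2.54cm
const PT_PER_PX = 0.75;
const PT_PER_IN = 72;
const CM_PER_IN = 2.54;

function pxToPt(px) { return px * PT_PER_PX; }
function pxToIn(px) { return pxToPt(px) / PT_PER_IN; }
function pxToCm(px) { return pxToIn(px) * CM_PER_IN; }

console.log(pxToPt(16)); // 12: the default 16px body font is 12pt
console.log(pxToIn(96)); // 1: 96 CSS pixels make one CSS inch
```

Remember that a CSS inch computed this way anchors to the reference pixel on screens, not to a physical inch.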

Finally, each font also has its own characteristics: size, spacing, and so on. By default, the browser will display each font at 16px (CSS pixels). This is a good default for most cases, but it may need to be adjusted for a specific font – i.e. the default size can be set lower or higher to accommodate the properties of that font. Then, once the default size is set, larger and smaller fonts should be defined relative to the default size using pixels. These can then be used to adjust the size of the text for primary, secondary, and other types of content on the page.

Some mobile browsers may attempt to scale fonts for pages without a properly configured viewport. This scaling behavior varies between browsers and should not be relied upon to deliver legible fonts on mobile devices. PageSpeed Insights displays the text on your page without browser-specific font scaling applied.

Best Practices for Speeding Up Your Web Site – Yahoo


  1. Make Fewer HTTP Requests
  2. Reduce DNS Lookups
  3. Avoid Redirects
  4. Make Ajax Cacheable
  5. Postload Components
  6. Preload Components
  7. Reduce the Number of DOM Elements
  8. Split Components Across Domains
  9. Minimize Number of iframes
  10. Avoid 404s

Server ::

  1. Use a Content Delivery Network (CDN)
  2. Add Expires or Cache-Control Header
  3. Gzip Components
  4. Configure ETags
  5. Flush Buffer Early
  6. Use GET for Ajax Requests
  7. Avoid Empty Image src

Cookie ::

  1. Reduce Cookie Size
  2. Use Cookie-Free Domains for Components

CSS ::

  1. Put Stylesheets at Top
  2. Avoid CSS Expressions
  3. Choose <link> Over @import
  4. Avoid Filters

JavaScript ::

  1. Put Scripts at Bottom
  2. Make JavaScript and CSS External
  3. Minify JavaScript and CSS
  4. Remove Duplicate Scripts
  5. Minimize DOM Access
  6. Develop Smart Event Handlers

Images ::

  1. Optimize Images
  2. Optimize CSS Sprites
  3. Do Not Scale Images in HTML
  4. Make favicon.ico Small and Cacheable

Mobile ::

  1. Keep Components Under 25 KB
  2. Pack Components Into a Multipart Document

Minimize HTTP Requests

tag: content

80% of the end-user response time is spent on the front-end. Most of this time is tied up in downloading all the components in the page: images, stylesheets, scripts, Flash, etc. Reducing the number of components in turn reduces the number of HTTP requests required to render the page. This is the key to faster pages.

One way to reduce the number of components in the page is to simplify the page’s design. But is there a way to build pages with richer content while also achieving fast response times? Here are some techniques for reducing the number of HTTP requests, while still supporting rich page designs.

Combined files are a way to reduce the number of HTTP requests by combining all scripts into a single script, and similarly combining all CSS into a single stylesheet. Combining files is more challenging when the scripts and stylesheets vary from page to page, but making this part of your release process improves response times.

CSS Sprites are the preferred method for reducing the number of image requests. Combine your background images into a single image and use the CSS background-image and background-position properties to display the desired image segment.
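For example (the image path and offsets are illustrative), a single sprites.png holding two 16x16 icons side by side can serve both with one request:

```css
.icon {
  background-image: url('sprites.png'); /* one request for all icons */
  width: 16px;
  height: 16px;
}
.icon-home { background-position: 0 0; }     /* first icon in the sprite */
.icon-mail { background-position: -16px 0; } /* second icon, shifted left */
```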

Image maps combine multiple images into a single image. The overall size is about the same, but reducing the number of HTTP requests speeds up the page. Image maps only work if the images are contiguous in the page, such as in a navigation bar. Defining the coordinates of image maps can be tedious and error prone, and image maps used for navigation are also not accessible, so this approach is not recommended.

Inline images use the data: URL scheme to embed the image data in the actual page. This can increase the size of your HTML document. Combining inline images into your (cached) stylesheets is a way to reduce HTTP requests and avoid increasing the size of your pages. Inline images are not yet supported across all major browsers.

Reducing the number of HTTP requests in your page is the place to start. This is the most important guideline for improving performance for first time visitors. As described in Tenni Theurer’s blog post Browser Cache Usage – Exposed!, 40-60% of daily visitors to your site come in with an empty cache. Making your page fast for these first time visitors is key to a better user experience.

Use a Content Delivery Network

tag: server

The user’s proximity to your web server has an impact on response times. Deploying your content across multiple, geographically dispersed servers will make your pages load faster from the user’s perspective. But where should you start?

As a first step to implementing geographically dispersed content, don’t attempt to redesign your web application to work in a distributed architecture. Depending on the application, changing the architecture could include daunting tasks such as synchronizing session state and replicating database transactions across server locations. Attempts to reduce the distance between users and your content could be delayed by, or never pass, this application architecture step.

Remember that 80-90% of the end-user response time is spent downloading all the components in the page: images, stylesheets, scripts, Flash, etc. This is the Performance Golden Rule. Rather than starting with the difficult task of redesigning your application architecture, it’s better to first disperse your static content. This not only achieves a bigger reduction in response times, but it’s easier thanks to content delivery networks.

A content delivery network (CDN) is a collection of web servers distributed across multiple locations to deliver content more efficiently to users. The server selected for delivering content to a specific user is typically based on a measure of network proximity. For example, the server with the fewest network hops or the server with the quickest response time is chosen.

Some large Internet companies own their own CDN, but it’s cost-effective to use a CDN service provider, such as Akamai Technologies, EdgeCast, or Level 3. For start-up companies and private web sites, the cost of a CDN service can be prohibitive, but as your target audience grows larger and becomes more global, a CDN is necessary to achieve fast response times. At Yahoo!, properties that moved static content off their application web servers to a CDN (both third-party services, as mentioned above, and Yahoo!’s own CDN) improved end-user response times by 20% or more. Switching to a CDN is a relatively easy code change that will dramatically improve the speed of your web site.

Add an Expires or a Cache-Control Header

tag: server

There are two aspects to this rule:

  • For static components: implement “Never expire” policy by setting far future Expires header
  • For dynamic components: use an appropriate Cache-Control header to help the browser with conditional requests

Web page designs are getting richer and richer, which means more scripts, stylesheets, images, and Flash in the page. A first-time visitor to your page may have to make several HTTP requests, but by using the Expires header you make those components cacheable. This avoids unnecessary HTTP requests on subsequent page views. Expires headers are most often used with images, but they should be used on all components including scripts, stylesheets, and Flash components.

Browsers (and proxies) use a cache to reduce the number and size of HTTP requests, making web pages load faster. A web server uses the Expires header in the HTTP response to tell the client how long a component can be cached. This is a far future Expires header, telling the browser that this response won’t be stale until April 15, 2010.

      Expires: Thu, 15 Apr 2010 20:00:00 GMT

If your server is Apache, use the ExpiresDefault directive to set an expiration date relative to the current date. This example of the ExpiresDefault directive sets the Expires date 10 years out from the time of the request.

      ExpiresDefault "access plus 10 years"

Keep in mind, if you use a far future Expires header you have to change the component’s filename whenever the component changes. At Yahoo! we often make this step part of the build process: a version number is embedded in the component’s filename, for example, yahoo_2.0.6.js.

Using a far future Expires header affects page views only after a user has already visited your site. It has no effect on the number of HTTP requests when a user visits your site for the first time and the browser’s cache is empty. Therefore the impact of this performance improvement depends on how often users hit your pages with a primed cache. (A “primed cache” already contains all of the components in the page.) We measured this at Yahoo! and found the number of page views with a primed cache is 75-85%. By using a far future Expires header, you increase the number of components that are cached by the browser and re-used on subsequent page views without sending a single byte over the user’s Internet connection.
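Both aspects of the rule can be sketched in Apache configuration (assuming mod_expires and mod_headers are enabled; the file patterns are illustrative):

```apache
# Static, versioned components: far future Expires header
<FilesMatch "\.(js|css|png|jpg)$">
  ExpiresActive On
  ExpiresDefault "access plus 10 years"
</FilesMatch>

# Dynamic HTML: let the browser cache, but force revalidation
<FilesMatch "\.html$">
  Header set Cache-Control "private, must-revalidate"
</FilesMatch>
```

Remember that the far-future rule only works alongside versioned filenames (e.g. yahoo_2.0.6.js), so a new release busts the cache.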

Gzip Components

tag: server

The time it takes to transfer an HTTP request and response across the network can be significantly reduced by decisions made by front-end engineers. It’s true that the end-user’s bandwidth speed, Internet service provider, proximity to peering exchange points, etc. are beyond the control of the development team. But there are other variables that affect response times. Compression reduces response times by reducing the size of the HTTP response.

Starting with HTTP/1.1, web clients indicate support for compression with the Accept-Encoding header in the HTTP request.

      Accept-Encoding: gzip, deflate

If the web server sees this header in the request, it may compress the response using one of the methods listed by the client. The web server notifies the web client of this via the Content-Encoding header in the response.

      Content-Encoding: gzip

Gzip is the most popular and effective compression method at this time. It was developed by the GNU project and standardized by RFC 1952. The only other compression format you’re likely to see is deflate, but it’s less effective and less popular.

Gzipping generally reduces the response size by about 70%. Approximately 90% of today’s Internet traffic travels through browsers that claim to support gzip. If you use Apache, the module configuring gzip depends on your version: Apache 1.3 uses mod_gzip while Apache 2.x uses mod_deflate.

There are known issues with browsers and proxies that may cause a mismatch in what the browser expects and what it receives with regard to compressed content. Fortunately, these edge cases are dwindling as the use of older browsers drops off. The Apache modules help out by adding appropriate Vary response headers automatically.

Servers choose what to gzip based on file type, but are typically too limited in what they decide to compress. Most web sites gzip their HTML documents. It’s also worthwhile to gzip your scripts and stylesheets, but many web sites miss this opportunity. In fact, it’s worthwhile to compress any text response including XML and JSON. Image and PDF files should not be gzipped because they are already compressed. Trying to gzip them not only wastes CPU but can potentially increase file sizes.
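In Apache 2.x, for instance, a minimal mod_deflate sketch that compresses the text responses mentioned above:

```apache
# Compress all common text responses, not just HTML
AddOutputFilterByType DEFLATE text/html text/css application/javascript application/xml application/json
```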

Gzipping as many file types as possible is an easy way to reduce page weight and accelerate the user experience.

Put Stylesheets at the Top

tag: css

While researching performance at Yahoo!, we discovered that moving stylesheets to the document HEAD makes pages appear to be loading faster. This is because putting stylesheets in the HEAD allows the page to render progressively.

Front-end engineers that care about performance want a page to load progressively; that is, we want the browser to display whatever content it has as soon as possible. This is especially important for pages with a lot of content and for users on slower Internet connections. The importance of giving users visual feedback, such as progress indicators, has been well researched and documented. In our case the HTML page is the progress indicator! When the browser loads the page progressively the header, the navigation bar, the logo at the top, etc. all serve as visual feedback for the user who is waiting for the page. This improves the overall user experience.

The problem with putting stylesheets near the bottom of the document is that it prohibits progressive rendering in many browsers, including Internet Explorer. These browsers block rendering to avoid having to redraw elements of the page if their styles change. The user is stuck viewing a blank white page.

The HTML specification clearly states that stylesheets are to be included in the HEAD of the page: “Unlike A, [LINK] may only appear in the HEAD section of a document, although it may appear any number of times.” Neither of the alternatives, the blank white screen or flash of unstyled content, are worth the risk. The optimal solution is to follow the HTML specification and load your stylesheets in the document HEAD.

Put Scripts at the Bottom

tag: javascript

The problem caused by scripts is that they block parallel downloads. The HTTP/1.1 specification suggests that browsers download no more than two components in parallel per hostname. If you serve your images from multiple hostnames, you can get more than two downloads to occur in parallel. While a script is downloading, however, the browser won’t start any other downloads, even on different hostnames.

In some situations it’s not easy to move scripts to the bottom. If, for example, the script uses document.write to insert part of the page’s content, it can’t be moved lower in the page. There might also be scoping issues. In many cases, there are ways to work around these situations.

An alternative suggestion that often comes up is to use deferred scripts. The DEFER attribute indicates that the script does not contain document.write, and is a clue to browsers that they can continue rendering. Unfortunately, Firefox doesn’t support the DEFER attribute. In Internet Explorer, the script may be deferred, but not as much as desired. If a script can be deferred, it can also be moved to the bottom of the page. That will make your web pages load faster.

Avoid CSS Expressions

tag: css

CSS expressions are a powerful (and dangerous) way to set CSS properties dynamically. They were supported in Internet Explorer starting with version 5, but were deprecated starting with IE8. As an example, the background color could be set to alternate every hour using CSS expressions:

      background-color: expression( (new Date()).getHours()%2 ? "#B8D4FF" : "#F08A00" );

As shown here, the expression method accepts a JavaScript expression. The CSS property is set to the result of evaluating the JavaScript expression. The expression method is ignored by other browsers, so it is useful for setting properties in Internet Explorer needed to create a consistent experience across browsers.

The problem with expressions is that they are evaluated more frequently than most people expect. Not only are they evaluated when the page is rendered and resized, but also when the page is scrolled and even when the user moves the mouse over the page. Adding a counter to the CSS expression allows us to keep track of when and how often a CSS expression is evaluated. Moving the mouse around the page can easily generate more than 10,000 evaluations.

One way to reduce the number of times your CSS expression is evaluated is to use one-time expressions, where the first time the expression is evaluated it sets the style property to an explicit value, which replaces the CSS expression. If the style property must be set dynamically throughout the life of the page, using event handlers instead of CSS expressions is an alternative approach. If you must use CSS expressions, remember that they may be evaluated thousands of times and could affect the performance of your page.
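The one-time expression technique for the alternating-background example above can be sketched as follows (IE-only; the helper name altBgcolor is illustrative). The helper assigns an explicit style value, which overwrites the expression so it never runs again:

```html
<style type="text/css">
  /* Evaluated once: the helper replaces the expression with a static value */
  body { background-color: expression( altBgcolor(this) ); }
</style>
<script type="text/javascript">
  function altBgcolor(elem) {
    elem.style.backgroundColor =
        (new Date()).getHours() % 2 ? "#B8D4FF" : "#F08A00";
  }
</script>
```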

Make JavaScript and CSS External

tag: javascript, css

Many of these performance rules deal with how external components are managed. However, before these considerations arise you should ask a more basic question: Should JavaScript and CSS be contained in external files, or inlined in the page itself?

Using external files in the real world generally produces faster pages because the JavaScript and CSS files are cached by the browser. JavaScript and CSS that are inlined in HTML documents get downloaded every time the HTML document is requested. This reduces the number of HTTP requests that are needed, but increases the size of the HTML document. On the other hand, if the JavaScript and CSS are in external files cached by the browser, the size of the HTML document is reduced without increasing the number of HTTP requests.

The key factor, then, is the frequency with which external JavaScript and CSS components are cached relative to the number of HTML documents requested. This factor, although difficult to quantify, can be gauged using various metrics. If users on your site have multiple page views per session and many of your pages re-use the same scripts and stylesheets, there is a greater potential benefit from cached external files.

Many web sites fall in the middle of these metrics. For these sites, the best solution generally is to deploy the JavaScript and CSS as external files. The only exception where inlining is preferable is with home pages, such as Yahoo!’s front page and My Yahoo!. Home pages that have few (perhaps only one) page view per session may find that inlining JavaScript and CSS results in faster end-user response times.

For front pages that are typically the first of many page views, there are techniques that leverage the reduction of HTTP requests that inlining provides, as well as the caching benefits achieved through using external files. One such technique is to inline JavaScript and CSS in the front page, but dynamically download the external files after the page has finished loading. Subsequent pages would reference the external files that should already be in the browser’s cache.
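The post-onload download technique can be sketched like this (the filenames common.js and common.css are illustrative):

```html
<script type="text/javascript">
  // Critical JavaScript and CSS are inlined above. After onload, fetch the
  // external copies so the browser caches them for subsequent pages.
  window.onload = function() {
    var js = document.createElement("script");
    js.src = "common.js";
    document.getElementsByTagName("head")[0].appendChild(js);

    var css = document.createElement("link");
    css.rel = "stylesheet";
    css.href = "common.css";
    document.getElementsByTagName("head")[0].appendChild(css);
  };
</script>
```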

Reduce DNS Lookups

tag: content

The Domain Name System (DNS) maps hostnames to IP addresses, just as phonebooks map people’s names to their phone numbers. When you type a URL into your browser, a DNS resolver contacted by the browser returns that server’s IP address. DNS has a cost: it typically takes 20-120 milliseconds to look up the IP address for a given hostname, and the browser can’t download anything from that hostname until the lookup is completed.

DNS lookups are cached for better performance. This caching can occur on a special caching server, maintained by the user’s ISP or local area network, but there is also caching that occurs on the individual user’s computer. The DNS information remains in the operating system’s DNS cache (the “DNS Client service” on Microsoft Windows). Most browsers have their own caches, separate from the operating system’s cache. As long as the browser keeps a DNS record in its own cache, it doesn’t bother the operating system with a request for the record.

Internet Explorer caches DNS lookups for 30 minutes by default, as specified by the DnsCacheTimeout registry setting. Firefox caches DNS lookups for 1 minute, controlled by the network.dnsCacheExpiration configuration setting. (Fasterfox changes this to 1 hour.)

When the client’s DNS cache is empty (for both the browser and the operating system), the number of DNS lookups is equal to the number of unique hostnames in the web page. This includes the hostnames used in the page’s URL, images, script files, stylesheets, Flash objects, etc. Reducing the number of unique hostnames reduces the number of DNS lookups.

Reducing the number of unique hostnames has the potential to reduce the amount of parallel downloading that takes place in the page. Avoiding DNS lookups cuts response times, but reducing parallel downloads may increase response times. My guideline is to split these components across at least two but no more than four hostnames. This results in a good compromise between reducing DNS lookups and allowing a high degree of parallel downloads.

Minify JavaScript and CSS

tag: javascript, css

Minification is the practice of removing unnecessary characters from code to reduce its size, thereby improving load times. When code is minified, all comments are removed, as well as unneeded white space characters (space, newline, and tab). In the case of JavaScript, this improves response time performance because the size of the downloaded file is reduced. Two popular tools for minifying JavaScript code are JSMin and the YUI Compressor. The YUI Compressor can also minify CSS.
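As a toy illustration of what minification removes (real minifiers such as JSMin parse the source; the regex approach below would mangle a string literal containing comment-like text):

```javascript
// Naive sketch of minification: strip comments, collapse whitespace.
// Illustration only -- not safe for production use.
function naiveMinify(src) {
  return src
    .replace(/\/\*[\s\S]*?\*\//g, "") // remove block comments
    .replace(/\/\/[^\n]*/g, "")       // remove line comments
    .replace(/\s+/g, " ")             // collapse runs of whitespace
    .trim();
}

console.log(naiveMinify("var a = 1; // counter\nvar b = 2;"));
// "var a = 1; var b = 2;"
```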

Obfuscation is an alternative optimization that can be applied to source code. It’s more complex than minification and thus more likely to generate bugs as a result of the obfuscation step itself. In a survey of ten top U.S. web sites, minification achieved a 21% size reduction versus 25% for obfuscation. Although obfuscation has a higher size reduction, minifying JavaScript is less risky.

In addition to minifying external scripts and styles, inlined <script> and <style> blocks can and should also be minified. Even if you gzip your scripts and styles, minifying them will still reduce the size by 5% or more. As the use and size of JavaScript and CSS increases, so will the savings gained by minifying your code.

Avoid Redirects

tag: content

Redirects are accomplished using the 301 and 302 status codes. Here’s an example of the HTTP headers in a 301 response:

      HTTP/1.1 301 Moved Permanently
      Content-Type: text/html

The browser automatically takes the user to the URL specified in the Location field. All the information necessary for a redirect is in the headers. The body of the response is typically empty. Despite their names, neither a 301 nor a 302 response is cached in practice unless additional headers, such as Expires or Cache-Control, indicate it should be. The meta refresh tag and JavaScript are other ways to direct users to a different URL, but if you must do a redirect, the preferred technique is to use the standard 3xx HTTP status codes, primarily to ensure the back button works correctly.

The main thing to remember is that redirects slow down the user experience. Inserting a redirect between the user and the HTML document delays everything in the page since nothing in the page can be rendered and no components can start being downloaded until the HTML document has arrived.
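When a redirect is unavoidable, the mechanics above amount to setting the status code and a Location header and sending an empty body. A sketch of this in a Node.js-style handler (the response object is a plain stand-in for `http.ServerResponse`):

```javascript
// Minimal sketch of issuing a 3xx redirect: the status code and the
// Location header carry all the information; the body stays empty.
function redirect(res, location, permanent) {
  res.statusCode = permanent ? 301 : 302;
  res.setHeader('Location', location); // the browser follows this URL
  res.end();                           // empty body
}
```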

One of the most wasteful redirects happens frequently, and web developers are generally not aware of it. It occurs when a trailing slash (/) is missing from a URL that should otherwise have one. Requesting such a URL results in a 301 response containing a redirect to the same URL with the trailing slash added. This is fixed in Apache by using Alias or mod_rewrite, or the DirectorySlash directive if you’re using Apache handlers.

Connecting an old web site to a new one is another common use for redirects. Others include connecting different parts of a website and directing the user based on certain conditions (type of browser, type of user account, etc.). Using a redirect to connect two web sites is simple and requires little additional coding. Although using redirects in these situations reduces the complexity for developers, it degrades the user experience. Alternatives for this use of redirects include using Alias and mod_rewrite if the two code paths are hosted on the same server. If a domain name change is the cause of using redirects, an alternative is to create a CNAME (a DNS record that creates an alias pointing from one domain name to another) in combination with Alias or mod_rewrite.

Remove Duplicate Scripts

tag: javascript

It hurts performance to include the same JavaScript file twice in one page. This isn’t as unusual as you might think. A review of the ten top U.S. web sites shows that two of them contain a duplicated script. Two main factors increase the odds of a script being duplicated in a single web page: team size and number of scripts. When it does happen, duplicate scripts hurt performance by creating unnecessary HTTP requests and wasted JavaScript execution.

Unnecessary HTTP requests happen in Internet Explorer, but not in Firefox. In Internet Explorer, if an external script is included twice and is not cacheable, it generates two HTTP requests during page loading. Even if the script is cacheable, extra HTTP requests occur when the user reloads the page.

In addition to generating wasteful HTTP requests, time is wasted evaluating the script multiple times. This redundant JavaScript execution happens in both Firefox and Internet Explorer, regardless of whether the script is cacheable.

One way to avoid accidentally including the same script twice is to implement a script management module in your templating system. The typical way to include a script is to use the SCRIPT tag in your HTML page.

      <script type="text/javascript" src="menu_1.0.17.js"></script>

An alternative in PHP would be to create a function called insertScript.

      <?php insertScript("menu.js") ?>

In addition to preventing the same script from being inserted multiple times, this function could handle other issues with scripts, such as dependency checking and adding version numbers to script filenames to support far future Expires headers.
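The same script-manager idea can be sketched in JavaScript (the version number and filename scheme are hypothetical): track what has already been emitted, refuse duplicates, and append a version to the filename so far-future Expires headers stay safe.

```javascript
// Sketch of a script manager that refuses to emit the same script twice
// and versions filenames to support far-future Expires headers.
var insertedScripts = {};                      // what has been emitted already
var scriptVersions = { 'menu.js': '1.0.17' };  // hypothetical version table

function insertScript(name) {
  if (insertedScripts[name]) {
    return '';                                 // duplicate: emit nothing
  }
  insertedScripts[name] = true;
  var version = scriptVersions[name];
  var file = version ? name.replace(/\.js$/, '_' + version + '.js') : name;
  return '<script type="text/javascript" src="' + file + '"><\/script>';
}
```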

Configure ETags

tag: server

Entity tags (ETags) are a mechanism that web servers and browsers use to determine whether the component in the browser’s cache matches the one on the origin server. (An “entity” is another word for a “component”: images, scripts, stylesheets, etc.) ETags were added to provide a mechanism for validating entities that is more flexible than the last-modified date. An ETag is a string that uniquely identifies a specific version of a component. The only format constraint is that the string be quoted. The origin server specifies the component’s ETag using the ETag response header.

      HTTP/1.1 200 OK
      Last-Modified: Tue, 12 Dec 2006 03:03:59 GMT
      ETag: "10c24bc-4ab-457e1c1f"
      Content-Length: 12195

Later, if the browser has to validate a component, it uses the If-None-Match header to pass the ETag back to the origin server. If the ETags match, a 304 status code is returned, reducing the response by 12195 bytes for this example.

      GET /i/yahoo.gif HTTP/1.1
      If-Modified-Since: Tue, 12 Dec 2006 03:03:59 GMT
      If-None-Match: "10c24bc-4ab-457e1c1f"
      HTTP/1.1 304 Not Modified

The problem with ETags is that they typically are constructed using attributes that make them unique to a specific server hosting a site. ETags won’t match when a browser gets the original component from one server and later tries to validate that component on a different server, a situation that is all too common on Web sites that use a cluster of servers to handle requests. By default, both Apache and IIS embed data in the ETag that dramatically reduces the odds of the validity test succeeding on web sites with multiple servers.

The ETag format for Apache 1.3 and 2.x is inode-size-timestamp. Although a given file may reside in the same directory across multiple servers, and have the same file size, permissions, timestamp, etc., its inode is different from one server to the next.

IIS 5.0 and 6.0 have a similar issue with ETags. The format for ETags on IIS is Filetimestamp:ChangeNumber. A ChangeNumber is a counter used to track configuration changes to IIS. It’s unlikely that the ChangeNumber is the same across all IIS servers behind a web site.

The end result is ETags generated by Apache and IIS for the exact same component won’t match from one server to another. If the ETags don’t match, the user doesn’t receive the small, fast 304 response that ETags were designed for; instead, they’ll get a normal 200 response along with all the data for the component. If you host your web site on just one server, this isn’t a problem. But if you have multiple servers hosting your web site, and you’re using Apache or IIS with the default ETag configuration, your users are getting slower pages, your servers have a higher load, you’re consuming greater bandwidth, and proxies aren’t caching your content efficiently. Even if your components have a far future Expires header, a conditional GET request is still made whenever the user hits Reload or Refresh.

If you’re not taking advantage of the flexible validation model that ETags provide, it’s better to just remove the ETag altogether. The Last-Modified header validates based on the component’s timestamp. And removing the ETag reduces the size of the HTTP headers in both the response and subsequent requests. This Microsoft Support article describes how to remove ETags. In Apache, this is done by simply adding the following line to your Apache configuration file:

      FileETag none

Make Ajax Cacheable

tag: content

One of the cited benefits of Ajax is that it provides instantaneous feedback to the user because it requests information asynchronously from the backend web server. However, using Ajax is no guarantee that the user won’t be twiddling his thumbs waiting for those asynchronous JavaScript and XML responses to return. In many applications, whether or not the user is kept waiting depends on how Ajax is used. For example, in a web-based email client the user will be kept waiting for the results of an Ajax request to find all the email messages that match their search criteria. It’s important to remember that “asynchronous” does not imply “instantaneous”.

To improve performance, it’s important to optimize these Ajax responses. The most important way to improve the performance of Ajax is to make the responses cacheable, as discussed in Add an Expires or a Cache-Control Header. Some of the other rules also apply to Ajax:

Let’s look at an example. A Web 2.0 email client might use Ajax to download the user’s address book for autocompletion. If the user hasn’t modified her address book since the last time she used the email web app, the previous address book response could be read from cache if that Ajax response was made cacheable with a future Expires or Cache-Control header. The browser must be informed when to use a previously cached address book response versus requesting a new one. This could be done by adding a timestamp to the address book Ajax URL indicating the last time the user modified her address book, for example, &t=1190241612. If the address book hasn’t been modified since the last download, the timestamp will be the same and the address book will be read from the browser’s cache eliminating an extra HTTP roundtrip. If the user has modified her address book, the timestamp ensures the new URL doesn’t match the cached response, and the browser will request the updated address book entries.
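The timestamp scheme above boils down to building the Ajax URL from the last-modified time, so an unchanged address book produces an unchanged URL. A minimal sketch (the endpoint path is hypothetical):

```javascript
// Embed the address book's last-modified timestamp in the Ajax URL:
// same timestamp -> same URL -> served from the browser cache;
// new timestamp  -> new URL  -> fresh request to the server.
function addressBookUrl(lastModified) {
  return '/ajax/addressbook?t=' + lastModified;
}
```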

Even though your Ajax responses are created dynamically, and might only be applicable to a single user, they can still be cached. Doing so will make your Web 2.0 apps faster.

Flush the Buffer Early

tag: server

When users request a page, it can take anywhere from 200 to 500ms for the backend server to stitch together the HTML page. During this time, the browser is idle as it waits for the data to arrive. In PHP you have the function flush(). It allows you to send your partially ready HTML response to the browser so that the browser can start fetching components while your backend is busy with the rest of the HTML page. The benefit is mainly seen on busy backends or light frontends.

A good place to consider flushing is right after the HEAD because the HTML for the head is usually easier to produce and it allows you to include any CSS and JavaScript files for the browser to start fetching in parallel while the backend is still processing.


      ... <!-- css, js -->
    </head>
    <?php flush(); ?>
    <body>
      ... <!-- content -->

Yahoo! search pioneered research and real user testing to prove the benefits of using this technique.


Use GET for AJAX Requests

tag: server

The Yahoo! Mail team found that when using XMLHttpRequest, POST is implemented in the browsers as a two-step process: sending the headers first, then sending data. So it’s best to use GET, which only takes one TCP packet to send (unless you have a lot of cookies). The maximum URL length in IE is 2K, so if you send more than 2K data you might not be able to use GET.

An interesting side effect is that POST without actually posting any data behaves like GET. Based on the HTTP specs, GET is meant for retrieving information, so it makes sense (semantically) to use GET when you’re only requesting data, as opposed to sending data to be stored server-side.
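The rule of thumb above can be sketched as a small decision function: use GET for read-only Ajax unless the full URL would exceed IE's roughly 2K limit.

```javascript
// Prefer GET for read-only Ajax requests, falling back to POST only when
// the serialized query string would push the URL past IE's ~2K limit.
var IE_MAX_URL = 2048;

function chooseMethod(baseUrl, queryString) {
  var fullLength = baseUrl.length + 1 + queryString.length; // +1 for '?'
  return fullLength <= IE_MAX_URL ? 'GET' : 'POST';
}
```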

Post-load Components

tag: content

You can take a closer look at your page and ask yourself: “What’s absolutely required in order to render the page initially?”. The rest of the content and components can wait.

JavaScript is an ideal candidate for splitting before and after the onload event. For example if you have JavaScript code and libraries that do drag and drop and animations, those can wait, because dragging elements on the page comes after the initial rendering. Other places to look for candidates for post-loading include hidden content (content that appears after a user action) and images below the fold.

Tools to help you out in your effort: YUI Image Loader allows you to delay images below the fold and the YUI Get utility is an easy way to include JS and CSS on the fly. For an example in the wild take a look at Yahoo! Home Page with Firebug’s Net Panel turned on.

It’s good when the performance goals are inline with other web development best practices. In this case, the idea of progressive enhancement tells us that JavaScript, when supported, can improve the user experience but you have to make sure the page works even without JavaScript. So after you’ve made sure the page works fine, you can enhance it with some post-loaded scripts that give you more bells and whistles such as drag and drop and animations.
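The post-loading idea can be sketched as a small queue: non-essential scripts are collected, and only fetched once the page's load event has fired. The loader function is injectable, so in a real page it would insert a script element, while here it is a plain callback (an assumption made for illustration):

```javascript
// Queue non-essential scripts and load them only after the page's load
// event. Scripts added after that point load immediately.
function makePostLoader(loadFn) {
  var queue = [];
  var fired = false;
  return {
    add: function (url) {
      if (fired) { loadFn(url); } else { queue.push(url); }
    },
    onPageLoad: function () {        // wire this to window.onload
      fired = true;
      for (var i = 0; i < queue.length; i++) { loadFn(queue[i]); }
      queue = [];
    }
  };
}
```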

Preload Components

tag: content

Preload may look like the opposite of post-load, but it actually has a different goal. By preloading components you can take advantage of the time the browser is idle and request components (like images, styles and scripts) you’ll need in the future. This way when the user visits the next page, you could have most of the components already in the cache and your page will load much faster for the user.

There are actually several types of preloading:

  • Unconditional preload – as soon as onload fires, you go ahead and fetch some extra components. For example, a sprite image can be requested onload even though it is not needed on the homepage, because it is needed on the consecutive search result page.
  • Conditional preload – based on a user action you make an educated guess where the user is headed next and preload accordingly. For example, extra components can be requested as soon as the user starts typing in an input box.
  • Anticipated preload – preload in advance before launching a redesign. It often happens after a redesign that you hear: “The new site is cool, but it’s slower than before”. Part of the problem could be that the users were visiting your old site with a full cache, but the new one is always an empty cache experience. You can mitigate this side effect by preloading some components before you even launch the redesign. Your old site can use the time the browser is idle and request images and scripts that will be used by the new site.
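The unconditional flavor can be sketched as a tiny preloader. The image factory is injectable so the logic is testable outside a browser; in a real page you would pass `function () { return new Image(); }`:

```javascript
// Warm the cache with components the next page will need: setting src on
// an Image issues the request, and the browser caches the response.
function preload(urls, createImage) {
  var images = [];
  for (var i = 0; i < urls.length; i++) {
    var img = createImage();
    img.src = urls[i];      // the request fills the browser cache
    images.push(img);       // keep references so they aren't collected early
  }
  return images;
}
```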

Reduce the Number of DOM Elements

tag: content

A complex page means more bytes to download and it also means slower DOM access in JavaScript. It makes a difference if you loop through 500 or 5000 DOM elements on the page when you want to add an event handler for example.

A high number of DOM elements can be a symptom that there’s something that should be improved with the markup of the page without necessarily removing content. Are you using nested tables for layout purposes? Are you throwing in more <div>s only to fix layout issues? Maybe there’s a better and more semantically correct way to do your markup.

A great help with layouts are the YUI CSS utilities: grids.css can help you with the overall layout, fonts.css and reset.css can help you strip away the browser’s defaults formatting. This is a chance to start fresh and think about your markup, for example use <div>s only when it makes sense semantically, and not because it renders a new line.

The number of DOM elements is easy to test, just type in Firebug’s console:

      document.getElementsByTagName('*').length

And how many DOM elements are too many? Check other similar pages that have good markup. For example the Yahoo! Home Page is a pretty busy page and still under 700 elements (HTML tags).

Split Components Across Domains

tag: content

Splitting components allows you to maximize parallel downloads. Make sure you’re using not more than 2-4 domains because of the DNS lookup penalty. For example, you can host your HTML and dynamic content on and split static components between and

For more information check “Maximizing Parallel Downloads in the Carpool Lane” by Tenni Theurer and Patty Chi.

Minimize the Number of iframes

tag: content

Iframes allow an HTML document to be inserted in the parent document. It’s important to understand how iframes work so they can be used effectively.

<iframe> pros:

  • Helps with slow third-party content like badges and ads
  • Security sandbox
  • Download scripts in parallel

<iframe> cons:

  • Costly even if blank
  • Blocks page onload
  • Non-semantic

No 404s

tag: content

HTTP requests are expensive so making an HTTP request and getting a useless response (i.e. 404 Not Found) is totally unnecessary and will slow down the user experience without any benefit.

Some sites have helpful 404s “Did you mean X?”, which is great for the user experience but also wastes server resources (like database, etc). Particularly bad is when the link to an external JavaScript is wrong and the result is a 404. First, this download will block parallel downloads. Next the browser may try to parse the 404 response body as if it were JavaScript code, trying to find something usable in it.

Reduce Cookie Size

tag: cookie

HTTP cookies are used for a variety of reasons such as authentication and personalization. Information about cookies is exchanged in the HTTP headers between web servers and browsers. It’s important to keep the size of cookies as low as possible to minimize the impact on the user’s response time.

For more information check “When the Cookie Crumbles” by Tenni Theurer and Patty Chi. The take-home of this research:

  • Eliminate unnecessary cookies
  • Keep cookie sizes as low as possible to minimize the impact on the user response time
  • Be mindful of setting cookies at the appropriate domain level so other sub-domains are not affected
  • Set an Expires date appropriately. An earlier Expires date or none removes the cookie sooner, improving the user response time

Use Cookie-free Domains for Components

tag: cookie

When the browser makes a request for a static image and sends cookies together with the request, the server doesn’t have any use for those cookies. So they only create network traffic for no good reason. You should make sure static components are requested with cookie-free requests. Create a subdomain and host all your static components there.

If your domain is, you can host your static components on However, if you’ve already set cookies on the top-level domain as opposed to, then all the requests to will include those cookies. In this case, you can buy a whole new domain, host your static components there, and keep this domain cookie-free. Yahoo! uses, YouTube uses, Amazon uses and so on.

Another benefit of hosting static components on a cookie-free domain is that some proxies might refuse to cache the components that are requested with cookies. On a related note, if you wonder if you should use or for your home page, consider the cookie impact. Omitting www leaves you no choice but to write cookies to *, so for performance reasons it’s best to use the www subdomain and write the cookies to that subdomain.

Minimize DOM Access

tag: javascript

Accessing DOM elements with JavaScript is slow so in order to have a more responsive page, you should:

  • Cache references to accessed elements
  • Update nodes “offline” and then add them to the tree
  • Avoid fixing layout with JavaScript

For more information check the YUI theatre’s “High Performance Ajax Applications” by Julien Lecomte.

Develop Smart Event Handlers

tag: javascript

Sometimes pages feel less responsive because of too many event handlers attached to different elements of the DOM tree which are then executed too often. That’s why using event delegation is a good approach. If you have 10 buttons inside a div, attach only one event handler to the div wrapper, instead of one handler for each button. Events bubble up so you’ll be able to catch the event and figure out which button it originated from.
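The delegation idea can be sketched as one dispatcher on the wrapper that routes by the id of the element the event bubbled up from. The event object shape mirrors the DOM's (`event.target`), and the wiring to a real element (`divWrapper.onclick = makeDelegatedHandler({...})`) is assumed:

```javascript
// One handler on the wrapper dispatches by the id of the bubbled-up target,
// instead of attaching one handler per button.
function makeDelegatedHandler(actions) {
  return function (event) {
    var id = event.target && event.target.id;
    if (actions.hasOwnProperty(id)) {
      return actions[id](event);
    }
    return null; // event came from an element we don't care about
  };
}
```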

You also don’t need to wait for the onload event in order to start doing something with the DOM tree. Often all you need is the element you want to access to be available in the tree. You don’t have to wait for all images to be downloaded. DOMContentLoaded is the event you might consider using instead of onload, but until it’s available in all browsers, you can use the YUI Event utility, which has an onAvailable method.

For more information check the YUI theatre’s “High Performance Ajax Applications” by Julien Lecomte.

Choose <link> over @import

tag: css

One of the previous best practices states that CSS should be at the top in order to allow for progressive rendering.

In IE @import behaves the same as using <link> at the bottom of the page, so it’s best not to use it.

Avoid Filters

tag: css

The IE-proprietary AlphaImageLoader filter aims to fix a problem with semi-transparent true color PNGs in IE versions < 7. The problem with this filter is that it blocks rendering and freezes the browser while the image is being downloaded. It also increases memory consumption and is applied per element, not per image, so the problem is multiplied.

The best approach is to avoid AlphaImageLoader completely and use gracefully degrading PNG8 instead, which works fine in IE. If you absolutely need AlphaImageLoader, use the underscore hack (_filter) so as not to penalize your IE7+ users.

Optimize Images

tag: images

After a designer is done with creating the images for your web page, there are still some things you can try before you FTP those images to your web server.

  • You can check the GIFs and see if they are using a palette size corresponding to the number of colors in the image. Using imagemagick it’s easy to check using
    identify -verbose image.gif
When you see an image using 4 colors but 256 color “slots” in the palette, there is room for improvement.
  • Try converting GIFs to PNGs and see if there is a saving. More often than not, there is. Developers often hesitate to use PNGs due to the limited support in browsers, but this is now a thing of the past. The only real problem is alpha-transparency in true color PNGs, but then again, GIFs are not true color and don’t support variable transparency either. So anything a GIF can do, a palette PNG (PNG8) can do too (except for animations). This simple imagemagick command results in totally safe-to-use PNGs:
    convert image.gif image.png
    “All we are saying is: Give PiNG a Chance!”
  • Run pngcrush (or any other PNG optimizer tool) on all your PNGs. Example:
    pngcrush image.png -rem alla -reduce -brute result.png
  • Run jpegtran on all your JPEGs. This tool does lossless JPEG operations such as rotation and can also be used to optimize and remove comments and other useless information (such as EXIF information) from your images.
    jpegtran -copy none -optimize -perfect src.jpg dest.jpg

Optimize CSS Sprites

tag: images

  • Arranging the images in the sprite horizontally as opposed to vertically usually results in a smaller file size.
  • Combining similar colors in a sprite helps you keep the color count low, ideally under 256 colors so to fit in a PNG8.
  • “Be mobile-friendly” and don’t leave big gaps between the images in a sprite. This doesn’t affect the file size as much, but it means the user agent needs less memory to decompress the image into a pixel map: a 100×100 image is 10 thousand pixels, whereas a 1000×1000 image is 1 million pixels.

Don’t Scale Images in HTML

tag: images

Don’t use a bigger image than you need just because you can set the width and height in HTML. If you need
<img width="100" height="100" src="mycat.jpg" alt="My Cat" />
then your image (mycat.jpg) should be 100x100px rather than a scaled down 500x500px image.

Make favicon.ico Small and Cacheable

tag: images

The favicon.ico is an image that stays in the root of your server. It’s a necessary evil because even if you don’t care about it the browser will still request it, so it’s better not to respond with a 404 Not Found. Also since it’s on the same server, cookies are sent every time it’s requested. This image also interferes with the download sequence, for example in IE when you request extra components in the onload, the favicon will be downloaded before these extra components.

So to mitigate the drawbacks of having a favicon.ico make sure:

  • It’s small, preferably under 1K.
  • Set Expires header with what you feel comfortable (since you cannot rename it if you decide to change it). You can probably safely set the Expires header a few months in the future. You can check the last modified date of your current favicon.ico to make an informed decision.

Imagemagick can help you create small favicons.

Keep Components under 25K

tag: mobile

This restriction is related to the fact that iPhone won’t cache components bigger than 25K. Note that this is the uncompressed size. This is where minification is important because gzip alone may not be sufficient.

For more information check “Performance Research, Part 5: iPhone Cacheability – Making it Stick” by Wayne Shea and Tenni Theurer.

Pack Components into a Multipart Document

tag: mobile

Packing components into a multipart document is like an email with attachments, it helps you fetch several components with one HTTP request (remember: HTTP requests are expensive). When you use this technique, first check if the user agent supports it (iPhone does not).

Avoid Empty Image src

tag: server

An image with an empty string src attribute occurs more often than one would expect. It appears in two forms:

  1. straight HTML

    <img src="">

  2. JavaScript

    var img = new Image();
    img.src = "";

Both forms cause the same effect: the browser makes another request to your server.

  • Internet Explorer makes a request to the directory in which the page is located.
  • Safari and Chrome make a request to the actual page itself.
  • Firefox 3 and earlier versions behave the same as Safari and Chrome, but version 3.5 addressed this issue[bug 444931] and no longer sends a request.
  • Opera does not do anything when an empty image src is encountered.

Why is this behavior bad?

  1. Cripple your servers by sending a large amount of unexpected traffic, especially for pages that get millions of page views per day.
  2. Waste server computing cycles generating a page that will never be viewed.
  3. Possibly corrupt user data. If you are tracking state in the request, either by cookies or in another way, you have the possibility of destroying data. Even though the image request does not return an image, all of the headers are read and accepted by the browser, including all cookies. While the rest of the response is thrown away, the damage may already be done.

The root cause of this behavior is the way that URI resolution is performed in browsers. This behavior is defined in RFC 3986 – Uniform Resource Identifiers. When an empty string is encountered as a URI, it is considered a relative URI and is resolved according to the algorithm defined in section 5.2. This specific example, an empty string, is listed in section 5.4. Firefox, Safari, and Chrome are all resolving an empty string correctly per the specification, while Internet Explorer is resolving it incorrectly, apparently in line with an earlier version of the specification, RFC 2396 – Uniform Resource Identifiers (this was obsoleted by RFC 3986). So technically, the browsers are doing what they are supposed to do to resolve relative URIs. The problem is that in this context, the empty string is clearly unintentional.

HTML5 adds to the description of the <img> tag’s src attribute to instruct browsers not to make an additional request, in section 4.8.2:

The src attribute must be present, and must contain a valid URL referencing a non-interactive, optionally animated, image resource that is neither paged nor scripted. If the base URI of the element is the same as the document’s address, then the src attribute’s value must not be the empty string.

Hopefully, browsers will not have this problem in the future. Unfortunately, there is no such clause for <script src=""> and <link href="">. Maybe there is still time to make that adjustment to ensure browsers don’t accidentally implement this behavior.

This rule was inspired by Yahoo!’s JavaScript guru Nicholas C. Zakas. For more information check out his article “Empty image src can destroy your site”.

Javascript interview questions

HCL :: Event bubbling, event loop, bind and this keyword.

What do if (undefinedvariable == null) and if (undefinedvariable === null) each return?

If a variable is declared without the var keyword, is it global or local?

EC Software :: Performance tuning when developing a web page, covering JavaScript, CSS and HTML.

When a web page (URL) is loaded, how do you ensure the performance and clarity of images is good on any device?

Angular and Bootstrap – need to learn.

How to center an element using CSS.

How would you argue that Ext JS is better compared to other frameworks?

Apache Solr Backup Replication in 3.4.0

Solr Backup Replication in 3.4.0

1. Open solrconfig.xml and enable the following line:
<requestHandler name="/replication" class="solr.ReplicationHandler" />

2. Solr backup replication command


3. Command to commit Solr indexes


Core Javascript Concepts

Javascript Closures::

Javascript closures are useful for creating callback functions and runtime objects.

A closure is an inner function that has access to the outer (enclosing) function’s variables—scope chain. The closure has three scope chains: it has access to its own scope (variables defined between its curly brackets), it has access to the outer function’s variables, and it has access to the global variables.

The inner function has access not only to the outer function’s variables, but also to the outer function’s parameters. Note that the inner function cannot access the outer function’s arguments object, however, even though it can access the outer function’s parameters directly.

You create a closure by adding a function inside another function.
A Basic Example of Closures in JavaScript:

function showName(firstName, lastName) {
    var nameIntro = "Your name is ";
    // this inner function has access to the outer function's variables, including the parameters
    function makeFullName() {
        return nameIntro + firstName + " " + lastName;
    }
    return makeFullName();
}

showName("Michael", "Jackson"); // Your name is Michael Jackson

Closures are used extensively in Node.js; they are workhorses in Node.js’ asynchronous, non-blocking architecture.
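A second, Node-flavored illustration: each call to the outer function captures its own private variable, which survives between invocations of the inner function, which is exactly what asynchronous callbacks rely on.

```javascript
// Each call to makeCounter creates a fresh closure over its own `count`,
// so the returned function carries private state between calls.
function makeCounter() {
  var count = 0; // private to this closure
  return function () {
    count += 1;
    return count;
  };
}

var tick = makeCounter();
tick(); // 1
tick(); // 2
var other = makeCounter(); // independent count
```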

Single Inheritance:

Javascript supports single inheritance only.

Rules for single inheritance with the prototype property:

  1. Methods attached directly to the constructor (without the prototype property) cannot be called from instances.
  2. We can inherit parent class members by assigning a parent class instance to the child class’s prototype.
  3. We can’t use the prototype property on an instance variable/object of a class.
  4. We can use the prototype property on the class (constructor function) name only.

<title>Single Inheritance</title>
<h1>Single Inheritance</h1>
<script src="js/jquery.js" type="text/javascript"></script>
<script type="text/javascript">

function Parent(){
}

Parent.prototype.sayHello = function(){
    document.write("Parent Instance Say Hello <br/>");
};

Parent.prototype.sayBye = function(){
    document.write("Child Instance Say bye <br/>");
};

Parent.AddWithoutPrototype = function(){ /* Instances cannot call this method because it is attached to the constructor, not the prototype */
    document.write("Method Without Prototype<br/>");
};

function Child(){
}

// Inherit Parent
Child.prototype = new Parent();

// Correct child constructor pointer to child class because it points to parent
Child.prototype.constructor = Child;

Child.prototype.sayHello = function(){
    document.write("Child Instance Say Hello <br/>");
};

Child.prototype.sayMorning = function(){
    document.write("Child Instance Say Morning <br/>");
};

var par1   = new Parent();
var child1 = new Child();
</script>


Java Interview Questions and answers

Metadata Extractor
Metadata extractor is used to get information about a file. A Tika parser is used to extract the metadata in a file. We also use the Alchemy API, a library (jar files) that works like a dictionary: if we call the Alchemy API, it returns the important words in a file.

The geotagger is used to set the geotag information on an article. It needs an IP address or a city name; based on this it sets or gets the geotag information.

Maven is a high-level, intelligent project management, build and deployment tool provided by the Apache Software Foundation.
Maven deals with application development lifecycle management. It was originally developed to manage and minimize the complexities of building the Jakarta Turbine project.

Spring Aspect Oriented Programming
The Spring AOP (Aspect-Oriented Programming) framework is used to modularize cross-cutting concerns into aspects. Put simply, it is just an interceptor that intercepts some processes; for example, when a method is executed, Spring AOP can hijack the executing method and add extra functionality before or after the method execution.

In Spring AOP, four types of advice are supported:

Before advice – Run before the method execution
After returning advice – Run after the method returns a result
After throwing advice – Run after the method throws an exception
Around advice – Run around the method execution, combining all three advice types above

AOP adds extra functionality to an existing method: the method itself is the concern, and the added functionality is the cross-cutting concern.
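The interceptor idea is not specific to Java. As a rough sketch, before/after advice can be imitated in plain JavaScript by wrapping a function (all names here are hypothetical illustrations, not Spring APIs):

```javascript
// wrapWithAdvice returns a proxy around 'target' that runs the before
// advice, then the target, then the after advice. The target method is
// the concern; the logging around it is the cross-cutting concern.
function wrapWithAdvice(target, before, after) {
  return function () {
    before();
    var result = target.apply(this, arguments);
    after(result);
    return result;
  };
}

var log = [];
function transfer(amount) { return "transferred " + amount; }

var advisedTransfer = wrapWithAdvice(
  transfer,
  function () { log.push("before"); },
  function (result) { log.push("after: " + result); }
);

advisedTransfer(100); // log is now ["before", "after: transferred 100"]
```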

A class is a prototype (rule) or blueprint which defines the state and behaviour for the objects of the class.

Singleton Pattern
The Singleton pattern disallows creating more than one object of a class. This is useful when exactly one object is needed to coordinate actions across the system. At the same time it provides a global point of access to that instance.
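A minimal JavaScript sketch of the pattern (names are illustrative): the instance is created lazily on first use, and every later call returns the same object.

```javascript
// ConfigStore exposes only getInstance(); the single instance is hidden
// in the closure and created lazily on the first call.
var ConfigStore = (function () {
  var instance = null;

  function create() {
    return { settings: {} };
  }

  return {
    getInstance: function () {
      if (instance === null) {
        instance = create();
      }
      return instance;
    }
  };
})();

var a = ConfigStore.getInstance();
var b = ConfigStore.getInstance();
// a === b: both variables point to the single shared instance
```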

Common UI Interview questions

Responsive web design
Responsive web design (often abbreviated to RWD) is an approach to web design in which a site is crafted to provide an optimal viewing experience—easy reading and navigation with a minimum of resizing, panning, and scrolling—across a wide range of devices (from desktop computer monitors to mobile phones).
RWD uses CSS3 media queries, an extension of the @media rule, to adapt the layout to the viewing environment, along with fluid proportion-based grids and flexible images.
Media queries allow the page to use different CSS style rules based on characteristics of the device the site is being displayed on, most commonly the width of the browser.

window onload and document ready
The ready event occurs after the HTML document has been loaded, while the onload event occurs later, when all content (e.g. images) also has been loaded.

Multi-tenancy is an architecture in which a single instance of a software application serves multiple customers. Each customer is called a tenant.
With a multi-tenancy architecture, the provider only has to make updates once. With a single-tenancy architecture, the provider has to touch multiple instances of the software in order to make updates.

jQuery Fading Methods
With jQuery you can fade an element in and out of visibility.

jQuery has the following fade methods:

fadeIn() The jQuery fadeIn() method is used to fade in a hidden element.
fadeOut() The jQuery fadeOut() method is used to fade out a visible element.
fadeToggle() The jQuery fadeToggle() method toggles between the fadeIn() and fadeOut() methods.
fadeTo() The jQuery fadeTo() method allows fading to a given opacity (value between 0 and 1).

The optional speed parameter specifies the duration of the effect. It can take the following values: "slow", "fast", or milliseconds.
The optional callback parameter is the name of a function to be executed after the fading completes.

jQuery Sliding Methods

jQuery slideToggle() Method
The jQuery slideToggle() method toggles between the slideDown() and slideUp() methods.

If the elements have been slid down, slideToggle() will slide them up.

If the elements have been slid up, slideToggle() will slide them down.

jQuery stop() Method
The jQuery stop() method is used to stop an animation or effect before it is finished.

jQuery Callback Functions
A callback function is executed after the current effect is finished.
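The idea can be sketched without jQuery. In this minimal illustration (all names hypothetical), the callback is invoked only once the simulated effect has finished:

```javascript
// fadeOutThen imitates jQuery's callback pattern: do the work first,
// then call the callback exactly once, after the work is complete.
function fadeOutThen(element, callback) {
  element.visible = false;   // the "effect"
  callback(element);         // runs only after the effect has finished
}

var order = [];
var box = { visible: true };
fadeOutThen(box, function (el) {
  // By the time this runs, the effect is done: el.visible is false.
  order.push("callback saw visible=" + el.visible);
});
```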

DOM = Document Object Model
The DOM defines a standard for accessing HTML and XML documents:

“The W3C Document Object Model (DOM) is a platform and language-neutral interface that allows programs and scripts to dynamically access and update the content, structure, and style of a document.”

Get Content – text(), html(), and val()

Three simple, but useful, jQuery methods for DOM manipulation are:

text() – Sets or returns the text content of selected elements
html() – Sets or returns the content of selected elements (including HTML markup)
val() – Sets or returns the value of form fields

The jQuery empty() method removes the child elements of the selected element(s).
The jQuery remove() method removes the selected element(s) and its child elements.
append() – Inserts content at the end of the selected elements
prepend() – Inserts content at the beginning of the selected elements
after() – Inserts content after the selected elements
before() – Inserts content before the selected elements
addClass() – Adds one or more classes to the selected elements
removeClass() – Removes one or more classes from the selected elements
toggleClass() – Toggles between adding/removing classes from the selected elements
css() – Sets or returns the style attribute

AJAX = Asynchronous JavaScript and XML.
AJAX is the art of exchanging data with a server, and updating parts of a web page – without reloading the whole page.
AJAX is about loading data in the background and displaying it on the webpage, without reloading the whole page.
jQuery provides several methods for AJAX functionality.
With the jQuery AJAX methods, you can request text, HTML, XML, or JSON from a remote server using both HTTP Get and HTTP Post – And you can load the external data directly into the selected HTML elements of your web page!
HTTP Request: GET vs. POST
Two commonly used methods for a request-response between a client and server are: GET and POST.
GET – Requests data from a specified resource
POST – Submits data to be processed to a specified resource
GET is basically used for just getting (retrieving) some data from the server. Note: The GET method may return cached data.
The load() method loads data from a server and puts the returned data into the selected element.
POST can also be used to get some data from the server. However, the POST method NEVER caches data, and is often used to send data along with the request.

$.ajax()     Performs an AJAX request
ajaxComplete()     Specifies a function to run when the AJAX request completes
ajaxError()     Specifies a function to run when the AJAX request completes with an error
ajaxSend()     Specifies a function to run before the AJAX request is sent
$.ajaxSetup()     Sets the default values for future AJAX requests
ajaxStart()     Specifies a function to run when the first AJAX request begins
ajaxStop()     Specifies a function to run when all AJAX requests have completed
ajaxSuccess()     Specifies a function to run when an AJAX request completes successfully
$.get()     Loads data from a server using an AJAX HTTP GET request
$.getJSON()     Loads JSON-encoded data from a server using a HTTP GET request
$.getScript()     Loads (and executes) a JavaScript file from a server using an AJAX HTTP GET request
load()     Loads data from a server and puts the returned data into the selected element
$.param()     Creates a serialized representation of an array or object (can be used as URL query string for AJAX requests)
$.post()     Loads data from a server using an AJAX HTTP POST request
serialize()     Encodes a set of form elements as a string for submission
serializeArray()     Encodes a set of form elements as an array of names and values
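The idea behind $.param, turning an object into a URL query string, can be sketched in plain JavaScript. This is a simplified version for flat objects only, not jQuery's actual implementation:

```javascript
// toQueryString mimics the core of $.param for flat objects:
// each key/value pair is URL-encoded and the pairs are joined with '&'.
function toQueryString(params) {
  var parts = [];
  for (var key in params) {
    if (params.hasOwnProperty(key)) {
      parts.push(encodeURIComponent(key) + "=" + encodeURIComponent(params[key]));
    }
  }
  return parts.join("&");
}

toQueryString({ firstName: "suresh", age: 25 });
// "firstName=suresh&age=25"
```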

// Example: $.post sends data along with the request and runs a callback with the response
$.post("/some/url", { name: "Donald Duck" }, function(data, status){
  alert("Data: " + data + "\nStatus: " + status);
});

var jq = $.noConflict();
jq("p").text("jQuery is still working!");

Some of the new features in HTML5:

New Elements
New Attributes
Full CSS3 Support
Video and Audio
2D/3D Graphics
Local Storage
Local SQL Database
Web Applications

HTML5 is a cooperation between the World Wide Web Consortium (W3C) and the Web Hypertext Application Technology Working Group (WHATWG).

WHATWG was working with web forms and applications, and W3C was working with XHTML 2.0. In 2006, they decided to cooperate and create a new version of HTML.

Some rules for HTML5 were established:

New features should be based on HTML, CSS, DOM, and JavaScript
Reduce the need for external plugins (like Flash)
Better error handling
More markup to replace scripting
HTML5 should be device independent
The development process should be visible to the public

What is HTML5 Web Storage?
With HTML5, web pages can store data locally within the user’s browser.

Earlier, this was done with cookies. However, Web Storage is more secure and faster. The data is not included with every server request, but used ONLY when asked for. It is also possible to store large amounts of data, without affecting the website’s performance.

The data is stored in key/value pairs, and a web page can only access data stored by itself.

localStorage – stores data with no expiration date
sessionStorage – stores data for one session
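The API surface is small. A minimal in-memory stand-in with the same setItem/getItem/removeItem shape (a sketch only, not the browser implementation) makes the key/value model concrete; note that real Web Storage also stores every value as a string:

```javascript
// A stand-in with the same shape as window.localStorage.
// Like real Web Storage, every stored value is coerced to a string.
function createStorage() {
  var data = {};
  return {
    setItem: function (key, value) { data[key] = String(value); },
    getItem: function (key) {
      return data.hasOwnProperty(key) ? data[key] : null;
    },
    removeItem: function (key) { delete data[key]; }
  };
}

var store = createStorage();
store.setItem("lastname", "Smith");
store.getItem("lastname");   // "Smith"
store.removeItem("lastname");
store.getItem("lastname");   // null, like localStorage for a missing key
```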

What is Application Cache?
HTML5 introduces application cache, which means that a web application is cached, and accessible without an internet connection.

Application cache gives an application three advantages:

Offline browsing – users can use the application when they’re offline
Speed – cached resources load faster
Reduced server load – the browser will only download updated/changed resources from the server

What is a Web Worker?
When executing scripts in an HTML page, the page becomes unresponsive until the script is finished.

A web worker is a JavaScript that runs in the background, independently of other scripts, without affecting the performance of the page. You can continue to do whatever you want: clicking, selecting things, etc., while the web worker runs in the background.

Server-Sent Events – One Way Messaging
A server-sent event is when a web page automatically gets updates from a server.

This was also possible before, but the web page would have to ask if any updates were available. With server-sent events, the updates come automatically. The EventSource object is used to receive server-sent event notifications.

HTML Best Practices Examples:
Declare the correct DocType;
Use meaningful Title tags and Meta tags;
Always close your tags;
All images require the "alt" attribute;
Proper use of headings (use H1 to H6 tags);
View source and validate all the code;
Use the right HTML elements at the right place;
Keep your tag names lowercase;
Be consistent in how you work;
Avoid inline styles and inline JavaScript;
Place all external CSS files within the head tag;
Place JavaScript files at the bottom;
Choose a great code editor;
Once the website is complete, compress!

Cross-Origin Resource Sharing (CORS) is a W3C spec that allows cross-domain communication from the browser. By building on top of the XmlHttpRequest object, CORS allows developers to work with the same idioms as same-domain requests.

E[foo] an E element with a "foo" attribute
E[foo=bar] an E element whose "foo" attribute value is exactly equal to "bar"
E[foo^=bar] an E element whose "foo" attribute value begins exactly with the string "bar"
E[foo$=bar] an E element whose "foo" attribute value ends exactly with the string "bar"
E[foo*=bar] an E element whose "foo" attribute value contains the substring "bar"
$("*")     Selects all elements
$(this)     Selects the current HTML element
$("p.intro")     Selects all <p> elements with class="intro"
$("p:first")     Selects the first <p> element
$("ul li:first")     Selects the first <li> element of the first <ul>
$("ul li:first-child")     Selects the first <li> element of every <ul>
$("[href]")     Selects all elements with an href attribute
$("a[target='_blank']")     Selects all <a> elements with a target attribute value equal to "_blank"
$("a[target!='_blank']")     Selects all <a> elements with a target attribute value NOT equal to "_blank"
$(":button")     Selects all <button> elements and <input> elements of type="button"
$("tr:even")     Selects all even <tr> elements
$("tr:odd")     Selects all odd <tr> elements
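The ^=, $= and *= attribute operators correspond directly to prefix, suffix and substring tests on the attribute value, which a plain JavaScript sketch (a hypothetical helper, not jQuery code) makes explicit:

```javascript
// matchesAttr re-implements the substring attribute operators:
// '^=' prefix, '$=' suffix, '*=' substring, '=' exact match.
function matchesAttr(value, op, pattern) {
  if (op === "=")  { return value === pattern; }
  if (op === "^=") { return value.startsWith(pattern); }
  if (op === "$=") { return value.endsWith(pattern); }
  if (op === "*=") { return value.includes(pattern); }
  return false;
}

matchesAttr("barbell", "^=", "bar");     // true: begins with "bar"
matchesAttr("crowbar", "$=", "bar");     // true: ends with "bar"
matchesAttr("rebarbative", "*=", "bar"); // true: contains "bar"
```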

Parse an XML Document

The following code fragment creates the browser-appropriate XMLHttpRequest object, which is then used to load and parse an XML document into an XML DOM object:
var xmlhttp;
if (window.XMLHttpRequest)
{ // code for IE7+, Firefox, Chrome, Opera, Safari
  xmlhttp = new XMLHttpRequest();
}
else
{ // code for IE6, IE5
  xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
}


Linux Commands and important Links for Reference

Out of Memory Error commands ::
export MAVEN_OPTS="-Xms512m -Xmx1024m -XX:PermSize=256m -XX:MaxPermSize=512m"
sh ../bin/ --JvmMx 1024 -XX:PermSize=256m -XX:MaxPermSize=512m
mvn tomcat:run -DXms512m -DXmx512m -DXX:MaxPermSize=512m
mvn clean install -Dmaven.test.skip=true; mvn dependency:copy-dependencies

For Tomcat7
export CATALINA_OPTS="-Xms2536m -Xmx2536m -XX:NewSize=2048m -XX:MaxNewSize=2048m -XX:PermSize=2048m -XX:MaxPermSize=2048m"

Maven project creation command

mvn archetype:generate -DgroupId=com.sify.springhbm -DartifactId=SpringHibernate -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
mvn archetype:generate -DgroupId=com.sify.resteasy -DartifactId=RESTfulExample -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false

-DartifactId –> Root folder
-DgroupId    –> Creates folders and subfolders (based on the dot-separated string) under the src/main/java directory

SVN Commands ::
svn st
svn resolved filename
rm -rf filename
svn up
Previous revision :: <Service name> svn up -r r4260
Svn log           :: <Service name> svn log --limit 10
                  :: <Service name> svn log --limit 10 --verbose
svn log <filepath>
svn cat -r <revision no> <filepath>
mvn clean install -Dmaven.test.skip=true -rf : servicename

Query to delete Site & Entitytype :

curl -H "Content-Type: text/xml" -d "<delete><query>(</query></delete>"

Query to delete Site :

curl http://solrIP:8983/solr/update/?commit=true -H "Content-Type: text/xml" -d "<delete><query>(</query></delete>"

Curl Commands

curl -X DELETE
curl -X POST -d @appflow.xml -H 'Content-type: application/xml'
curl -X POST -d @viewer.xml -H 'Content-type: application/xml'\&api_key=c15054c37fe64ce6e29d7d96eee2a3b72c3ac020\&authusername=admin\&aclrole=PortalAdmin

Redis Start
cd installedsoftware/redis-stable/src

./redis-server /opt/cmf/redis/redis_6379.conf &

Java installation repository
$ sudo add-apt-repository "deb hardy multiverse"
$ sudo apt-get update
$ sudo apt-get install sun-java6-jre sun-java6-jdk

Ab command for load performance test

ab -n 1000 -c 100 -p view.xml -T 'application/xml'\&authusername=admin\&api_key=6053529b01ccddc15909e83afb8d2cdda7079364\&aclrole=PortalAdmin

Apache2 Installation/Remove commands
sudo apt-get -o DPkg::Options::="--force-confmiss" --reinstall install apache2.2-common
sudo apt-get purge apache2.2-common
sudo apt-get install apache2

Starting redis
redis-server redis.conf &

Redis commands
Keyword search  = KEYS samac*
List all values = KEYS *
Delete keys     = DEL key1 key2 key3

Ubuntu Classpath setting
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_17;
export PATH=$JAVA_HOME/bin:$PATH;
export CLASSPATH=.;

Java Installation and classpath setting in RHEL6
1.Remove Default openjdk
rpm -qa | grep jre
rpm -qa | grep jdk
rpm -qa | grep openjdk
yum erase jre jdk openjdk

2. Download jdk-6u34-linux-x64.bin from
3. chmod a+x jdk-6u34-linux-x64.bin
4. ./jdk-6u34-linux-x64.bin
5. mv jdk1.6.0_34 /usr/java

6.which java

7.How to override this default built-in JDK? You can do it by alternatives:
/usr/sbin/alternatives --install /usr/bin/java java /usr/java/jdk1.6.0_34/bin/java 100
/usr/sbin/alternatives --install /usr/bin/jar jar /usr/java/jdk1.6.0_34/bin/jar 100
/usr/sbin/alternatives --install /usr/bin/javac javac /usr/java/jdk1.6.0_34/bin/javac 100
/usr/sbin/alternatives --config java

8.Just make sure everything is correct
java -version
java version "1.6.0_34"
Java(TM) SE Runtime Environment (build 1.6.0_34-b04)
Java HotSpot(TM) 64-Bit Server VM (build 20.9-b04, mixed mode)

9.Set Environment Variable for all users in RHEL 6
vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.6.0_34;
export PATH=$JAVA_HOME/bin:$PATH;
source /etc/profile
echo $PATH;

1. /etc/hosts    –> Specifying domain name instead of ip address eg:
2. /etc/rc.local –> Automatic service starts when system starts  eg: sudo sh
3. sudo vim  ~/.bashrc –> Specifying Classpath accessible to all users


Creating and selecting a database in sqlite –> sqlite3 databasename.db

How to Backup and Restore OpenLDAP Database
Instead of backing up / restoring the actual LDAP database (hdb, etc.), we export/import the LDAP directory tree in ldif format, which ultimately lets us do the same without any particular database implementation specifics.
The backup will be stored in backup.ldif text file.


slapcat -v -l backup.ldif

The restore replaces the current database with the one we have in the ldif backup.


# Stop slapd daemon
/etc/init.d/slapd stop

# Remove current database
rm -rf /var/lib/ldap/*

# Import directory tree from backup
slapadd -l backup.ldif

# Fix permissions
chown -R openldap:openldap /var/lib/ldap/*

# Start slapd daemon
/etc/init.d/slapd start

Sqlite Query
update dms set configvalue = '; where configkey='url' and domain='';
update generate set configvalue = "true" where domain ='' and configkey = "debug";
insert into generate values("","configuration", "ismaster", "Action,DomainEntity,Role,Stage,Status,Adlet,AdletType,BodyType,Brand,Category,Channel,Format,Language,ProductType,ProductSubType,ShowCaseArea,Source,Viewer,Expert,ExpertType");

PHP and Apache installation in Redhat 6

Redhat login details

Important URL

The Samsung Galaxy 3, Android, USB and Linux
Disconnect your phone from the computer.
Settings -> About Phone -> USB Settings -> Ask on connection
Settings -> Applications -> Development: Check both USB debugging and Stay awake.
Dial *#7284# to open PhoneUtils.
Set both the UART and USB modes to PDA instead of Modem.
Connect the phone to the computer.
You should now receive some sort of USB notification on your phone; you know what to do from here.

Block a particular IP
sudo vi /etc/ssh/sshd_config

Stop/Start/Restart the OpenSSH using the following commands
sudo stop ssh
sudo start ssh
sudo restart ssh
sudo status ssh

Responsive web design Tutorial


Multiple Face detection

Apache Openoffice installation link

Javascript Date Time Format Specifiers



URL Rewrite Patterns
.              a single character
\s             a whitespace character (space, tab, newline)
\S             non-whitespace character
\d             a digit (0-9)
\D             a non-digit
\w             a word character (a-z, A-Z, 0-9, _)
\W             a non-word character
[aeiou]        matches a single character in the given set
[^aeiou]       matches a single character outside the given set
(foo|bar|baz)  matches any of the alternatives specified
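These character classes can be exercised directly with JavaScript regular expressions, for example:

```javascript
// Each test below uses one of the classes from the table above.
/\d+/.test("rewrite-42");        // true: the string contains digits
/^\w+$/.test("URL_Rewrite2");    // true: word characters only
/[aeiou]/.test("rhythm");        // false: no character from the vowel set
/(foo|bar|baz)/.test("crowbar"); // true: matches the 'bar' alternative
/\s/.test("no-spaces");          // false: no whitespace character
```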


Count no file in linux directory
find . -name "*.xml" | wc -l
grep -ir "<EntityType>Recipe</EntityType>" * | wc -l

Grid Summary Count Column