Monday, January 31, 2011

Improve Website Performance with better CSS code

This is something you should think about: everybody wants fast page loads and good website performance. Most of the time, this is discussed as a client-side JavaScript issue, but sometimes we can optimize browser rendering with better CSS code.

I'm not referring to things like minifying CSS, compression, or techniques like CSS sprites. Those definitely improve performance and are a good place to start thinking about CSS optimization, but here I want to discuss straight CSS coding and explain how you can write better CSS code by using CSS selectors efficiently.


1. Selectors ordered from most to least efficient: ID, class, tag, and universal. Thus, you should generally avoid the universal selector as the key (rightmost) part of a rule (body > * {…}) and, overall, make your rules as specific as possible.

2. Descendant selectors are really inefficient (html ul li a {…}), especially when the key (rightmost) selector is a tag or universal selector. This is because browsers read CSS selectors from right to left. So, when the browser evaluates even a simple descendant selector (#main li {…}), it does not look for the ID first and the tag children second. Instead, the browser finds all the li elements in the DOM, and then traverses up the DOM tree to find an ancestor matching the #main ID selector.

3. Avoid using overly qualified selectors. I see this all the time (ul#nav {…}), and it's just wasteful: ID selectors are unique by definition (ul#nav {…} = #nav {…}), so the ul tag name is not needed and is just extra information for the browser to evaluate. If you are worried about readability, then just use a comment (/*ul*/ #nav {…}).

4. Rely on inheritance. Learn which CSS properties inherit and let them: you will write less code and give the browser less work.

5. Avoid using redundant ancestors in descendant selectors. The descendant selector shown in point 2 (html ul li a {…}) has a redundant html selector, since all elements are descendants of the html element.

6. Avoid using :hover on non-anchor elements, as you might hit performance issues in IE7 and IE8, and if you are coding for all browsers, you already know it does not work in IE6.

7. You should probably avoid CSS3 selectors (:nth-child) completely, because they are really inefficient. Plus, they do not work in older browsers. Of course, if you don't care about those browsers, and the alternative to CSS3 is some JavaScript code, it might still be better to use CSS3.

These are some things you should be thinking about when writing XHTML/CSS code. They are not rules, just advice on how to code CSS for better performance. Following these guidelines completely is actually pretty impractical, as you would end up with a fast page styled entirely with unique IDs, which is non-semantic and hard to maintain. But it's still good information to know, and sometimes it can help you write better CSS code.




Wednesday, January 26, 2011

Decoding Google Maps Street View Image

Google Maps is awesome, and when Google Maps Street View came out I was pretty amazed. But I don't want to see the street view in Flash, I just want to see the street view image. I did some research, and it looks like I am not alone: someone already figured out how to extract image tiles from the Google Street View service. So, with a little repetition, let's expand on this below.

Let's start with the geo coordinates of some random location, or maybe let's just use the Boston location shown in the Street View Simple Example from Google Maps. It gives us an address (2 Yawkey Way, Boston, Massachusetts) but no geo coordinates. Well, not too long ago Google added geocoding to its web services, so we can easily figure out the latitude and longitude coordinates from an address:

http://maps.googleapis.com/maps/api/geocode/xml?address=2+Yawkey+Way,+Boston,+MA&sensor=true

and it gives us:

[sourcecode language="xml"]
<location>
<lat>42.3467972</lat>
<lng>-71.0988861</lng>
</location>
[/sourcecode]
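As a rough sketch, you can fetch that response and pull out the coordinates with a few lines of Python (the `parse_location` helper is my own name for illustration; note the geocoding service may require an API key these days):

```python
import urllib.request
import xml.etree.ElementTree as ET

GEOCODE_URL = ("http://maps.googleapis.com/maps/api/geocode/xml"
               "?address=2+Yawkey+Way,+Boston,+MA&sensor=true")

def parse_location(xml_text):
    """Extract (lat, lng) floats from a Google geocoding XML response."""
    root = ET.fromstring(xml_text)
    loc = root.find(".//location")  # first <location> node in the response
    return float(loc.find("lat").text), float(loc.find("lng").text)

if __name__ == "__main__":
    with urllib.request.urlopen(GEOCODE_URL) as resp:
        lat, lng = parse_location(resp.read())
    print(lat, lng)
```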


Next, we need something called a pano ID to generate our tiled image, and to get it we pass the coordinates above to this call:

http://cbk0.google.com/cbk?output=xml&ll=42.3467972,-71.0988861

which gives us:

[sourcecode language="xml"]
<data_properties image_width="13312" image_height="6656" tile_width="512" tile_height="512" pano_id="yerN9BDKmxDjHiavUjrDNQ" num_zoom_levels="3" lat="42.346814" lng="-71.098936" original_lat="42.346814" original_lng="-71.098932">
[/sourcecode]

So, now that we have the panorama ID (pano_id="yerN9BDKmxDjHiavUjrDNQ") and the available zoom levels (num_zoom_levels="3"), we can create the appropriate grid of tiles that composes a 360° street view image of our address. Since the zoom level is 3, our grid will consist of 6 X positions (0-5) and 3 Y positions (0-2), which together build our big image. The URL format looks like this:

http://cbk0.google.com/cbk?output=tile&panoid=[PANO ID]&zoom=[ZOOM LEVEL]&x=[X position]&y=[Y position]
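Enumerating that grid is easy to script. A minimal sketch in Python (the `tile_urls` helper is hypothetical; the column and row counts are the ones given above for zoom 3 and vary with zoom level and panorama):

```python
TILE_URL = ("http://cbk0.google.com/cbk?output=tile"
            "&panoid={pano}&zoom={zoom}&x={x}&y={y}")

def tile_urls(pano_id, zoom, cols, rows):
    """Build the URL for every tile in a cols x rows grid, row by row."""
    return [TILE_URL.format(pano=pano_id, zoom=zoom, x=x, y=y)
            for y in range(rows)
            for x in range(cols)]

urls = tile_urls("yerN9BDKmxDjHiavUjrDNQ", zoom=3, cols=6, rows=3)
print(len(urls))   # 18 tiles at zoom 3
print(urls[0])
```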

If the zoom level were 1, x and y would both be 0, and our image would look like this:

http://cbk0.google.com/cbk?output=tile&panoid=yerN9BDKmxDjHiavUjrDNQ&zoom=1&x=0&y=0


But with zoom 3 we get way more detail: 18 images, each 512x512 pixels. Sometimes zoom 5 is available, which gives you 338 images, all 512px wide by 512px high. That is a lot of detail, but it is also very resource-intensive, which is probably why Google doesn't publicize this. It's not some new discovery, but if you were constantly requesting all these images, I think it would be a problem.
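For completeness, the downloaded tiles can be pasted into one panorama. A sketch using the Pillow imaging library (an assumption on my part, not something the original write-up uses; the `stitch` helper and output filename are hypothetical):

```python
from io import BytesIO
from urllib.request import urlopen
from PIL import Image  # Pillow: pip install pillow

TILE = 512  # tile_width / tile_height from the data_properties response

def stitch(tiles, cols, rows):
    """Paste a row-major list of cols x rows tile images onto one canvas."""
    canvas = Image.new("RGB", (cols * TILE, rows * TILE))
    for i, tile in enumerate(tiles):
        x, y = i % cols, i // cols
        canvas.paste(tile, (x * TILE, y * TILE))
    return canvas

if __name__ == "__main__":
    pano, zoom, cols, rows = "yerN9BDKmxDjHiavUjrDNQ", 3, 6, 3
    tiles = []
    for y in range(rows):
        for x in range(cols):
            url = ("http://cbk0.google.com/cbk?output=tile"
                   f"&panoid={pano}&zoom={zoom}&x={x}&y={y}")
            tiles.append(Image.open(BytesIO(urlopen(url).read())))
    stitch(tiles, cols, rows).save("panorama.jpg")
```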