*** MOVED ***

NOTE: I have merged the contents of this blog with my web-site. I will not be updating this blog any more.
Showing posts with label web site. Show all posts

2007-02-06

Google Webmaster Central

A post on the Google blog pointed me to the Google Webmaster Central service. To access it, all you need is a Google account (you already have one if you use Gmail, Blogger, Orkut, etc.). You can easily add your site to the service and verify your control over it either by uploading a page with a unique name provided by Google or by adding a META tag with unique content provided by Google to the default page of your site.

Among other things, this service lets you find out who links to your site. The difference between this service and the "link:" operator in Google searches is that this service actually works. The service also lets you know which search queries lead people to your site and how likely they are to hit your site for a given search query. If you have ever wondered how people discover your site, this is a fascinating way of knowing a large part of the answer to that question.

For example, currently these are the top 10 search queries on Google that are likely to lead people to my web site:
  1. gcj
  2. tangram history
  3. ranjit mathew
  4. paradoxical puzzles
  5. gcj windows
  6. hostingzero
  7. matthew symonds economist
  8. how to beat voldemort on harry potter goblet of fire gameboy advance
  9. "* dataone it"
  10. ananth chandrasekharan
I know that I have mentioned each of these terms somewhere on my web site, but I feel a bit sorry for the folks who arrive at my web site following the links from their search results - except for #3 and perhaps #5, they are going to be quite disappointed by the lack of any useful information about the things for which they were searching.

Most of the links to my web site come from the signature that I attach to messages I send to various mailing lists, which then gets archived all over the place. The second most common source is that my blog and the blogs of some of my friends link to my web site in their "Links" sections, which are then replicated in the individual page for each post. The third most common source is my profiles on sundry web sites linking to my home page. Very few genuine "third parties" link to my web site.

Quite sobering.

Of course, some of this information is also provided by the referrer logs and the analysis tools provided by Hosting Zero.

2006-04-26

Website Maintenance

I am now using GNU Make and GNU m4 to maintain my website. The main advantages of these tools over others were that I was already familiar with them and they were readily available on the platforms I work on. Some of the things that are now easily possible with the new setup:

  • Having a common header and footer for all the pages. They need to be edited just once and all affected pages are automatically regenerated. I can now add/delete sections of my website at will and the common site-navigation menu in all pages is updated automatically. This has already proved quite useful as I deleted the "Links" section of the website.

  • Allowing a page to specify its section ("Articles", "Books", etc.), the location of the root folder with respect to its own location and the title for the page. This lets the common header and footer correctly specify the location to images, scripts, stylesheets, etc., generate the correct page title and highlight the appropriate section in the common site-navigation menu.

  • Automatically updating the "Last Updated" date in a page footer based on when the page was actually updated, instead of having to manually remember to change that text every time I edit the page.

  • Automatically generating a "news entry" such that its title and body are linked together appropriately for my particular expandable/collapsible sections implementation.


I use the --prefix-builtins option of m4 (just to be a bit safer) and had to use different quote characters in some places, since m4 was getting confused by embedded apostrophes in JavaScript method calls and by commas in ordinary text.

(Originally posted on Advogato.)

2006-04-20

HTML/CSS/JavaScript: Duh!

dorward: Thanks again for your comments. I don't know why it didn't occur to me to use GCC itself as a pre-processor. Some of the simple things I tried out worked well with GCC. I haven't checked out The Dolt yet.

As for condition #2 ("JavaScript disabled, Stylesheets enabled") mentioned in my previous post on this topic, I have found a better solution to the distracting "peek-a-boo" effect inherent in my previous solution. In the HEAD of the page, I have now put:

<script type="text/javascript"><!--
hideHiddenDivs( );
// --></script>

where hideHiddenDivs() is defined as:

function hideHiddenDivs( )
{
  if (document.getElementById)
  {
    document.write(
      '<style type="text/css"> div.hidden { display: none; } </style>');
  }
}

I could have put this scriptlet inline, but the W3C validator complains about the embedded "</style>" - with some justification, as it turns out, since in HTML the "</" sequence inside a SCRIPT element ends its content, so an inline "</style>" in a string would need to be escaped (for example, as "<\/style>").

(Originally posted on Advogato.)

2006-04-19

HTML/CSS/JavaScript

Ankh, dorward: Thanks for your comments. For a sloth like me, it's not easy to once again overhaul the entire site to make it XHTML - I'll let it remain at HTML 4.01 for the time being. By the way dorward, I did not know until very recently that unlike XML, things like "<br/>" are not valid HTML elements. I used to insert "<br/>", "<p/>", "<hr/>", etc. liberally throughout my pages mistakenly thinking it's the "right" thing to do.

A rant: I don't know much of HTML/CSS/JavaScript, but I really wish for the ability to "#include" files (for example, for page headers and footers) and to define macros (for example, to generate a news item's headline and content elements linked to each other). I know these can be overcome by using JavaScript and document.write(), but that's a kludge. I also know that these can be achieved on the server, but I do not want to depend on it - I keep moving my website from one (free) provider to another and I also like it to behave exactly the same way when accessed from my local filesystem as from a remote server. Note that we already have inclusion mechanisms for external stylesheets, scripts, etc. so this is not something too difficult to provide.

Now on to something that I hope you HTML/CSS/JavaScript gurus can help me with: I'm trying to implement a handy expandable/collapsible news entries mechanism for my website somewhat similar to what is explained in this article. I have already implemented most of what I want and it can be seen in action on my site, but it's not "right". In particular, I want this system to behave properly whether JavaScript is enabled or not and whether stylesheets are enabled or not, that is, under the following conditions:
  1. JavaScript enabled, Stylesheets enabled
  2. JavaScript disabled, Stylesheets enabled
  3. JavaScript enabled, Stylesheets disabled
  4. JavaScript disabled, Stylesheets disabled

My implementation works right now under #1.

For doing #2, I make the stylesheet actually declare "hidden" elements as visible, but then use JavaScript attached to the "onload" event of the page to turn them invisible - if the user doesn't have JavaScript enabled, he still gets to see all the content properly. Note that I cannot use the alternative suggested in the article I have linked to; that is, something like:

<noscript>
<style type="text/css">
.hidden { display: block; }
</style>
</noscript>

does not work since the W3C validator rejects it - NOSCRIPT cannot occur inside HEAD, but STYLE can only occur inside HEAD. The downside of my approach is that there is a short but noticeable and sometimes distracting phase under both IE and Firefox, where the browser loads and renders the full page and then hides the hidden sections. Isn't there a better way of achieving this while still remaining strictly valid?

#3 poses a slight problem in that I wish that even the "togglers" do not appear if stylesheets are disabled. I was thinking of iterating through the stylesheets defined for the document in the DOM, checking whether all of them are disabled, and omitting the togglers if they are. Is there a better way of doing this?
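For what it's worth, the iteration described above can be factored into a small helper. This is only a sketch of the idea, and the function name is my own invention; in a browser one would pass it document.styleSheets:

```javascript
// Returns true only if the given stylesheet list is non-empty and
// every stylesheet in it is disabled. In a browser, call it as
// allStyleSheetsDisabled(document.styleSheets) and skip emitting
// the togglers when it returns true.
function allStyleSheetsDisabled(sheets)
{
  if (!sheets || sheets.length === 0)
  {
    return false;  // no stylesheets defined - nothing to conclude
  }
  for (var i = 0; i < sheets.length; i++)
  {
    if (!sheets[i].disabled)
    {
      return false;
    }
  }
  return true;
}
```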

#4 is automatically taken care of by the "solution" to #2, since only JavaScript is used to emit the togglers. The user still gets to see the entire contents.

Perhaps I'm just wasting my time as #2 and #3 are unlikely to happen with real visitors to my pages - #1 is what almost all human visitors are likely to have and #4 is what almost all search engine bots are likely to have. This impractical fussing might explain why I have not become a manager. ;-)

(Originally posted on Advogato.)

2006-04-17

Website Redesign

After much procrastination, I have finally updated my website to be more standards-compliant, better looking and somewhat easier to navigate. I wanted to switch completely over to XHTML, but that has its own problems besides the lack of support in Internet Explorer. I have therefore settled for HTML 4.01 Strict. Every page on my website should now validate with the W3C validator. To enhance the looks of the site, I am using a variant of the Sinorca 2.0 design created by haran and provided by OSWD. I stumbled upon OSWD while admiring the recent makeover of Tom's site (which uses the Blue Haze design, also created by haran).

While I was at it, I renamed the folders and files that had names like "phartz", "philez", etc. - these names had looked "kewl" half a decade ago, but now look rather juvenile. This results in some of the links posted elsewhere becoming invalid and I apologise to anyone affected by this change. I have also implemented support for simple expandable and collapsible sections so that some of the pages do not appear intimidatingly verbose.

Right now the website is mostly an exercise in vanity. I need to add content that is actually useful so that someone other than googlebot finds the website interesting.


(Originally posted on Advogato.)

2006-01-13

rmathew.com

I have finally registered the domain rmathew.com for myself. It just points to my site hosted by Hosting Zero. The registration was surprisingly easy and rather quick. Within just a couple of minutes, the domain was accessible and the redirection was working. I chose GoDaddy.com even though their registration fee was higher than Yahoo!'s, because their private registration add-on made the total slightly cheaper overall - I really do not want random people to access my personal information via a WHOIS lookup.

The "gotcha" about this registration turned out to be that I inadvertently ended up with a pre-approved payment agreement with GoDaddy.com on my PayPal account, even though I had paid the full amount upfront and had marked my domain for manual renewal. Talking to their customer support didn't help much, except to stuff my mailbox with verbose, graphics-heavy HTML messages. I have tried to contain the damage a bit by limiting the monthly outflow to USD 0.01, but I don't think I'll be sticking with them after my domain registration expires.

(Originally posted on Advogato.)

2004-12-08

Hosting Zero

For some reason, the guys at Hosting Zero suddenly offered to host my site for free! So I moved my web site there. They seem to have far more facilities than Tripod, where I used to host my site, and no irritating advertisements or popup windows.

Let us see how this works out. A big thanks to the guys at Hosting Zero - do check them out.

(Originally posted on Advogato.)