*** MOVED ***

NOTE: I have merged the contents of this blog with my web-site. I will not be updating this blog any more.


Indian Cuisine

The latest issue of The Week has an article titled "The Taste of India" by Vir Sanghvi, where he tries to explain why most foreigners do not appreciate Indian cuisine. Apart from being a famous journalist and a television interviewer here in India, Vir Sanghvi has also been writing about food for some time. He also hosts the show "A Matter of Taste" on Discovery Travel & Living. In this article he argues that what foreigners normally get to taste is not real Indian cuisine, and that Indian cuisine is far more varied and subtle than most other cuisines.

Many of us realise that most foreigners do not consider Indian cuisine to be in the same league as, say, French or Italian cuisine. The complaints are familiar:
    All Indian dishes look the same, a sort of brown mess; the level of spices is so high that you can never taste the flavour of the original ingredients; the cuisine has not evolved over the decades unlike French food; and Indians have no understanding of texture.

Why does our food invite this epicurean derision? More importantly, have these foreigners had a chance to taste the real Indian cuisine?
    First of all, most of the so-called 'experts' who are sniffy about Indian food haven't actually eaten it in India. They rely on British-Indian cuisine, a bastard school of cooking invented by Bangladeshis tinkering around with the Punjabi menu. Secondly, even those who have eaten in India have never experienced the diversity of our nation's cuisine. Foreign foodies like countries such as Thailand where they can tour the length and breadth of the whole nation in one week and declare themselves 'experts' on Thai cuisine.

    Foreigners rarely recognise this unique characteristic of Indian cuisine because they either eat their meals in Indian restaurants abroad or stick to hotel restaurants where chefs faithfully reproduce the standard recipes they were taught at catering college.

    That is a tough one because when most people talk about great cuisines they talk about restaurant food. And, let's face it; there aren't many great Indian restaurants. If you want amazing south Indian food, you need to go to people's homes. If you want good Lucknawi cuisine your best bet is still one of the wedding caterers from an old Muslim family. Rarely, if ever, will you get food of that calibre at restaurants.

This is something that I have noticed as well. The food that you get in the restaurants, even here in India, rarely comes close to what people actually prepare in their homes. Restaurants in five-star hotels are particularly pathetic in that they serve utterly bland dishes for exorbitant prices and call it Indian cuisine. On the few occasions that I have travelled abroad and eaten in an "Indian" restaurant, the dishes didn't even come close to what we Indians would recognise as our cuisine. I am told that the same problem plagues Chinese cuisine - what we get in "Chinese" restaurants, especially here in India, is far from what the Chinese actually eat in their homes. You really must eat in an Indian's home to get a feel for Indian food.

There are also some nice titbits about the evolution of Indian cuisine in the article.
    But if we are talking about adaptability, then consider this: until the European colonists got to India, we had never heard of the chilli and had never seen a potato. Can anybody imagine modern Indian cuisine without these ingredients?

    The tandoori chicken is now the world's most famous Indian dish. But how many people recognise that tandoori chicken, its cousin the chicken tikka and their ill-begotten offspring the butter chicken, were all invented in the 1940s and sprang to fame only in the 1950s and 1960s? If that is not culinary evolution, then what is?


IIT Kanpur

I had a chance recently to visit my alma mater IIT Kanpur. It has been over 11 years since I graduated from that place. This is the first time in all these years that I got to visit the place, though I had been yearning to do so all the while. It turned out to be a mixed experience - I found IIT Kanpur to be familiar and estranging at the same time.


The Golden Compass

Some days back, I saw the trailer for the film "The Golden Compass". I am eagerly looking forward to watching this film, as it is based on the eponymous book that is the first in the fantastic "His Dark Materials" trilogy by Philip Pullman.

"His Dark Materials" sadly does not seem to have achieved as much popularity as some of the other less-deserving sagas. I for one didn't even know that such a series existed until Ananth had pointed it out to me and given me his copies of the books to read. Once I had read it though, I liked it immensely (my review of the series on Amazon.com; the same on my web-site). I sometimes even use the names "Lyra" and "Pantalaimon" in my code and scripts instead of the prosaic "foo" and "bar", much to the bewilderment of the reviewers.

Ananth is so excited about watching the film and its sequels that he has already re-read the books to refresh his memory. He plans to watch the first show of the first day for the film. He eagerly laps up articles about Dakota Blue Richards, the young girl who will play the role of "Lyra", the charming protagonist of the books.

I always feel a mixture of excitement and dread when I learn of a film based on a book that I have liked - for me, the Lord of the Rings trilogy is the only such adaptation that has not left me disappointed. When you read a book, you imagine the characters and the scenes in a certain way that might not be shared by the director of a film. A film is also constrained by its running time, which might not be enough to develop all the characters to the extent the book has done. When those who have not read the book watch the film, they usually either get confused or miss a lot of (what you consider) important references in the dialogues. You still look forward to watching such films because you want to see how the directors have realised the books on screen, because you want to revisit the characters and the story, and because you hope that the films will get more people interested in reading the respective books.

I will keep my fingers and my toes crossed.


New Delhi

In a quirk of modern air-travel, it is usually cheaper to travel from Bhopal to Bangalore via New Delhi, which is about 750 kilometres in the opposite direction, than via Mumbai which would appear to be a much shorter route. This time Anusha and I decided to use the opportunity thus afforded to stay over in New Delhi for a little over a day and see some of the main tourist attractions.

As I have noted before, to someone coming from a place like Bangalore where traffic is always slowed down by congestion and potholes, the fast traffic on the smooth and wide roads of New Delhi comes as a bit of a shock. Taxi drivers drive even faster than the other folks, not bothering to slow down for junctions or turns. Everyone snakes in and out of traffic lanes with abandon and freely exchanges expletives at the slightest provocation.

Despite the presence of so much traffic, the air-pollution seems to be manageable, no doubt helped by the mandatory installation of Compressed Natural Gas (CNG) systems in public vehicles. New Delhi also has a lot of green trees, quite unlike that jungle of concrete, Mumbai. One of the more surprising sights for me in New Delhi was the row of toll booths at the entry to the DND Flyway, something that you rarely see in a developing country like India.

Among the standard tourist attractions, some of the places were only mildly interesting and others downright underwhelming. However, I was particularly fascinated by Humayun's Tomb, the Qutub Minar complex and the Red Fort.

I had not expected Humayun's Tomb to be such a big, peaceful and beautiful complex. It was restored and beautified only as recently as 2003. The main mausoleum is particularly remarkable since it looks so similar to the Taj Mahal, although it was constructed almost 100 years before the latter. Some of the other buildings in the complex are also nice, although many of the places reek very strongly of bat droppings.

The Qutub Minar, too, surprised me with its height and beauty. It has beautiful carvings on its walls and a nice pattern to the first three storeys. Sadly, however, the last two storeys, which were later additions, are quite discordant with the rest of the tower. While the Qutub Minar is quite impressive by itself, one can only imagine the magnificence of the entire complex in its prime by looking at its sad remains. Some of the walls still retain bits of intricate carvings.

The Red Fort was built by the Mughal emperor Shah Jahan as a palace for his eponymous new capital Shahjahanabad. It is a splendid structure and seems fairly well-preserved. Of particular note are the royal residence, the "deewan-e-aam" (where the emperor met the common people), the "deewan-e-khaas" (where the emperor met his ministers and important guests) and the museum housing some interesting artefacts of that era. The "deewan-e-khaas", the emperor's seat in the "deewan-e-aam" and the royal residence are lavishly decorated and are a must-see.

I have created a Picasa web-album that has some more pictures from New Delhi.


Pachmarhi

On a recent trip to Bhopal, Anusha and I visited Pachmarhi. Pachmarhi is a little-known, but very beautiful, hill-station in Madhya Pradesh that deserves to be more popular. It is the highest town in the Satpura range of hills, situated about 900 metres (3,000 feet) above sea-level. It is surrounded by the Satpura National Park.

Pachmarhi gets its name from a set of five caves carved into a big rock. These caves are called Pandava Caves because it was believed that the Pandavas stayed here for some time during their exile from their kingdom. However, it turns out that these caves were really carved out by Buddhist monks who used them as shelters.

Most of the rocks in Pachmarhi are made of soft sandstone. These rocks soak up water during the rains and then slowly release it throughout the rest of the year. This water feeds the many streams that flow through Pachmarhi and provide its beautiful waterfalls. A remarkable thing about these rocks is that some of them clearly show layers of embedded round pebbles. Though it is at a high altitude now, Pachmarhi must have been under water a long time ago.

We decided to stay in Glen View, a hotel developed by Madhya Pradesh Tourism. We also took their bus from Bhopal to reach Pachmarhi. The bus usually takes around five hours for the journey. As luck would have it, we ran into a chakka jam (road blockade) organised by the BJP that day to protest against the Sethusamudram project. This delayed us by three hours.

Pachmarhi is a very small town, most of which is occupied by the Indian army. It has a very unhurried and quiet feel to it. The local people are very friendly and helpful. It is surrounded by stunningly beautiful forests, valleys, waterfalls and caves. There are several trekking trails for enthusiasts. Some of the caves have pre-historic rock paintings. It also has several Shiva temples, each one associated with a myth relating to his fight against Bhashmasura. There are so many places to see that it can easily take about five days to explore everything properly.

It is not easy reaching these beautiful spots though. You need a four-wheel-drive car and an expert driver behind its wheel. You should also have a bit of stamina, because the car cannot take you all the way everywhere and you have to walk about 200 to 400 metres up and down a hill to reach many a spot.

I had an especially tough time reaching these spots because I found it hard to breathe - I had caught a very bad cold. The culprit was the air-conditioner in our room, which could only be run at its full strength, while the ceiling fan alone was not enough for the hot and humid weather at the time. This was one of the worst colds I have ever suffered; it took me a full two weeks to recover completely from it.

This was not the only problem with our room though. The water from the taps always ran reddish-brown at first, from the rust in the pipes, and you had to leave a tap running for some time before the water approached anything close to transparency. Even then it had an unpleasant iron flavour to it. The hotel staff was extremely nonchalant about it, informing us that this was quite normal for all the rooms in the hotel.

Pachmarhi does not have good options for accommodation at this time. None of the well-known hotel chains has a presence here, so you tend to err on the side of caution by opting for what is touted as a luxury hotel. Glen View is promoted as a luxury hotel by MP Tourism; judging by our experience, they seem to have a rather peculiar definition of "luxury". To add insult to injury, they insist on collecting 100% of the hotel tariff for all the days you plan to stay, in advance at the time of booking, with no scope for refunds. The hotel also does not accept credit cards, so you had better carry enough cash to pay the ridiculous amount it charges for its unremarkable food.

We were so disappointed by our hotel and the lack of other options that we cut our visit short by a day, losing some money in the process. We took some solace in the fact that we had saved almost 30% by booking it during the off-season. We would love to go back and see Pachmarhi again and explore the sites we could not visit, but not until there are better options for accommodation there.

I have created a Picasa web-album that has some more pictures from Pachmarhi.


Lossy Marvels

JPEG and MP3 are very popular formats for storing photographs and music respectively. They are both lossy formats and yet achieve amazing compression ratios without a loss of quality that is easily perceptible to normal people. I have always wondered how this is achieved.

The respective technical specifications are unfortunately too complicated for a layman to follow. Purportedly "explanatory" articles elsewhere gloss over too many of the important points, leaving me quite unsatisfied. Fortunately, I have recently come across two articles that seem to strike the perfect balance between these extremes.

"The Audiofile: Understanding MP3 Compression" was published in Ars Technica some time back and very nicely explains the compression algorithm behind MP3 as well as shedding some light over some of the apparent idiosyncrasies of this format. "Image Compression: Seeing What's Not There" was published by the American Mathematical Society and does a similar service for JPEG, including its successor JPEG 2000. (Come to think of it, these articles are "lossy" marvels in their own right.)

Now let us see if I can find an article with a similar depth that explains the MPEG video formats.


Alex the Parrot

The Economist carried an obituary for Alex some time back. Alex was an African Grey parrot that Irene Pepperberg had trained to actually understand what it was talking about, unlike the parrots raised as pets which merely repeat whatever they hear.

For a parrot, Alex had impressive linguistic capabilities. It could describe objects, materials, shapes, colours, etc. It could express its desires and ask questions. It could even count up to six and had a notion of "zero". Very impressive.


ICFPC 2007: Epilogue

The results of ICFPC 2007 have finally been announced. Team Smartass from Google has come first (yet again), followed by United Coding Team from the University of Cape Town (South Africa) in second place and Celestial Dire Badger (a lone hacker named Jed Davis) in third.

The organisers of the contest have an interesting report on the contest that also contains the "ideal" way one would go about solving the puzzles. Interestingly, Jochen Hoenicke managed to find a perfect DNA prefix some time after the contest was over. Impressively, Jed Davis came third by using a brute-force approach that won him the Judges' Prize - he was declared to be "an extremely cool hacker".

Update (2007-10-24): The organisers have now shared the video of their presentation about the contest at ICFP 2007.


Product Reviews by "Wayne Redhart"

Kingshuk pointed out the amusing reviews of products posted by a "Wayne Redhart" to the website of Amazon UK.

For some of the products reviewed by him, I found the product on offer more amusing than the review itself.


Tools for Indians by Google Labs India

The Google Labs India folks have just announced a couple of cool new tools for Indians. This includes being able to search in a number of Indian languages as well as a transliteration tool for easily typing in Devanagari using an ordinary keyboard.

The transliteration tool is especially nice. For example, it automatically converts "ramesh" to "रमेश". If you do not want the word the tool puts in automatically, you can select the desired word from a set of alternatives or explicitly type it out yourself.

Note that you might have to tweak things a little to correctly display Indic scripts.

Update (2007-08-23): QuillPad seems to have been in existence for some time now and has support for more Indian languages than the Google transliteration tool (which only supports Hindi at the moment).

Update (2007-08-29): Raftaar also allows you to search in Hindi using a transliterating interface.


Calculating Interest Rates

You want to buy that fancy LCD TV that costs Rs 60,000 but you do not have that much money with you. You see an advertisement in a newspaper for the TV from a dealer who offers to sell it to you if you make a down-payment of Rs 10,000 and pay Rs 4,380 every month for one year. You see another advertisement in the newspaper for the same TV from another dealer who offers to sell it to you if you make a down-payment of Rs 20,000 and pay Rs 1,330 every month for three years. How do you calculate the rate of interest each dealer is charging you for what is, in effect, a loan?

In "Calculating EMIs", we derived the formula for calculating the "Equated Monthly Installment" (EMI) on a loan. If "E" represents the EMI, "P" represents the principal amount in the loan, "r" represents the monthly rate of interest (one way of arriving at it is to divide the annual rate of interest, quoted as a percentage, by 1,200) and "n" represents the number of months in the tenure of the loan, then:

     E = P × r × (1 + r)^n / ((1 + r)^n - 1)

In the current example, we know the values for "E", "P" and "n" and wish to calculate "r". Unfortunately it is not that simple to calculate "r" using just the high-school algebra that most of us manage to remember. Fortunately there is a simple algorithm that can help us in this situation.

Let us first rewrite the formula above as an equation:

     P × r × (1 + r)^n / ((1 + r)^n - 1) - E = 0

Our task now is to find the roots of this equation - that is, the values of "r" that will make the left-hand-side (LHS) of this equation evaluate to zero.

To find the roots of a given equation "f(x) = 0", the algorithm in question can be described as follows:

  1. Find a value "a" for which "f(a)" evaluates to a negative value.

  2. Find a value "b" for which "f(b)" evaluates to a positive value.

  3. Let "c" be the average of "a" and "b".

  4. If "f(c)" is close enough to zero, "c" is the desired root.

  5. Otherwise, if "f(c)" is a negative value, substitute "c" for "a" and repeat the procedure from step #3 and if "f(c)" is a positive value, substitute "c" for "b" and repeat the procedure from step #3.

Note that this is just the bisection method - a binary search over a continuous range of values. By "close enough to zero", we mean that the absolute value of "f(c)" is less than some value, usually called "epsilon", that can be as small as we please. The algorithm given above can be rewritten as a function in a pseudo-language as follows:

guessRoot(f, a, b)
    c := (a + b) / 2;

    if (absoluteValue(f(c)) < EPSILON)
        return c;
    else if (f(c) < 0)
        return guessRoot(f, c, b);
    else
        return guessRoot(f, a, c);

You can implement this in your favourite programming language along with a function that calculates the LHS of the equation given earlier. You can choose a value of "epsilon" according to your preference - the smaller the value of "epsilon", the more accurate the result and the longer it takes to compute. The time taken for the computation is also affected by how wide the range between "a" and "b" is. Note that Newton's method is a much faster way of computing the roots of such equations, though it involves calculating derivatives.

How do you arrive at the values for "a" and "b"? This differs for each function. For our example, we can start with a low guess of "0.001%" ("0%" gives an undefined result) as the annual rate of interest and a high guess of "100%" and this gives us a negative and a positive value for the LHS respectively. With an "epsilon" of "0.00001", a C programme computes the answer in around 25 iterations.

In our example, the first dealer is offering us an effective loan of Rs 50,000 for 12 months with an EMI of Rs 4,380 and the effective annual rate of interest comes to about 9.32%. The second dealer is offering us an effective loan of Rs 40,000 for 36 months with an EMI of Rs 1,330 and the effective annual rate of interest comes to about 12.08%. In terms of the interest rates being charged by the dealers, you should now be able to tell that the first dealer has a better proposition for you when compared to the second dealer.
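The whole procedure can be sketched in a few lines of code. Here is a minimal sketch in Python (rather than C, purely for brevity); the names "emi" and "find_rate" are my own:

```python
def emi(P, r, n):
    # The EMI formula from above: E = P * r * (1 + r)^n / ((1 + r)^n - 1),
    # where P is the principal, r the monthly rate and n the tenure in months.
    return P * r * (1 + r) ** n / ((1 + r) ** n - 1)

def find_rate(P, E, n, lo=0.001, hi=100.0, eps=1e-5):
    # Binary search on the annual rate of interest (as a percentage):
    # "lo" must make the LHS negative and "hi" must make it positive,
    # matching the 0.001% and 100% guesses discussed above.
    for _ in range(200):  # hard upper bound, purely as a safety net
        mid = (lo + hi) / 2.0
        f = emi(P, mid / 1200.0, n) - E  # monthly rate = annual % / 1200
        if abs(f) < eps:
            break
        if f < 0:
            lo = mid
        else:
            hi = mid
    return mid

print(find_rate(50000, 4380, 12))  # first dealer: about 9.32
print(find_rate(40000, 1330, 36))  # second dealer: about 12.08
```

The two printed rates should match the figures quoted above to within a hundredth of a percentage point or so, the exact values depending on the chosen "epsilon".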


Advogato Diary Imported

I have imported all the old entries from my Advogato diary into this blog. These entries are labelled "advogato diary".

It was easy to do since Advogato used to provide a simple way to export all my entries as an XML file (it does not work any more) and Blogger has a nice and simple API that allows me to post entries automatically, even allowing me to back-date and label them appropriately.

However, I did encounter an unexpected problem during this import. It turns out that if you post too many entries in a single day (in my case the limit seemed to be 50), Blogger thinks that you are creating a spam blog and turns on "word verification" for posts (a CAPTCHA). While you can still post entries manually, the API provides no way of retrieving the CAPTCHA and resolving it using human input. After 24 hours the word verification is automatically switched off and you can post using a programme again. I therefore had to spread the import over 5 days since I had 231 entries to import.

By the way, if you worry about search engine rankings for your pages, you might want to note that such mass imports cause Google to downgrade your site for having duplicate content if the old blog is still accessible.

Finally, Advogato is still alive and being maintained.


Blog Tweaks

I tweaked this blog in the last couple of days in the hopes of making it a little better - a little better-looking and a little better-behaved. Read on for the details.

The tweaks include:

  • Using a better-looking template. The old template was a bit boring and quite minimal. It made it hard for most people to read all the text I was spewing. The new template looks better (at least to me). It has a narrower column width to display the text, somewhat similar to those in newspapers and magazines, which makes it easier for most people to read the text.

  • Showing only the initial paragraph from each post on the main page. You can read the full post using the "Read More..." link at the bottom of each such paragraph. This makes it easier to skip over posts that you are not interested in reading.

  • Showing only a preview of the post in the feeds. I used to feel bad about my banal verbiage eating up lots of space on Planet Classpath and other such "planets". This change should let people easily skip over my posts if they don't care for what they see in the preview and navigate to the page containing the full post if they do. It should also benefit people who have subscribed to this blog using a blog aggregator. To do this a bit better than what was possible with Blogger's own feed mechanism (but still not entirely satisfactorily), I have had to redirect the Blogger feed for this blog to the FeedBurner feed for this blog.

  • Giving at least something back to Google for providing this great service for free. I used to feel bad about being yet another leech on Google's resources. I signed up for Google AdSense via Blogger. Now each page on this blog shows textual advertisements relevant to the context of the page.

I do realise that there are negative aspects of each of these changes that some folks are not going to appreciate. However, I believe that each of these changes is for the better, all things considered.

Update (2007-08-10): Anusha did not like the fact that you navigate to a different page when you click on "Read More...". It is also not fair to the reader since the entire post is already there on the main page, but hidden from view. So now I have changed the blog template to expand and collapse the rest of the post in place using a combination of techniques shown here, here and here, with a few adjustments of my own.

Update (2007-08-29): The feeds now have the full contents of the posts once again.


Disabling atime Updates

A recent article on KernelTrap highlights the high cost of supporting atime ("last-accessed time") updates on Linux file-systems. It has been suggested that desktop machines should just mount their file-systems using the "noatime" option to avoid this overhead.

Each time you read a file, its atime has to be updated. This can quickly become costly if you have applications that access a lot of small files. Most modern desktop environments, office suites, compilers (think of C/C++ headers), browsers, etc. fall into this category, so Linux takes an unnecessary performance hit for data that is of interest only to a very small set of applications like tmpwatch. (Apparently even Windows has the same issue with NTFS.)

I have now changed the "/etc/fstab" on my PC to mount its file-systems using the "noatime" option. It does seem to have slightly improved the responsiveness of the desktop, though this could just be a placebo effect. On the other hand, in the KernelTrap article people have presented measurements that demonstrate the actual performance improvements brought about by using this option.
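For reference, the change amounts to adding "noatime" to the options field of each file-system entry in "/etc/fstab". A sketch of what this looks like - the device name and file-system type below are illustrative, not taken from my actual machine:

```shell
# Illustrative /etc/fstab entry with atime updates disabled
# (the device and file-system type here are only examples):
#
#   /dev/sda1  /  ext3  defaults,noatime  1  1
#
# An already-mounted file-system can be remounted with the new
# option immediately, without a reboot:
mount -o remount,noatime /
```

Note that remounting a file-system requires root privileges.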


Blog Comments

Joel Spolsky links to a post by Dave Winer on blog comments as well as providing his own views on the subject.

Dave opines:
    If it was one voice, unedited, not determined by group-think -- then it was a blog, no matter what form it took. If it was the result of group-think, with lots of ass-covering and offense avoiding, then it's not.

    Well actually, my opinion is different from many, but it still is my opinion that it does not follow that a blog must have comments, in fact, to the extent that comments interfere with the natural expression of the unedited voice of an individual, comments may act to make something not a blog.

while Joel adds:
    When a blog allows comments right below the writer's post, what you get is a bunch of interesting ideas, carefully constructed, followed by a long spew of noise, filth, and anonymous rubbish that nobody ... nobody ... would say out loud if they had to take ownership of their words.

    Dave is absolutely right. The way to give people freedom of expression is to give them a quiet place to post their ideas. If other people disagree, they're welcome to do so... on their own blogs, where they have to take ownership of their words.

The timing of Joel's post couldn't have been better as I was recently wondering about the same issues myself. I was seriously considering disallowing comments on my blog posts. Some of the main reasons were:

  • Impurity - it no longer remains purely my own ramblings (Dave makes the same point). My utterly inane ramblings get combined with the inane ramblings of other, mostly anonymous, folks. It starts to look like a mailing list where I start a thread and others join in.

  • Overhead - I have had to review and moderate every comment since the time spammers discovered this blog and started abusing the comments facility to post links to their sites in order to boost their ranks with search engines (CAPTCHAs don't seem to deter them). I would like to avoid this unnecessary overhead.

  • Liability - I seem to unnecessarily become liable for the contents of the comments since they are available from my blog. I moderate comments simply to weed out spamming efforts, not to censor or alter them. Reasonable folks would agree that the respective posters of the comments should be liable for their content, but as we all know, reasonable folks are a sad minority in this world.

  • Noise - while I try to put some thought and effort into the material posted here, it gets diluted by the utterly trite comments that sometimes follow it, especially when people post under the cover of anonymity (Joel makes the same point). Insightful or interesting comments are a sad rarity on my blog.

  • Lock-in - the ability to collect and collate comments is one of the major reasons I am forced to be with Blogger or similar blogging platforms. I would ideally like to be able to merge this blog with my web-site and only upload static pages to my web-site. I would then not depend on anything other than the very basic hosting facilities and this would let me easily switch hosting providers.

However, comments are not all bad, of course. Some of the main reasons I continue to allow comments on this blog include:

  • Feedback - at worst, it tells you that at least some people took the trouble of navigating to your blog and reading your blog post. At best, a "Thank you!" warms your day up and a "This sucked!" goads you into writing better. In any case, you get to know that your efforts have not entirely been wasteful.

  • Ease - comments allow a reader of your blog to quickly and easily leave feedback for you. Emails are a little burdensome for this purpose, not to mention a bit formal. Making everyone respond to your blog post via their own blog posts (as Joel seems to suggest) looks too awkward to me - you would have a very hard time keeping up with the responses and most readers would just give up trying to leave feedback for you (perhaps that is indeed the effect Joel intends).

  • Scale - as Clay notes, if you are a small-time blogger (like yours truly), the signal-to-noise ratio in your comments is very likely to be much better than that on more popular blogs and web-sites that allow comments. For the same reason, the volume of comments is also likely to remain manageable enough for you to be able to moderate them.

  • Anonymity - some people are just not comfortable with revealing their identities to you, but would still like to leave a comment for you - perhaps anonymity provides them the security needed to provide frank opinions, perhaps they are shy, perhaps they don't want to sign up with Google just to be able to leave a comment for you, perhaps they just don't want to be seen as a person caught reading blogs in general or your blog in particular, etc.

  • Enhancement - some of the best comments are those that expand on the blog post by providing further information, clarifications, alternative ideas, etc. This enhances the value of your blog and makes it more appealing to your readers.

On the whole, blog comments appear quite useful to me despite their obvious warts. I will continue to allow comments on this blog, even those posted anonymously, as long as it remains manageable. I hope I continue to remain small enough to escape the attention of the trolls.



The Amiga

Ars Technica has just published Part 1 of what looks like a very interesting series of articles on the history of the Amiga series of personal computers.

The Amiga was quite unlike the other PCs of its time and could supposedly handle multimedia with an ease that put the IBM PCs of that time to shame. Sadly, I never had the chance to work with an Amiga myself. As is usual in the computer industry, however, mere technical brilliance does not guarantee survival and popularity, and in the end the IBM PC prevailed, while Commodore, the makers of the Amiga, went bankrupt. Being an early user and fan of the BBC Micro, I can also bitterly attest to this sad turn of events that made the IBM PC the overwhelmingly dominant PC. Even though the Intel 8086 CPU was awkward to work with, DOS was an abomination of an operating system and the IBM PC was quite limited in its capabilities, none of this could hold the IBM PC back from reigning supreme and from killing off other personal computers (the Apple Macintosh being a notable exception).

Some time back, I saw the second volume of MindCandy. This volume was about the Amiga demo-scene while the first volume was about the IBM PC demo-scene. I had followed the IBM PC demo-scene from about 1993 to about 2000, so the first volume also evoked nostalgia apart from being fun and awe-inspiring. The second volume was no less awe-inspiring - watch Lapsuus by Maturefurk and then consider the fact that it was running on an Amiga with a Motorola 68060 CPU running at 75 MHz at best! Amazing coding skills on display on an amazing piece of hardware.

Update (2007-08-14): Part 2 is now on-line.

Update (2007-08-22): Part 3 is now on-line.

Update (2007-10-22): Part 4 is now on-line.

Update (2007-12-12): Part 5 is now on-line.

Update (2008-02-11): Part 6 is now on-line.

Update (2008-05-13): Part 7 is now on-line.


ICFPC 2007

I spent most of this weekend taking part in the ICFP contest for 2007. As in 2005, I took part in the contest with Yumpee as my team-mate. Our team was called "Kalianpur Bakaits" (an IIT-K reference) in the beginning, but we changed it to "The Great Indian Rope Trick" later, for reasons that should become obvious in a while.

This year's task was very similar to that from last year, but it was a bit tougher and even more fun. The organisers surely must have spent a lot of time and effort in putting together the task description and the supporting materials. No mistakes were discovered in the task description for this year's contest (unlike those for the previous years' contests). The mailing list also had unusually low traffic during the contest - perhaps the task description was precise enough, perhaps a lot of people were finding it difficult just to get started, or perhaps the contest was affected by the release of the latest Harry Potter book during the same weekend.

This year's task was to save an alien named "Endo". Endo was a "Fuun" travelling in a spaceship named "Arrow", when it had a mishap with an "interstellar garbage collector" that picked it up and then dumped it on Earth. This caused critical damage to both Endo and Arrow. Our job was to repair Endo's DNA in such a way that Arrow would be able to restore it to its former condition using its limited remaining power.

Endo's DNA is a long sequence comprising four bases, 'I', 'C', 'F' and 'P'. This sequence can be interpreted to create an RNA sequence comprising the same four bases that in turn can be used to build the proteins necessary to restore Endo. Prefixes comprising the four bases can be attached to a DNA sequence to modify its behaviour - we had to find a cheap and low-risk prefix that Arrow could process to repair Endo's DNA.

This translates to creating an interpreter for the DNA sequence that produces an RNA sequence, creating another interpreter for this RNA sequence that creates an image out of it and finally discovering a prefix that could restore Endo. The organisers provided us the DNA sequence for Endo, a "source" image resulting from interpreting this DNA sequence and a "target" image representing a fully-restored Endo. We were asked to create a DNA prefix that results in an image that is as close to the target image as possible, while being as short and as cheap to process as possible.

The task description was detailed enough for us to readily build a DNA to RNA converter as well as an RNA to image converter. The challenge was to do the processing quickly enough since the given DNA sequence required about 2 million iterations over a string that was about 7.5MB long. I created the DNA-to-RNA converter, while Yumpee built the RNA-to-image converter.

Since we had to splice and join huge lists while interpreting the DNA, I started off with a C implementation that used Lisp-style "cons" cells. While it seemed to perform well (though not well enough for the task), it had hard-to-track bugs that caused segmentation faults, not to mention memory leaks. I then tried using Java for creating the interpreter, which resulted in a correct but even slower interpreter. By this time, I had wasted two of the three days allotted for the task. We had not made much progress, while a lot of teams seemed to have discovered the same "magical" prefix comprising 28 bases that had boosted their rankings.

This is when I finally decided to act on Yumpee's suggestion of looking into using "ropes" as the underlying data structure. He had been suggesting it repeatedly since the time he had seen the task description, but I kept putting it off since I was not familiar with it. I decided to check out the "cords" implementation of ropes that comes with Boehm-GC. I modified my original interpreter in C to use cords instead of my home-grown and buggy cons-cells implementation.

The difference in performance was startling! The very first implementation ran through all the 2 million iterations in about 10 minutes on my PC and in about 4 minutes on Yumpee's PC. The code had also become much simpler to read and much closer to the pseudo-code provided in the task description. I debugged it a little to remove a bug that resulted from a misreading of the specification and then handed it over to Yumpee. By this time, he had a working RNA-to-image converter (he was also busy barking up the wrong trees till that time) and had also discovered the magic 28-base prefix. Unfortunately for us, we had just nine hours left to finish the task.
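The idea behind ropes can be illustrated with a toy sketch (in Python here, and deliberately much simpler than the actual Boehm-GC "cords" API): concatenation merely builds a tree node instead of copying any characters, so splicing huge sequences stays cheap, and the full string is materialised only once at the end.

```python
class Rope:
    """A toy rope: a leaf holds a string, an inner node holds two sub-ropes."""

    def __init__(self, left, right=None):
        self.left, self.right = left, right  # leaf when right is None

    def __add__(self, other):
        return Rope(self, other)  # O(1) concatenation: no characters copied

    def flatten(self):
        # Iteratively collect the leaf strings from left to right,
        # joining them only once at the very end.
        out, stack = [], [self]
        while stack:
            node = stack.pop()
            if isinstance(node, str):
                out.append(node)
            elif node.right is None:
                stack.append(node.left)
            else:
                stack.append(node.right)
                stack.append(node.left)
        return "".join(out)

dna = Rope("ICFP") + Rope("PFCI") + Rope("ICFP")
print(dna.flatten())  # ICFPPFCIICFP
```

A real rope implementation also rebalances the tree and shares subtrees between ropes, which is what makes the repeated splicing in the DNA interpreter so cheap.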

We discovered that the fun had just started. Embedded in the DNA were a series of clues that formed a virtual treasure-hunt, involving encrypted pages, EBCDIC-encoded text, fractals, callable functions, etc. We discovered a lot of clues but we couldn't find a prefix in time that would improve our rankings. It felt good that we could make so much progress by the end of the contest, unlike the last time. However, I felt like an idiot for not having listened to Yumpee earlier and saving myself and our team a lot of wasted time.

As I always end up saying after such contests, I hope we have better luck the next time.

Some more reports: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, etc.

Update (2007-07-26): I tweaked the code for the DNA-to-RNA converter a little bit to avoid creating a lot of unnecessary cords and that reduced its running time to about 1 minute 35 seconds on my PC (about 7 times better than before) and about 30 seconds on Yumpee's PC. Right now the garbage collector itself is the biggest consumer of CPU cycles (about 45% on the whole), but disabling it leads to the programme quickly running out of memory on my PC.


"Finance, Investments 'n' Trading"

"Finance, Investments 'n' Trading" is a weblog by Shobhit that tries to bring some sanity to the general mania currently surrounding the Indian stock markets.

You should read his articles in chronological order (he provides a helpful "Table of Contents" article for this purpose). He might sound too pessimistic to some and I do not necessarily agree with everything that he says, but the articles are still a good and recommended read, particularly if you actively buy and sell stocks or are thinking of getting into it.

The Indian stock markets have been in a prolonged and almost-continuous bull run for about three to four years. So many people have apparently made so much money with such ease that everyone from students and housewives to army men and retired government workers wants to jump on the bandwagon for fear of being left out. It does not help that newspapers, magazines and TV channels provide disproportionate coverage of the "excitement" surrounding the Indian stock markets, as if it were a sport.

When caution and reason are abandoned in times of general euphoria, many people are likely to get their hands burnt (sometimes without even realising that they have made a net loss, taking inflation, taxes and trading charges into account). The only people who consistently make money in such situations are the stockbrokers (for example, read Warren Buffett's "Chairman's Letter" to Berkshire Hathaway shareholders for 2005 (PDF) from page 17 onwards).

I don't think that investing in stocks is necessarily a bad thing or that the current bull run is not backed by a real growth in the Indian economy. I just wish that people think a bit more rationally, keep realistic expectations of returns and figure out how to calculate their real loss or gain from trading, before jumping in.

I wish that more people read and understand the advice given in Benjamin Graham's superb book "The Intelligent Investor".


Expiry Dates

If medicines and food items become dangerous to consume after their expiry dates, shouldn't pesticides become more powerful after their expiry dates?


Dragon Ball Z

Many years ago, a cartoon TV channel in India started showing Dragon Ball Z. They showed 53 episodes of this series comprising the Vegeta Saga and the Namek Saga. Just as the series got really interesting, they yanked it off the air without notice and without even a word of apology.

Loyal viewers of the series were aghast. They pleaded with the channel to show them the rest of the series as well, rather than leaving them on tenterhooks. The channel could not care less. The viewers were frustrated and cursed the channel using the choicest of expletives. They did not have much choice.

Several years later, the same cartoon channel started showing Dragon Ball Z once again. Some of the old viewers hoped that the channel would show the entire series this time and watched the episodes once again to refresh their memories.

They were naïve. The channel showed the same 53 episodes of the series and then abandoned it yet again at the same critical point in the story.

They were idiots. The channel as well as the viewers.


Data Visualisation with Gnuplot

Visualisation of data using charts and other types of plots is immensely helpful in getting a feel for it without carrying out detailed analyses. Gnuplot is a freely-available tool for data visualisation that is also very simple to use. The article "Visualize your data with gnuplot" is a nice introduction to this tool. Gnuplot proved to be quite handy for me recently.

I wanted to find out whether the Unit Price of a particular fund varies in line with the popular equity market indices in India, the NSE S&P CNX Nifty and the BSE Sensex. The current values of these indices are always readily available in the newspapers and on television channels, while I have to use the web-site of the fund to get its current Unit Price. If the Unit Price of the fund varied in line with the values of the equity market indices, it would save me some effort in determining its current worth.

The portfolio of the fund in question is almost entirely based on equities. It holds the shares of some of the biggest and the most stable companies across a variety of industry sectors. It was therefore reasonable to suspect that its Unit Price would vary in line with the values of the indices. However, it is not as diversified as the indices and it might not have invested across sectors in the same proportion as that represented by either of the indices.

It was easy to obtain the historical closing prices of the two indices and the Unit Prices of the fund. To keep things simple, I only considered the current month for making this comparison. To simplify things further and to improve the visualisation, I normalised the first value in each series to "100" by scaling all the values appropriately. (This is a technique that I have often seen put to good use in The Economist.)
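The normalisation step is trivial; here is a quick sketch of it in Python (the sample values in the example are made up, not taken from the actual data):

```python
def normalise(series):
    """Scale a series of values so that its first value becomes 100."""
    base = series[0]
    return [value * 100.0 / base for value in series]

# For example, an index that closed at 4297.05, 4285.10 and 4313.75
# (hypothetical values) is rescaled so the series starts at 100:
print(normalise([4297.05, 4285.10, 4313.75]))
```

After this, series with very different absolute levels (an index in the thousands, a Unit Price in the tens) can be plotted on the same axis and compared directly.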

Using Gnuplot, I obtained the following chart:

This gave me the desired answer right away!

In case you're curious, here are the Gnuplot commands I used for creating the chart above:

# We want PNG output.
set terminal png
set output "plot.png"

# Specify how and where the key (legend) for the chart should
# appear.
set key bottom right
set key width 2 box

# Tweak the look of the chart.
set title "Fluctuations in Unit Prices Relative to Market Indices"
set xlabel "June 2007"
set ylabel "Normalised Value"
set grid

# The data on the X axis represent time values, so switch the X axis
# to time mode before specifying the format, tics and ranges. Note
# that the dates use four-digit years, hence "%Y" rather than "%y".
set xdata time
set timefmt "%d-%b-%Y"
set format x "%d"
set xtics "01-Jun-2007", 3600*24
set xrange ["01-Jun-2007":"22-Jun-2007"]
set yrange [95:100]

# Plot the chart using data files normalising the values in
# each case.
plot \
'nifty.dat' using 1:($5)/42.9705 \
title 'NIFTY' with lines linewidth 2, \
'sensex.dat' using 1:($7)/145.7075 \
title 'SENSEX' with lines linewidth 2, \
'fund.dat' using 1:($2)*100.00/57.0337 \
title 'FUND' with lines linewidth 2

By the way "Junk Charts" seems to be a blog devoted to criticising charts that appear in various magazines and web-sites in general and in The Economist in particular.


iTunes and QuickTime

iTunes and QuickTime are probably the Apple applications that most non-Mac-OS-X users get exposed to, and it is a shame that they are so frustratingly unintuitive, not to mention bloated and slow. Are these really made by the same company that is famous for putting a lot of emphasis on the usability of its hardware and software products?


Indic Scripts and Linux

If you have the fonts for Indic scripts (for example, the Lohit fonts), Firefox on Linux is able to display the Devanagari text on sites like BBC Hindi and Google News in Hindi. (Devanagari is the primary writing system for languages like Hindi, Sanskrit, etc.) However, if you are using the builds released by mozilla.com, you would notice that the matras (diacritics) are not applied properly to form the correct ligatures. For example, the word "हिन्दी" ("Hindi") itself is not rendered properly. Konqueror does not suffer from such problems.

It turns out that Firefox does not support complex text layout because it doesn't use Pango in the officially-released builds (Firefox 3 will support it by default). You have to either compile it yourself from the source and enable the support for Pango by using --enable-pango, or use a build that has Pango enabled - for example, the builds provided by the Fedora Project. (Setting the environment variable MOZ_ENABLE_PANGO to "1" had no effect for me.)

On Fedora Core 6 (FC6), it is very simple to get this working:
  1. Install the fonts for the Indic scripts you are interested in. For example, "sudo yum install fonts-hindi" , "sudo yum install fonts-malayalam", "sudo yum install fonts-kannada", etc.
  2. Install a Firefox build for Fedora using "sudo yum install firefox". Note that FC6 installs Firefox 1.5 by default - if you prefer Firefox 2.0 instead, you can install it using "sudo yum --enablerepo=development install firefox".

By the way, I recently came across Omniglot, a site about the writing systems of almost all known human languages, existing or extinct, naturally-evolved or artificially-created. I found it extremely fascinating and insightful. For example, I did not know that Devanagari is considered not an "alphabet" but an "abugida". Check out the International Phonetic Alphabet (IPA) that can represent almost all spoken languages. How about Loglan (and its freer derivative, Lojban) that claims to be a "logical" language? (I first came across the IPA on Wikipedia, where it is used to provide the pronunciation for some terms. xkcd is where I first read about Lojban.)


ICFP Contest 2007

The 10th ICFP Contest is scheduled for the weekend of 20th July 2007. I hope it turns out to be as fun as the one held last year.

There is already a blog written by one of the organisers that contains some teaser puzzles. (Do these images use some form of steganography or can we simply work out the graphical transformations applied to the original image and apply them in reverse to obtain the desired image? I wonder.)

"Superstition Reigns"

"Superstition Reigns" by Rahul Singh, published in The Times of India today:
"Amitabh Bachchan, an icon for tens of millions of Indians, makes his daughter-in-law [Aishwarya Rai] perform outlandish ceremonies because she is supposedly under the evil influence of Mars. Politicians routinely consult astrologers before taking important decisions, despite abundant proof that astrology is no science at all, just quackery. Horoscopes continue to be cast in most families and palmists consulted. A newly-inducted cabinet minister insists that her bungalow be completely redesigned because it does not follow vaastu principles, a system nobody had heard of till only a few years ago."

Superstition in all its ugly forms is sickeningly pervasive in India, even among educated people who ought to know better. We waste a lot of time and money and unnecessarily make life difficult for ourselves as well as others, all in the name of something that doesn't withstand rational scrutiny.


Firefox 3 and Linux

Mike Connor blogs about the proposed requirements for Firefox 3 to run on Linux. A nasty surprise for me was the inclusion of GNOME as a dependency. While the GTK/Pango/Cairo/etc. requirements are quite understandable, I don't understand why it should need GNOME. Many of us are happy with KDE or Xfce and would like to avoid the bloat and the dependency hell of GNOME for the sake of running a browser.

As an aside, Firefox on Linux also seems to behave quite differently from Firefox on Windows. For example, on Linux Firefox seems to consistently consume more CPU time and memory than on Windows. Some pages are rendered differently on Windows and Linux (perhaps due to the availability, or otherwise, of the fonts requested by the page designer and the rendering infrastructure). I have personally also noticed bug-337093 on Windows but not on Linux.


VU3RDD Gets $2.56 From Donald Knuth

VU3RDD (a.k.a. Ramakrishnan Muthukrishnan) recently received a cheque for $2.56 from Donald Knuth as a reward for spotting a mistake in TAoCP Volume 2. He is the first such person I have personally met. Congratulations!

Mohan Embar

I met Mohan Embar this weekend. He used to maintain the Windows port of GCJ. It was nice to finally be able to associate a face and a voice with the name, since our interaction so far had only been over email. He turned out to be much thinner, more soft-spoken and more boyish than I had imagined.

I think I ended up asking him a bit too much about how he managed to remain a freelance programmer for so long, that too in Milwaukee, since it is something that interests (as well as scares) me.



A Trip to Wayanad

The Labour Day holiday last Tuesday, combined with a day's leave from work on Monday, gave us a four-day weekend that Anusha and I used for a mini vacation in Wayanad in Kerala.

A couple of Anusha's friends, along with their spouses, also joined us on this trip. We drove from Bangalore to Wayanad via Mysore, Nanjangud, Gundlupet and Sultan Battery, taking SH-17 and NH-212 and passing through the Bandipur wildlife sanctuary. The roads were quite good in general and the road from Bangalore to Mysore was excellent in particular. We covered the distance in about 5.5 hours, including a couple of short breaks. We stayed in Edakkal Hermitage, a resort quite close to the Edakkal Caves.

Pre-historic Carvings of the Edakkal Caves

Kerala is one of the most beautiful states in India. Even for someone from a generally green city like Bangalore, the lush and pervasive greenery of Kerala is an exhilarating change. As with The Tall Trees Resorts in Munnar that we had visited earlier, the cottages of Edakkal Hermitage were located on the quiet slopes of a mountain in a way that allows one to soak in the beauty of nature in relative privacy while affording a fantastic view of the valley below. The two resorts were also very similar in the amazing service provided by their respective staff and the delicious food that was served by them.

Our Cottage

On the first day, after having refreshed ourselves and having had lunch at the resort, we drove down to the Pookote Lake. This lake turned out to be a disappointment. It was small, filthy and full of tourists. We didn't stay there long, moving on to a view-point and then returning to our resort. Later in the evening, we had our dinner in a cave in the resort that was beautifully lit by more than a hundred candles.

The next morning, we visited the Edakkal Caves and admired the pre-historic carvings on the walls of the caves. The climb to the caves is a bit difficult and is not everyone's cup of tea. Three of our party, including me, wanted to climb further up the mountain and on to the summit, while the others preferred to stay back on a landing waiting for us to come back. We kept climbing up till we reached a rock-face that was a bit steep. There was a single rope for support and not many footholds. Not being experienced climbers and only having our ordinary shoes for support, we chickened out. We tried to find an alternate route to the summit and turned back on not finding any. I regret this now and wish I had mustered the courage.

Our only consolation was spotting a huge butterfly on the way back. It was the biggest butterfly I had ever seen and must have been about 20 to 25 centimetres (8 to 10 inches) across. We were able to get very close to the butterfly and even touch it - it just moved its wings and continued to sit on its tree.

The Butterfly

Tired from our trek, Anusha and I chose to relax in our cottage that afternoon while the rest of the party drove to the Suchipara waterfalls. In the evening we had our dinner in an amphitheatre in the resort.

The next morning we went for a safari through the Muthanga wildlife sanctuary. The safari was utterly disappointing for the most part with not many animals in sight (as has been our luck on all such occasions), when it suddenly turned rather interesting towards the end. Our way was blocked by a herd of at least seven elephants, one of which was a baby elephant. The driver of the Jeep was evidently quite scared and was ready to scoot at the slightest hint of trouble. Retreating was a bit difficult since the path was rather narrow (you had to either retreat in reverse gear or find a clearing large enough to turn the Jeep around) and there were other Jeeps behind us. We had to wait for about half an hour before the herd moved away and we could proceed. Other than the elephants, we were able to spot a Malabar Squirrel, different types of deer, langurs and peacocks.

The Elephants

That afternoon we headed back to Bangalore, regretting that we could not stay longer and dreading the plunge back into the daily grind of our lives.


Running Java Applets in Internet Explorer on Linux

One of the unfortunate things about the current state of the internet is that some web sites refuse to work with anything other than Internet Explorer (IE). Some of these also require you to run Adobe Flash Player and/or Sun's JVM within IE. Most of these sites can be happily avoided, but some of them just can't, especially when they run important applications within a corporate intranet. This can seriously dampen the enthusiasm of people willing to try out Linux as their primary desktop.

WINE allows you to run many a Windows application natively on Linux, including IE (albeit with a few tweaks). IEs4Linux makes it really simple to install one or more versions of IE on your Linux system, something that is very difficult on Windows itself, if not impossible! You can also view Flash content and run Java applets within such an IE. The latter requires a bit of tweaking with the current release of WINE (0.9.34), if you want to use Sun's JVM instead of that provided by Microsoft, as explained below.

Install a version of IE using IEs4Linux into, say, $HOME/.ies4linux (the default). Assuming that you choose to install only IE 6.0 SP1, IEs4Linux will create a WINE "bottle" named "ie6" within "$HOME/.ies4linux", separate from your regular WINE bottle (which is present by default in "$HOME/.wine"). IEs4Linux can also automatically install Adobe Flash Player along with IE. Run IE at least once to verify that it is working.

Now install the Java Runtime Environment (JRE) making sure that you correctly specify the WINEPREFIX environment variable by pointing it to the IE WINE bottle. For example:

export WINEPREFIX=$HOME/.ies4linux/ie6
wine jre-1_5_0_11-windows-i586-p.exe

For some reason, RegUtils.dll is not correctly copied during the installation of the JRE and therefore you must copy this file from a Windows machine that has exactly the same version of the JRE. This file is usually found in the "bin" sub-folder of the JRE installation folder. Without this file, the Java Control Panel applet will not be able to register Sun's JRE with IE.

Now run the "javacpl" programme found in the "bin" sub-folder of the JRE installation folder. With the current WINE release, this would cause your display to flicker or black out since it does not yet fully support DirectX-based acceleration (but where the maximum development effort currently seems to be directed). To avoid this, you can also invoke the Java Control Panel applet alternatively like this in the "lib" sub-folder of the JRE installation folder:

java -classpath deploy.jar \
-Dsun.java2d.noddraw=true com.sun.deploy.panel.ControlPanel

Go to the "Advanced" tab and uncheck the check-box for "<APPLET> tag support" for "Internet Explorer", apply your changes and close the applet. Restart the applet once again and this time check the check-box, apply your changes and close the applet. You should now be able to see Java applets within IE using Sun's JRE.

If you wish to avoid the flicker/blacking-out of the display whenever you run Java GUI applications, you can either pass the JVM option -Dsun.java2d.noddraw=true to Java applications and applets or disable DirectX-based acceleration for Java 2D completely by looking for a registry key like:


and setting the value of "DXAcceleration" to "0". (WINE includes a "regedit" programme just like its Windows counterpart.)

With IE, Flash and Java applets at your disposal, you are now ready to savour the worst of the internet first-hand on your Linux desktop instead of hearing about it from your friends who use Windows.

(The method outlined here seems to work with WINE 0.9.34 on Fedora Core 6, IE 6.0 SP1 as installed by IEs4Linux 2.0.5 and Sun's JRE 1.5.0_11 - your mileage might vary.)

Update (2007-07-03): With WINE 0.9.40 on Fedora 7 and Sun's JRE 1.5.0_12, I don't see the problem with "RegUtils.dll" and the JRE installs just fine. Another way of avoiding the blackening of the entire desktop while using Java Swing applications (and for getting a much more accurate display) is to enable a "virtual desktop" that will hold your Windows applications. To do this, invoke "winecfg", select "Enable a virtual desktop" under the "Graphics" tab and provide a size for the virtual desktop (say, 800 by 600 or 1024 by 768).

Update (2007-10-31): The default location used by IEs4Linux is $HOME/.ies4linux (notice the period in front of the directory name). Changed the post to use this location instead as people were getting confused by the location used earlier.



NX

NX allows you to remotely access a Linux or Solaris machine and makes applications using the X Window System feel quite responsive even over slow links. It even supports resuming, from anywhere, a session that was suspended for any reason (for example, a broken network connection). The "Free Edition" of NX is free for personal use. The core NX libraries are Free Software. There is also FreeNX, which provides a Free implementation of the NX server licenced under the GPL.

NX performs incredibly well, especially when you compare it to VNC, ssh with compression and X forwarding, etc. The desktop client, especially on Windows, still has a few bugs that are mildly irritating but nothing catastrophic. The sheer improvement in the response of your remote applications more than makes up for these minor shortcomings.


Investing For Retirement

(Note: This post might not be of interest to those not from India.)

Most of us do not even think about planning for retirement until we reach the age of 30. Some of us "live for the moment" and do not care about the future, some of us feel uncomfortable thinking about retirement and, like the proverbial cat, pretend that closing our eyes to the problem will make it go away, and some of us just do not know how to assess our financial requirements three decades into the future.

Unfortunately for us, there is not much government-provided social security in India for old folks, we cannot realistically expect our children to take care of all our expenses, inflation constantly erodes the value of our savings and interest rates on assured-return investments (fixed deposits, EPF, etc.) keep falling. We must have some idea of our needs at the time of our retirement and know how much to invest now to be able to afford the same lifestyle that we are currently used to.

The good news is that we can use basic mathematics to calculate these figures. We will make use of two equations. The first equation (call it "E1") tells us the final amount "S" that an initial amount "P" grows to if it grows at a compounded rate of "r" over "n" years:

S = P × (1 + r)^n

The second equation (call it "E2") tells us the final amount "S" that a regular annual investment of "P" over "n" years gives if it grows at a compounded rate of "r":

S = P × ((1 + r)^n - 1) / r

Note that since the rates are usually quoted as percentages, you need to divide them by 100 to get the value of "r" usable in these equations. For example, a quoted rate of 8% translates to "r" equal to 0.08.
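The two equations translate directly into code; here is a minimal Python sketch (the sample figure in the comment follows the worked example below):

```python
def future_value(P, r, n):
    """E1: the amount that a lump sum P grows to at compound rate r over n years."""
    return P * (1 + r) ** n

def annuity_value(P, r, n):
    """E2: the amount that an annual investment of P grows to at compound rate r over n years."""
    return P * ((1 + r) ** n - 1) / r

# A monthly expenditure of 20,000 rupees inflated at 5% over 30 years:
print(round(future_value(20000, 0.05, 30), 2))  # about 86438.85
```

To find the investment needed to reach a target sum S, simply divide S by future_value(1, r, n) for a one-time investment or by annuity_value(1, r, n) for an annual investment.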

Now assume that you are aged 30 years, plan to retire at the age of 60 years, have a monthly expenditure of 20,000 rupees and the rate of inflation is about 5% on the average. Using E1, you can see that at the time of your retirement 30 years hence, your monthly expenditure would become about 86,438.85 rupees simply because of inflation! That translates to about 10,37,266 rupees in annual expenditure. With old age comes many an ailment for which you would need to spend money - at about 1,00,000 rupees per year at today's rates, you would need about 4,32,194 rupees at the time of your retirement to meet medical expenses. So you would need an annual income of at least 14,69,460 rupees at the time of your retirement just to sustain your current lifestyle and cope with the inevitable medical expenses!

How will you generate an income like this at that time? It is very likely that your appetite for risk would have considerably diminished at that time and you would only be willing to invest for assured-returns and thus lower rates of interest, say, about 5%. This in turn means that you would need a sum of 2,93,89,200 rupees (5% of which is the amount you need per year) at the time of your retirement. You need to have raised about 3 crore rupees by the time you retire just to be able to afford your current lifestyle!

To raise this kind of money, you either need to invest a certain amount annually till the time you retire or do a one-time investment. If you assume an annual return of 8% on your investments, you either need to invest about 2,59,431 rupees annually for the next 30 years (using E2) or about 29,20,620 rupees at a single shot (using E1). If you assume a more aggressive (though riskier) annual return of 15% on your investments, the amounts change to about 67,601 rupees and about 4,43,867 rupees respectively.

If you had started at the age of 25 years, you would have had 35 years to raise the money. At a per-annum return of 8%, you would have either invested about 1,70,533 rupees annually or about 19,87,725 rupees at a single shot. At a per-annum return of 15%, these figures become about 33,352 rupees and about 2,20,680 rupees respectively.

If you postpone investing for your retirement by another five years, you would have 25 years to raise the money. At a per-annum return of 8%, you would need to either invest about 4,02,008 rupees annually or about 42,91,349 rupees at a single shot. At a per-annum return of 15%, these figures become about 1,38,112 rupees and about 8,92,774 rupees respectively.

So the earlier you start investing for your retirement, the better it is for you. The folks at Personalfn.com have a report titled "Retirement Planning and You" that provides a more detailed analysis of this situation as well as the available investment options suitable for retirement planning.

Of course, these are simplified calculations. They do not take into account the fact that you will very likely have to pay income tax on the returns from this investment. They also do not take into account the fact that because of inflation, you would need slightly more and more every year after you retire instead of the fixed amount assumed here. Hopefully the average rate of inflation for India for the next 30 years will be less than the 5% assumed here.


Google Webmaster Central

A post on the Google blog pointed me to the Google Webmaster Central service. To access this service, all you need to have is a Google account (you already have it if you use Gmail, Blogger, Orkut, etc.). You can easily add your site to this service and verify your access to your web site either by uploading a page to your site with a unique name provided by Google or by adding a META tag to the default page of your site with a unique content provided by Google.

Among other things, this service lets you find out who links to your site. The difference between this service and the "link:" operator in Google searches is that this service actually works. The service also lets you know which search queries lead people to your site and how likely they are to hit your site for a given search query. If you have ever wondered how people discover your site, this is a fascinating way of knowing a large part of the answer to that question.

For example, currently these are the top 10 search queries on Google that are likely to lead people to my web site:
  1. gcj
  2. tangram history
  3. ranjit mathew
  4. paradoxical puzzles
  5. gcj windows
  6. hostingzero
  7. matthew symonds economist
  8. how to beat voldemort on harry potter goblet of fire gameboy advance
  9. "* dataone it"
  10. ananth chandrasekharan
I know that I have mentioned each of these terms somewhere on my web site, but I feel a bit sorry for the folks who arrive at my web site following the links from their search results - except for #3 and perhaps #5, they are going to be quite disappointed by the lack of any useful information about the things for which they were searching.

Most of the links to my web site are created due to the signature that I attach to the messages that I send to various mailing lists and that then gets archived all over the place. The second most common reason is that my blog and the blogs of some of my friends have a link to my web site in their "Links" section, which then gets replicated in the individual page for each of their posts. The third most common reason is that my profiles on sundry web sites link to my home page. There are actually very few "third parties" that link to my web site.

Quite sobering.

Of course, some of this information is also provided by the referrer logs and the analysis tools provided by Hosting Zero.


Xfce and KDE

I have started using Xfce instead of KDE as the desktop environment on my Linux PC.

It is easy to compile Xfce 4.4.0. It even has a self-extracting installer that first compiles a GUI installer, which interviews you and then proceeds to automatically configure, compile and install the Xfce modules. The environment is quite configurable, the file manager and the terminal emulator quite usable and it integrates well with an existing KDE installation.

My PC now boots into a usable desktop environment after a cold start far faster than before and there is considerably more free memory and CPU cycles for use by applications. (For some reason, artsd from KDE used to eat up a lot of CPU cycles on my PC.) Everything feels so much snappier now.

KDE has become increasingly bloated over the years. Unlike the Linux kernel, which has also become more bloated over the years but at least makes it easy to leave out unwanted features using "make menuconfig" before compilation, there is no simple way to avoid the increasing bloat in KDE other than to hack the Makefile templates. With each release, each of the KDE core packages seems to pick up more utterly useless, functionally-overlapping and half-developed applications.

KDE has also remained rather buggy throughout the years. Applications crash every now and then for no apparent reason. Watching the numerous panicking messages from applications fly by on the console makes one constantly wonder how the desktop still manages to hold up, and fills one with an urgency to get the work done as soon as possible and close the panicking application before it eventually crashes. About the only "improvement" in newer releases seems to be a dialogue-box asking the user to submit a bug report to the developers when an application crashes. The applications still crash about as often as they used to.

About every two years, I check out the latest release in the last stable KDE branch, hoping that the bugs affecting me would have been fixed by then. They usually are, but their place is then taken by newer bugs. Compiling a KDE release is not a pleasant exercise, and not just because each release takes longer to compile than the previous one on the same hardware (understandable, since there is more code from more applications, and GCC also generally gets slower at compiling C++ with successive releases). Each KDE release seems to require more dependent libraries (or updated versions of existing dependencies), which in turn require yet more dependent libraries - this is the kind of dependency hell that put me off GNOME in the first place. Each KDE release also seems to fail compilation for me in the most basic of ways (for example, ksysguard in 3.5.6 has an unguarded call to strlcpy( )). Sometimes there are issues with the tarballs themselves. For example, the 3.5.6 tarball for kdelibs that I downloaded off a mirror had the timestamps for its files set to 31 October 2007 for some reason, with the result that when it finally finished compiling after several hours on my PC, I executed a "make install" only to discover that it proceeded to compile everything all over again from the beginning! Needless to say, this is very frustrating.

I know that Konstruct is supposed to ease the pain of downloading and compiling a KDE release, including automatically applying fixes for problems discovered only after the release, but I never found its insistence on downloading and compiling dependent libraries, even though I already have the necessary versions, particularly appealing.

Even after switching to Xfce, I still haven't removed KDE from my PC. After all, it does have some nifty applications, not least of which are two of my favourite games, KMahjongg and KSirtet (a Tetris clone). I also like its well-integrated look and feel and its almost infinite configurability. Some day perhaps KDE will iron out its current problems and I will again be tempted to go back to it. For the moment, however, I'm happily sticking with Xfce.

On a side note, has anyone tried to compile the ultra-modular 7.1 release of the X.org server? Every little thing has now been broken into its own little module with the result that there are just too many modules without an easy way of choosing the ones you want (again, like "make menuconfig" for the Linux kernel). There are scripts to automate the download and build, of course, but they still don't seem to make it easy to choose among the modules.



If you are a bibliophile with a non-trivial collection of books, sooner or later you will feel the urge to catalogue it. If you use a computer, you would either use software like Delicious Library or hack up something yourself if you have the skills, the time and the enthusiasm.

LibraryThing is a web site that allows you to maintain this catalogue online, keeping it either publicly visible or private. With a free account, you can catalogue up to 200 books. Since many users catalogue their books like this, you can also use the web site to meet other people whose taste in books is similar to yours, and you can get suggestions about new books to check out based on your existing collection. You can also find lots of reviews of books you intend to check out.

This is not all. Since the most boring part of cataloguing your books is entering all the data (even if you only enter the ISBNs and the software looks up the details itself), they provide a CueCat bar-code scanner for automating this job at a price that is cheap even by Indian standards. I ordered one as a way of showing my support for the site. It is surprisingly easy to get it working - under Linux, if you have USB HID enabled (quite likely), any application can read the scanned-in bar-codes as if they were typed directly at the keyboard. Of course, the CueCat obfuscates its output so that applications cannot readily make sense of the data, but it is very easy to get back the plain text or to "declaw" the device altogether.
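For the curious, the widely documented declawing transformation goes roughly like this: the scanner emits a base-64-style encoding whose decoded bytes have been XORed with the letter 'C'. The following Java sketch shows the idea (the class and method names are my own, and the encoder exists only so the decoder can be sanity-checked with a round trip - a real CueCat produces the encoded form itself):

```java
// A sketch of the CueCat "declawing" transformation: six bits per
// symbol from the alphabet below, reassembled into bytes and XORed
// with 'C'. Class and method names here are my own invention.
public class CueCatCodec {
    private static final String ALPHABET =
        "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789+-";

    // Decode one period-separated field of the scanner's output.
    public static String decodeField(String field) {
        StringBuilder out = new StringBuilder();
        int buffer = 0, bits = 0;
        for (char c : field.toCharArray()) {
            int v = ALPHABET.indexOf(c);
            if (v < 0) continue;               // ignore unexpected characters
            buffer = (buffer << 6) | v;        // accumulate six bits per symbol
            bits += 6;
            if (bits >= 8) {
                bits -= 8;
                int b = (buffer >>> bits) & 0xFF;
                if (b != 0) out.append((char) (b ^ 'C')); // skip padding, un-XOR
            }
        }
        return out.toString();
    }

    // Encoder for round-trip testing: XOR with 'C', then emit six bits
    // per symbol, padding the final symbol with zero bits.
    public static String encodeField(String plain) {
        StringBuilder out = new StringBuilder();
        int buffer = 0, bits = 0;
        for (int i = 0; i < plain.length(); i++) {
            buffer = (buffer << 8) | ((plain.charAt(i) ^ 'C') & 0xFF);
            bits += 8;
            while (bits >= 6) {
                bits -= 6;
                out.append(ALPHABET.charAt((buffer >>> bits) & 0x3F));
            }
        }
        if (bits > 0) out.append(ALPHABET.charAt((buffer << (6 - bits)) & 0x3F));
        return out.toString();
    }
}
```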

LibraryThing understands the obfuscated output of the CueCat and it supports a "bulk import" feature - you scan the ISBN bar-codes of your books into a text file, upload the file and LibraryThing uses Amazon.com, the Library of Congress, etc. to query the details of the books and automatically add them to your online library. The process is so simple that I was able to scan in two shelves of books in under 10 minutes, upload the file to LibraryThing and see my online library populated automatically over the next three days! The reason it took three days was that LibraryThing is nice enough to throttle its querying of free online catalogues so as to not overwhelm them with such requests.

When she saw that I had bought a funny-looking bar-code scanner just for cataloguing my books, Anusha gave me one of those "What a weirdo!" looks. She had earlier burst out laughing when I had said that I was toying with the idea of getting one for myself. However, bar-code scanning is so much fun that she was soon merrily scanning in books with me. Her criticism is considerably muted now.


"Concepts, Techniques, and Models of Computer Programming"

I just finished reading "Concepts, Techniques, and Models of Computer Programming" by Peter Van Roy and Seif Haridi. If you are the kind of person who thinks that "The Art of Computer Programming" and "Structure and Interpretation of Computer Programs" are good books, then you owe it to yourself to check this book out.

There is a slightly-dated version of the book available online (PDF, 3.4 MB), if you want to preview some of the content before buying it. There is also an Indian edition of the book published by Prentice Hall of India (ISBN: 81-203-2685-7) and priced at Rs 450. The book's web site links to some reviews and you can also read my review of the book.


Local Variables in Java

The other day I was reviewing some Java code written by a colleague. I noticed that he was in the habit of declaring all the variables used by a method at the beginning of the method body rather than in the places where they were first used. I pointed out that declaring a variable only when it is first required makes the code more readable.

While he agreed to change the style of his code, he was still reluctant to move the declaration of a variable used only within a loop from outside it to inside it. For example, he was reluctant to change:

String s;
for( int i = 0; i < 10; i++) {
    s = String.valueOf( i);
}

to:

for( int i = 0; i < 10; i++) {
    String s = String.valueOf( i);
}

(Note that the braces in the second variant are not optional: Java does not allow a local variable declaration to be the sole statement of an unbraced loop.)

He believed that only one variable is created in the former case while 10 variables are created in the latter - clearly it is more efficient to declare a single variable outside the loop and keep reusing it inside the loop!

I then pointed out the section in the JVM specification that says that a JVM uses a fixed-size array for storing the values of local variables used in a method and each local variable maps to an index in this array. A Java compiler calculates the size of this array during the compilation of a method and declares it in the generated bytecode for the method.

Since he was still sceptical, I compiled both the variants to bytecode, used "javap -c" to produce the disassembled code and used diff to show that the generated code was the same in both the cases (except for the indices used for s and i). I then used a simple modification of the JVM Emulator Applet written by Bill Venners, run as a standalone application, to show the bytecode variants in execution and to demonstrate that the size of the local variables array really remains constant throughout.
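For reference, here are the two variants written as complete, compilable methods (the class name is mine); compiling this class and running "javap -c" on it lets you repeat the comparison yourself:

```java
// Both methods compile to bytecode with the same number of local
// variable slots; only the slot indices assigned to s and i differ.
public class Locals {
    static void declaredOutside() {
        String s;
        for( int i = 0; i < 10; i++) {
            s = String.valueOf( i);
        }
    }

    static void declaredInside() {
        for( int i = 0; i < 10; i++) {
            String s = String.valueOf( i);
        }
    }
}
```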

He was finally convinced.

On the other extreme, I have another colleague who is in the masochistic habit of introducing new scopes to isolate the local variables used only in a section of a method's body. That is, something like:

{
    Foo x = wombat.snafu( );
    // Use x here.
}

{
    Bar y = new Bar( a, b, c);
    // Use y here.
}