*** MOVED ***

NOTE: I have merged the contents of this blog with my web-site. I will not be updating this blog any more.

2007-02-06

Google Webmaster Central

A post on the Google blog pointed me to the Google Webmaster Central service. To access this service, all you need is a Google account (you already have one if you use Gmail, Blogger, Orkut, etc.). You can easily add your site to this service and verify your access to your web site either by uploading a page to your site with a unique name provided by Google or by adding a META tag to the default page of your site with a unique content provided by Google.

Among other things, this service lets you find out who links to your site. The difference between this service and the "link:" operator in Google searches is that this service actually works. The service also lets you know which search queries lead people to your site and how likely they are to hit your site for a given search query. If you have ever wondered how people discover your site, this is a fascinating way of knowing a large part of the answer to that question.

For example, currently these are the top 10 search queries on Google that are likely to lead people to my web site:
  1. gcj
  2. tangram history
  3. ranjit mathew
  4. paradoxical puzzles
  5. gcj windows
  6. hostingzero
  7. matthew symonds economist
  8. how to beat voldemort on harry potter goblet of fire gameboy advance
  9. "* dataone it"
  10. ananth chandrasekharan
I know that I have mentioned each of these terms somewhere on my web site, but I feel a bit sorry for the folks who arrive at my web site following the links from their search results - except for #3 and perhaps #5, they are going to be quite disappointed by the lack of any useful information about the things for which they were searching.

Most of the links to my web site are created due to the signature that I attach to the messages that I send to various mailing lists and that then gets archived all over the place. The second most common reason is that my blog and the blogs of some of my friends have a link to my web site in their "Links" section, which then gets replicated in the individual page for each of their posts. The third most common reason is that my profiles on sundry web sites link to my home page. There are actually very few "third parties" that link to my web site.

Quite sobering.

Of course, some of this information is also provided by the referrer logs and the analysis tools provided by Hosting Zero.

2007-02-03

Xfce and KDE

I have started using Xfce instead of KDE as the desktop environment on my Linux PC.

It is easy to compile Xfce 4.4.0. It even has a self-extracting installer that first builds a GUI installer, which interviews you and then proceeds to automatically configure, compile and install the Xfce modules. The environment is quite configurable, the file manager and the terminal emulator are quite usable, and it integrates well with an existing KDE installation.

My PC now boots into a usable desktop environment after a cold start far faster than before, and considerably more memory and CPU cycles are left free for applications. (For some reason, artsd from KDE used to eat up a lot of CPU cycles on my PC.) Everything feels so much snappier now.

KDE has become increasingly bloated over the years. Unlike the Linux kernel, which has also become more bloated over the years but at least makes it easy to leave out unwanted features using "make menuconfig" before compilation, there is no simple way to avoid the increasing bloat in KDE other than to hack the Makefile templates. With each release, each of the KDE core packages seems to pick up more utterly useless, functionally-overlapping and half-developed applications.

KDE has also remained rather buggy throughout the years. Applications crash every now and then for no apparent reason. Watching the numerous panicky messages from applications fly by on the console makes one constantly wonder how the desktop still manages to hold up, and fills one with an urgency to get the work done as soon as possible and close the offending application before it eventually crashes. About the only "improvement" in newer releases seems to be a dialogue box asking the user to submit a bug report to the developers when an application crashes. The applications still crash about as often as they used to.

About every two years, I check out the latest release in the most recent stable KDE branch. I do this with the hope that the bugs affecting me will have been fixed by then. They usually have been, but their place is then taken by newer bugs. Compiling a KDE release is not a pleasant exercise, and not just because each release takes longer to compile than the previous one on the same hardware (understandable, since there is more code from more applications and GCC also generally keeps getting slower at compiling C++ with successive releases). Each KDE release seems to require more dependent libraries (or updated versions of existing dependencies), which in turn require yet more dependent libraries - this is the kind of dependency hell that put me off GNOME in the first place. Each KDE release also seems to fail compilation for me in the most basic of ways (for example, ksysguard in 3.5.6 has an unguarded call to strlcpy( )). Sometimes there are issues with the tarballs themselves. For example, the 3.5.6 tarball for kdelibs that I downloaded off a mirror had the timestamps of its files set to 31 October 2007 for some reason, with the result that after the compilation finally finished several hours later on my PC, I executed a "make install" only to discover that it proceeded to compile everything all over again from the beginning! Needless to say, this is very frustrating.

I know that Konstruct is supposed to ease the pain of downloading and compiling a KDE release, including automatically applying fixes for problems discovered only after the release, but I never found its insistence on downloading and compiling dependent libraries, even when I already have the necessary versions, particularly appealing.

Even after switching to Xfce, I still haven't removed KDE from my PC. After all, it does have some nifty applications, not least of which are two of my favourite games, Kmahjongg and Ksirtet (a Tetris clone). I also like its well-integrated look and feel and its almost infinite configurability. Some day perhaps KDE will be able to iron out its current problems and I will again be tempted to go back to it. For the moment, however, I'm happily sticking with Xfce.

On a side note, has anyone tried to compile the ultra-modular 7.1 release of the X.org server? Every little thing has now been broken into its own little module with the result that there are just too many modules without an easy way of choosing the ones you want (again, like "make menuconfig" for the Linux kernel). There are scripts to automate the download and build, of course, but they still don't seem to make it easy to choose among the modules.

2007-02-02

LibraryThing

If you are a bibliophile with a non-trivial collection of books, sooner or later you will feel the urge to catalogue it. If you use a computer, you would either use software like Delicious Library or hack up something yourself if you have the skills, the time and the enthusiasm.

LibraryThing is a web site that lets you maintain this catalogue online, with your catalogue either publicly visible or private. With a free account, you can catalogue up to 200 books. Since many users catalogue their books this way, you can also use the web site to meet other people whose taste in books is similar to yours, and you can get suggestions about new books you might want to check out based on your existing collection. You can also find lots of reviews of the books you intend to check out.

This is not all. Since the most boring part of cataloguing your books is entering all the data (even if you only enter the ISBNs and the software then looks up the details itself), they provide a CueCat bar-code scanner for automating this job at a price that is cheap even by Indian standards. I ordered one as a way of showing my support for the site. It is surprisingly easy to get it working - under Linux, if you have USB HID support enabled (quite likely), any application can read the scanned-in bar-codes as if they were typed directly at the keyboard. Of course, the CueCat obfuscates its output so that applications cannot readily make sense of the data, but it is very easy to get back the plain text or to "declaw" the device altogether.

LibraryThing understands the obfuscated output of the CueCat and it supports a "bulk import" feature - you scan the ISBN bar-codes of your books into a text file, upload the file, and LibraryThing uses Amazon.com, the Library of Congress, etc. to look up the details of the books and automatically add them to your online library. The process is so simple that I was able to scan in two shelves of books in under 10 minutes, upload the file to LibraryThing and watch my online library get populated automatically over the next three days! It took three days because LibraryThing is nice enough to throttle its queries to the free online catalogues so as to not overwhelm them with such requests.

When she saw that I had bought a funny-looking bar-code scanner just for cataloguing my books, Anusha gave me one of those "What a weirdo!" looks. She had earlier burst out laughing when I had said that I was toying with the idea of getting one for myself. However, bar-code scanning is so much fun that she was soon merrily scanning in books with me. Her criticism is considerably muted now.

2007-01-21

"Concepts, Techniques, and Models of Computer Programming"

I just finished reading "Concepts, Techniques, and Models of Computer Programming" by Peter Van Roy and Seif Haridi. If you are the kind of person who thinks that "The Art of Computer Programming" and "Structure and Interpretation of Computer Programs" are good books, then you owe it to yourself to check this book out.

There is a slightly dated version of the book available online (PDF, 3.4 MB), if you want to preview some of the content before buying it. There is also an Indian edition of the book published by Prentice Hall of India (ISBN: 81-203-2685-7) and priced at Rs 450. The book's web site links to some reviews and you can also read my review of the book.

2007-01-20

Local Variables in Java

The other day I was reviewing some Java code written by a colleague. I noticed that he was in the habit of declaring all the variables used by a method at the beginning of the method body rather than in the places where they were first used. I pointed out that declaring a variable only when it is first required makes the code more readable.

While he agreed to change the style of his code, he was still reluctant to move the declaration of a variable used only within a loop from outside it to inside it. For example, he was reluctant to change:

String s;
for( int i = 0; i < 10; i++)
{
s = String.valueOf( i);
}

to:

for( int i = 0; i < 10; i++)
{
String s = String.valueOf( i);
}

He believed that only one variable is created in the former case while 10 variables are created in the latter - clearly it is more efficient to declare a single variable outside the loop and keep reusing it inside the loop!

I then pointed out the section in the JVM specification which says that a JVM uses a fixed-size array for storing the values of the local variables used in a method, with each local variable mapping to an index in this array. A Java compiler calculates the size of this array when it compiles a method and records it in the generated bytecode for that method.

Since he was still sceptical, I compiled both the variants to bytecode, used javap -c to produce the disassembled code and used diff to show that the generated code was the same in both the cases (except for the indices used for s and i). I then used a simple modification of the JVM Emulator Applet written by Bill Venners, running it as a standalone application, to show the bytecode variants in execution and to demonstrate that the size of the local-variables array really remains constant throughout.

He was finally convinced.
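For reference, the check described above can be sketched as a small shell session. The file and class names here are my own hypothetical choices, not the ones I actually used:

```shell
# Outside.java declares s before the loop; Inside.java declares it inside.
cat > Outside.java <<'EOF'
public class Outside {
    static void run() {
        String s;
        for (int i = 0; i < 10; i++) {
            s = String.valueOf(i);
        }
    }
}
EOF
cat > Inside.java <<'EOF'
public class Inside {
    static void run() {
        for (int i = 0; i < 10; i++) {
            String s = String.valueOf(i);
        }
    }
}
EOF
javac Outside.java Inside.java
# Disassemble both, normalising the class names so only real
# differences show up in the diff.
javap -c Outside | sed 's/Outside/CLASS/g' > outside.txt
javap -c Inside  | sed 's/Inside/CLASS/g'  > inside.txt
diff outside.txt inside.txt || true
```

The diff should show differences only in the local-variable slot indices assigned to s and i; the instruction sequences themselves are the same, so both variants use the same fixed-size local-variables array.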

At the other extreme, I have another colleague who is in the masochistic habit of introducing new scopes to isolate the local variables used only in one section of a method's body. That is, something like:

{
Foo x = wombat.snafu( );
// Use x here.
...
}
{
Bar y = new Bar( a, b, c);
// Use y here.
...
}

2007-01-11

Generics in Java and Return Types

Consider a class C that implements an interface I.

While the following is allowed:

I foo( )
{
return new C( );
}

the following is not:

ArrayList<I> foo( )
{
return new ArrayList<C>( );
}

In the first case, callers expect to get an object implementing the interface I and therefore it is correct for foo( ) to return an object of class C. In the second case, callers expect to get an ArrayList containing objects implementing the interface I and therefore it should again be correct for foo( ) to return an ArrayList containing objects of class C, right?

Consider what happens if the compiler were to allow such code to compile. Callers could then add objects of another class X, which also implements the interface I, to the returned ArrayList, with the result that the original ArrayList, which is only supposed to contain objects of class C, now also contains objects of an incompatible class X!
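This unsoundness is easy to demonstrate with an unchecked cast, which simulates what the compiler would silently be doing if it allowed the conversion. The sketch below uses minimal stand-ins for the hypothetical I, C and X:

```java
import java.util.ArrayList;
import java.util.List;

public class HeapPollutionDemo {
    interface I { }
    static class C implements I { }
    static class X implements I { }

    // Returns the runtime class names found in a list that the type
    // system believes contains only objects of class C.
    static List<String> pollute() {
        ArrayList<C> cs = new ArrayList<>();
        cs.add(new C());
        // Pretend the compiler had allowed an ArrayList<C> to be used
        // where an ArrayList<I> was expected; the unchecked cast below
        // simulates that forbidden conversion.
        @SuppressWarnings("unchecked")
        ArrayList<I> is = (ArrayList<I>) (ArrayList<?>) cs;
        is.add(new X());  // perfectly legal on an ArrayList<I>...
        // ...but now the "list of C" also contains an X:
        List<String> names = new ArrayList<>();
        for (Object o : new ArrayList<Object>(cs)) {
            names.add(o.getClass().getSimpleName());
        }
        return names;
    }

    public static void main(String[] args) {
        System.out.println(pollute());  // prints [C, X]
        // A later "C c = cs.get(1);" would throw ClassCastException,
        // far away from the code that actually caused the problem.
    }
}
```

The failure is deferred: the bad element gets in silently, and the ClassCastException only surfaces much later, when someone reads the element back as a C.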

A better way to define the second case is:

ArrayList<? extends I> foo( )
{
return new ArrayList<C>( );
}

(You can also return an ArrayList<I> instead, but that loosens up the definition of the returned object.)
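A minimal sketch of the wildcard version (foo( ), I and C being the hypothetical names from above):

```java
import java.util.ArrayList;
import java.util.Arrays;

public class WildcardDemo {
    interface I { String name(); }
    static class C implements I {
        public String name() { return "C"; }
    }

    // The caller sees "some list of things that are at least I".
    static ArrayList<? extends I> foo() {
        return new ArrayList<C>(Arrays.asList(new C(), new C()));
    }

    public static void main(String[] args) {
        ArrayList<? extends I> list = foo();
        for (I element : list) {        // reading elements as I is fine
            System.out.println(element.name());
        }
        // list.add(new C());  // does NOT compile: nothing (except null)
        //                     // can be added through a "? extends I"
        //                     // reference, which is what keeps it safe
    }
}
```

The commented-out add( ) is the point: through a reference of type ArrayList&lt;? extends I&gt;, the compiler rejects every attempt to insert an element, so the scenario described above cannot arise.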

Thanks to Steve for clearing up my muddied thinking.

2006-12-27

Utterly Disgusting

It is utterly disgusting that Microsoft so readily bends over backwards to please the immensely greedy folks from the entertainment industry.

For example:

2006-12-25

On The One Hand

On the one hand, a cute baby...

2006-12-21

"LtU Books" In India

There are some books on computer science that I had never heard of until I started reading "Lambda the Ultimate" (LtU). I found these books being mentioned and recommended in various posts and forum topics on LtU from time to time. As I found out more about them, I became interested in reading them. Since they were relatively obscure, I had no hope of finding them here in India, and the prospect of having to fork out hefty sums of money to buy them via something like Amazon.com made me rein in my normal impulse of buying an interesting book as soon as I come across it.

Imagine my delight then, when I stumbled upon the fact that all of these books had an Indian reprint available at an extremely affordable price. Incidentally, all of these books were originally published by MIT Press and the Indian reprints are published by Prentice-Hall of India.

Here are the "LtU Books" along with the ISBNs of their Indian reprints and the corresponding price:
  1. "Types and Programming Languages" by Benjamin Pierce, ISBN: 81-203-2462-5, 350 rupees.
  2. "Concepts, Techniques, and Models of Computer Programming" by Peter Van Roy and Seif Haridi, ISBN: 81-203-2685-7, 450 rupees.
  3. "How to Design Programs" by Matthias Felleisen, Robert Bruce Findler, Matthew Flatt and Shriram Krishnamurthi, ISBN: 81-203-2461-7, 325 rupees.
  4. "Advanced Topics in Types and Programming Languages" by Benjamin Pierce, ISBN: 81-203-2792-6, 425 rupees.
  5. "The Elements of Computing Systems" by Noam Nisan and Shimon Schocken, ISBN: 81-203-2885-X, 195 rupees.
If you are not able to get these books in a local book store in your city, you can order them directly via the web site of Prentice-Hall of India. If you stay in Bangalore, you can also contact Suman M. (msuman AT phindia DOT com) to obtain these books directly from Prentice-Hall.

2006-12-14

Articles by Dheeraj Sanghi

Dheeraj Sanghi is a professor in the Computer Science and Engineering (CSE) department at IIT Kanpur. Our batch, the CSE class of 1996, studied "Computer Networks" under him and he was an active patron of the Association of Computer Activities (ACA).

Recently I stumbled upon a collection of his articles on various topics, including career counselling for students who want to study CS in India, improvements to the undergraduate programme in IIT Kanpur, views on the recent move by the Indian government to impose quotas in the IITs, etc.

Though I found myself disagreeing with some of his points, I found these articles quite interesting as they touch upon topics that I have been thinking about in recent times.