Using internal HDD as external HDD

[Disclaimer: Please exercise adequate precaution if you try to duplicate anything given below.]

For a long time I had been chewing on an idea: something that would let me connect my old hard disks directly, without having to boot the CPU. My old desktop had two hard drives, an 8.4GB one and an 80GB one, and they had been lying unused since I shelved the machine about a year back.
More than a need, the thought fascinated me.

I remembered having connected IDE hard disks to the desktop whenever my friends used to give me their machines for repair. It was a simple task of jumpering the new drive as slave and using the CD-ROM’s IDE cable to connect the additional drive. Then I would enter the BIOS setup, get the new drive detected, and voila!, everything worked.

The requirement was different here in that I wanted to be able to connect the hard drives without having to fiddle with the internals of the laptop. A few days ago, I came across a post by Joel in which he mentioned being able to connect his SATA HDD using a docking station. This was the rise of the phoenix.

I began looking for a similar option for IDE, and a random search got me a number of options. By the next day, I was the proud owner of a USB-to-IDE connector cable, which additionally supports SATA HDDs. Connecting and getting the HDD recognized was a no-brainer. The only thing to take care of was that the jumper had to be set to master (see illustration here). The partitions were recognized with ease, except that, since I was on Windows, the Linux (ext3) partition had some trouble being recognized (more about that later).
I repartitioned the disk and it was good to go afresh!

I was, however, more interested in getting both the HDDs to work simultaneously. Here there were some issues: firstly, the USB-to-IDE cable had come with a power adapter which had only a single 12V DC output, and so could power only one drive at a time. Secondly, I needed an option which is extensible — say, if in future I wanted to add another device (for example a CD-ROM drive, a hypothetical scenario), it should be possible.
Thus came the need for a power supply which would help me connect a number of devices at the same time. Yes, SMPS it is. So I took the SMPS out of the CPU, and thus came an interesting issue: I had no clue how to power it on. I had an idea that it had something to do with the 20-pin female ATX connector, which plugs into a slot on the motherboard. I also obviously didn’t want to short an incorrect combination and blow up the SMPS — that would have spoiled all the fun! After a considerable amount of googling, I learnt that the green Power-on wire (pin 14 of the ATX connector) had to be shorted to an adjacent black Ground wire in order to simulate a power-on signal (see this). I plugged in the SMPS and shorted the two pin slots, which got the SMPS working. [It was, however, warned on a number of sites and forums that there should always be some load on the SMPS before that is done. At least one HDD.]

The above got both the hard disks running. The only issue I hadn’t anticipated was that the USB-to-IDE connector as well as the IDE cable all end in female ports — this kept me from getting both HDDs recognized at the same time through the typical master-slave combination (i.e., connecting the IDE cable to the USB-to-IDE connector). I’m still on the lookout for an IDE cable that has one male port.

I was also concerned about the HDD overheating because it was out in the open, so the heat sinking provided by the CPU case was gone. The processor has a heat sink attached to it, so I thought it would be a good idea to use the same for cooling the HDD. I got the heat sink off the processor and the motherboard. The other good thing about this heat sink was that it had a fan attached to it. Once I got it off, I detached the fan, cleaned it thoroughly, and fitted it back. Then came the next issue: getting the fan running. The fan had a 3-pin female connector, and I wasn’t sure which of the pins carried power. The fan label said nothing about it, or about the polarity. What it did say was that it needed 12V DC, which was a relief because I had plenty of that to spare. 🙂 The SMPS also had two 4-pin connectors (for devices like floppy drives etc.). The issue was how to connect this 4-pin female connector to the 3-pin female one (the fan). Apparently there are 3-pin to 4-pin adapters available, but I didn’t have the patience to wait another day to look for one.

The next round of googling led me to understand that the three wires are classified as: Yellow (+12V), Black (Ground), and Green (Signal). Since the 4-pin SMPS connector (known as a Berg connector) also had yellow and black wires, I used an old wire to connect the corresponding pins between the two connectors. The result was a delight — now my HDDs were running, and my heat sink (if at all it’s serving any purpose) sits atop one of the HDDs, with the fan running over it. As of now this arrangement looks a bit flimsy — with the heat sink just kept over it, but I guess it’s much better than having the HDD without it. Also, it seems there’s a thermal paste which could help me affix the heat sink, but that’s low priority.

OK, coming back to the issue of repartitioning the HDD — the ext3 partition apparently had some issue while being formatted using Partition Magic 8.0 (WinXP). I switched to Ubuntu, where all the Linux partitions got recognized, and Ubuntu has a wonderful utility called GParted, which formatted the ext3 partition without any hassle.

All of the above left a much more satisfied me! 🙂

Along came Firefox 3.0b5…

Upgraded to the much-awaited Hardy Heron (Ubuntu 8.04) over the weekend. The upgrade was smooth, though it took me a long time. Installing from scratch, I realised, would’ve been much easier. Also, I’ve tried everything possible (or so I believe) to get the sound working, but I guess there’s still something I’m missing. Anyway, I’ll keep trying.

Anyways, I wanted to talk about Firefox rather than Hardy in this post. They bundled Firefox 3 Beta 5 with the OS, and I instantly fell in love with it. One of the best things (actually, the credit should go to the del.icio.us team) is how easily it integrates all my bookmarks. They’re much more convenient to manage, plus I have the option of bookmarking everything directly to del.icio.us as well.
There’s also a del.icio.us toolbar, which shows your recently bookmarked links and can be customized according to one’s tag preferences.
Moreover, I can now bookmark any link by simply clicking a star icon (akin to Gmail) in the address bar. That’s so cool!

Another thing that was irksome in earlier versions of Firefox was that the bookmarks drop-down would close when one deleted a bookmark. That’s been taken care of.

I’m sure there’d be a host of other features. Hmmm, browsing has never been this good an experience.

Links:

  1. http://www.mozilla.com/en-US/firefox/all-beta.html : Get Firefox 3 Beta 5
  2. http://www.ubuntu.com/getubuntu/download : Get Hardy Heron
  3. http://www.mozilla.com/en-US/firefox/3.0b5/releasenotes/#whatsnew : Firefox 3 Beta 5 feature info
  4. http://del.icio.us/about/ : del.icio.us info

Continuous Integration with CruiseControl

Continuous Integration (CI), as the name suggests, is a software development methodology where the emphasis is on integrating the system on the fly. That is to say, the practice of treating component or module integration as a one-time event is discouraged. Instead, component or module integration forms an integral part of the build cycle.

The concept of continuously integrating the code was introduced as an offshoot of Extreme Programming by Martin Fowler and Kent Beck. In his paper, Fowler talks about the advantages of the agility that comes in with CI, along with the benefit of always having a running version of the code (integrated, built, and tested) at everyone’s disposal. Fowler also promotes the idea of integration being a non-event, as against the traditional one-time integration and build activity, in which a number of unforeseen dependencies might cause problems.

CI is a concept which does not mandate a tool for its implementation. Yet there are numerous tools available, many of them open source, which facilitate the implementation of CI with considerable ease. Whether to manage CI manually or with the help of a tool is at the individual’s discretion, and could be based upon various factors like time and resources.
It should be noted though that CI should be seen as a mindset, an approach, rather than a tool or a framework. Some people have been quite vocal about this.

Of the several tools available for CI, we’ll discuss one called CruiseControl (CC), introduced by ThoughtWorks. CC allows the implementation of all the aspects of CI that Fowler mentioned in his paper, including self-testing code (a term coined by Fowler), artifact archiving, and seamless integration with version control environments, to name a few.

Let’s now go through each of the practices that CI suggests, and evaluate how a tool like CruiseControl could help us with them.

First, though, a general note: one of the key features of CC is its ability to handle multiple project configurations via a single interface. Users can switch between different projects and handle each separately. Each project has a separate build in the CC work directory, so project-level alterations can be managed independently.

Integration with Versioning Environment
CI seems apt for development structures where we typically have relatively small teams working on a project. In fact, one of the tenets of XP is that the project team should be small (~5-10 people). Since CI strongly emphasises continuously checking in code, we could, taking a wider perspective, see this as a way to measure individual throughput, with the added advantage of highlighting the concept of code modularisation and embedding it in developers’ minds. CC integrates with most of the popular (and proprietary) versioning tools.

Automated Builds
When we say continuously checking in, it has to be clearly understood that the main line of code (trunk/root) must always have the latest build, which should _never_ break. The importance of having non-breaking code in the repository mainline cannot be emphasised enough. It is implicit and understood that code checked into the repository mainline should never break. Since each committer would check in code only after testing — first in his/her local environment, and then after integrating with the latest code from the trunk — the chances of encountering surprises in the main build are greatly reduced.
Such test-driven development is strongly recommended by CI.

Integrated Tests
CC allows JUnit test cases (which could be thought of as sanity tests[1]) to be run as part of the main build as soon as a commit is detected. Once the CC build is over, the results are published, and can also be mailed to all the concerned parties. Such features help in the timely detection of broken builds, and in initiating remedial action where required.
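For instance, the project’s Ant build could expose a test target that runs the JUnit suite and writes XML reports, which CC then merges into its build log (a sketch; the target names, the test.classpath reference, and the directory layout are assumptions, not anything standard):

<target name="test" depends="compile">
    <mkdir dir="target/test-results"/>
    <junit printsummary="yes" haltonfailure="no" failureproperty="tests.failed">
        <classpath refid="test.classpath"/>
        <formatter type="xml"/>
        <batchtest todir="target/test-results">
            <fileset dir="test" includes="**/*Test.java"/>
        </batchtest>
    </junit>
    <!-- fail the Ant build (and hence the CC build) if any test failed -->
    <fail if="tests.failed" message="One or more unit tests failed."/>
</target>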

Making the Mainline Unbreakable
The role of versioning tools in a software project is indispensable. This is true even for projects where CI is not adopted. Fowler strongly emphasises the idea of putting everything useful in the repository. Needless to say, by ‘everything useful’ we exclude things like executables and generated files. The idea is to be able to build the entire system on a virgin machine[2], without any hiccups.
All the dependencies required to build the project should be available within the repository, so that anyone can perform a basic checkout of the repository mainline and build the entire system.

With CC, this recommendation of CI can be addressed by creating project-specific builds that call an in-project build of the entire system after performing an update from the versioning environment. That is, the build should first perform a versioning-system update, followed by a build of the entire system. Any artifacts generated can then be archived within the CC file structure for further use.
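Putting the pieces together, a minimal CC config.xml could look something like the sketch below (modelled loosely on the getting-started guide in the references; the project name, paths, and the use of Subversion and Ant are assumptions, so adapt them to your own setup):

<cruisecontrol>
  <project name="myproject">

    <listeners>
      <currentbuildstatuslistener file="logs/myproject/status.txt"/>
    </listeners>

    <bootstrappers>
      <!-- update the working copy before every build attempt -->
      <svnbootstrapper localWorkingCopy="checkout/myproject"/>
    </bootstrappers>

    <!-- watch Subversion for new commits; build once things go quiet -->
    <modificationset quietperiod="60">
      <svn localWorkingCopy="checkout/myproject"/>
    </modificationset>

    <!-- poll every 5 minutes and delegate the actual build to the project's own Ant file -->
    <schedule interval="300">
      <ant buildfile="checkout/myproject/build.xml" target="test"/>
    </schedule>

    <!-- merge the JUnit XML reports into the CC build log -->
    <log>
      <merge dir="checkout/myproject/target/test-results"/>
    </log>

    <publishers>
      <onsuccess>
        <!-- archive the built artifact for later use -->
        <artifactspublisher dest="artifacts/myproject"
                            file="checkout/myproject/dist/myproject.jar"/>
      </onsuccess>
      <!-- an <htmlemail> or <email> publisher could be added here to mail the results -->
    </publishers>

  </project>
</cruisecontrol>

With something like this in place, a commit triggers an update of the working copy, a full build with tests, publication of the results, and archiving of the artifact, which is pretty much the cycle described above.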

Conclusion
CI is a software practice that can help us significantly reduce the risks in the software development cycle. The advantages of this methodology may not be immediate, but there are considerable long-term benefits, along with the added advantage of influencing and aligning the developers’ mindset to continuously strive for integration.
CI does not depend upon a tool for its implementation. However, of the many tools aimed at helping with CI, CruiseControl is one of the most popular, because it allows seamless handling of the various aspects of CI.

References

  1. http://martinfowler.com/articles/continuousIntegration.html : Martin Fowler’s paper on CI
  2. http://jamesshore.com/Blog/Continuous-Integration-is-an-Attitude.html : James Shore’s write-up on CI
  3. http://jamesshore.com/Agile-Book/how_to_be_agile.html : James Shore on How to be Agile
  4. http://extremeprogramming.org : Extreme Programming
  5. http://cruisecontrol.sourceforge.net/gettingstarted.html : Get Started on CruiseControl
  6. http://docs.huihoo.com/cruisecontrol/DrivingOnCruiseControl_Part1.html : CruiseControl installation reference
  7. http://en.wikipedia.org/wiki/Continuous_Integration : List of various CI tools

[1] or, as Fowler puts it, since the requirements of self-testing code are weaker in CI (they have more to do with Test-Driven Development), testing during CI is more about exploring the design of the system than about finding bugs
[2] a machine where the basic environment (OS, JVM, server etc.) is already installed

Ubuntu 'Gutsy' Server on VirtualBox

Thanks to peeyush and his post about VirtualBox, I could install Ubuntu desktop _and_ server (both 7.10) on the office machine. Now, since I work for an organisation where installations-that-require-admin-privileges are not appreciated at all, this was a big feat! (Though I have the admin privileges too. *wink*)

About VirtualBox:
VirtualBox provides a virtual environment to “install” a range of other OSs on top of your base OS. The base OS is referred to as the host OS; the “other OS” is called the guest OS.
The good thing about VirtualBox is that it does away with the need to have a CD/DVD drive on your machine, by giving you the option to boot directly from CD image files (.iso), which was very helpful in my case. (Yes, we don’t have CD-ROM drives either!)
Secondly, switching between the real (host) and virtual (guest) OSs is just a touch of a button — which I liked a lot.
Apart from the above, there are a number of other features aimed at making the integration of guest and host look seamless.

OK, after the VirtualBox installation, I faced some minor issues installing the Ubuntu server. Apparently, the issue is that the non-availability of the PAE module in the simulated environment causes a crash when the machine reboots after the (server) installation. This has been reported as a bug in VirtualBox, but it seems both VirtualBox and Ubuntu are pointing fingers at each other.
Some forums, however, suggested a solution which worked for me, and which I present here so that Linux newbies (like yours truly) don’t have to go through the ordeal.

The issue:
When the server boots from the hard-disk, the following fatal error is thrown:
"PANIC: CPU too old for this kernel"

Why?:
VirtualBox does not support PAE, which the Ubuntu server kernel assumes to be present on the platform it’s being installed on. When the installation is over and the machine reboots, it’s in for a surprise… woah… no PAE?!!

The resolution:
1. Boot from the CD (or the mounted image), choose rescue mode, and get a shell in the / (root). (Rescue mode gives you the option of an ‘installer shell’ or an ‘installed shell’. Choose the latter.)
2. Install linux-generic (instead of the default linux-server). This can be done by:

sudo apt-get install linux-generic

[Make sure you’re connected to the Internet (and proxies are configured), so that you get the latest packages.]

3. Remove linux-server:

sudo apt-get remove linux-server

4. Exit the root shell, and reboot the machine.

[Some people said that they had followed the same approach and had to fix the file corruption (sync) issues manually using fsck. I’m not sure, because I didn’t face any such issue.]

That’s about it! Oh, and there’s a just-one-more-thing:
5. If you find the boot menu irksome, you could edit:

/boot/grub/menu.lst

Hash out (comment out) the bad bad linux-server part.
[As general advice (which would save me a number of curses too): please back up each file prior to editing it!]
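For example, something along these lines (a sketch; the exact kernel versions and menu entries will differ on your machine):

sudo cp /boot/grub/menu.lst /boot/grub/menu.lst.bak   # back up first
sudo nano /boot/grub/menu.lst                         # or your editor of choice

…and prefix the linux-server stanza with ‘#’, so it ends up looking roughly like:

# title   Ubuntu 7.10, kernel 2.6.22-14-server
# kernel  /boot/vmlinuz-2.6.22-14-server root=... ro quiet splash
# initrd  /boot/initrd.img-2.6.22-14-server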

Links/References:
1. http://www.virtualbox.org/wiki/Downloads : Get VirtualBox
2. http://www.ubuntu.com/products/WhatIsUbuntu/serveredition : Get Ubuntu Server 7.10
3. https://bugs.launchpad.net/ubuntu/+source/virtualbox-ose/+bug/126863 : Bug description for Ubuntu
4. http://tombuntu.com/index.php/2007/09/05/making-ubuntu-server-work-in-virtualbox/ : A more straightforward approach to the same issue, which I saw after having written this post. 🙁

Confessions of a Linux newbie

Well, it happened again (to be versed on the lines of a famous track)! And I was awed by the charms of Ubuntu 7.10.

Being the Linux newbie that I am, I was looking for a distribution which would leave little for me to do when it comes to installation and getting the hardware working. I remembered hearing the name ‘Debian’ a number of times from some Linux pros. So I downloaded 4.0_r2, burnt the basic installation, and tried installing it.

Set-up wasn’t a hassle, apart from some minor hiccups. It was only later that I realised those hiccups would actually pose a bigger issue. Once installed, the poor OS couldn’t detect the graphics card, and X crashed. Some googling led me to download Linux-specific drivers for the card.
But since my Debian install was a basic one, I couldn’t build them. That led to another issue: using ‘apt’ required setting up the wireless card, and getting the wireless drivers running depended on a MAC package. Building the MAC package probably required the Debian sources to be available, which takes us back to the issue of Debian being a basic install (chicken-and-egg?).

After hours of effort, and no progress, I finally gave up!

I also had the Ubuntu 7.10 disk downloaded ‘just-in-case’. My case, it seemed, was strong enough to give it a try. And that’s the basis of the awe I mentioned in the beginning.
With absolutely no hassle, almost all the hardware on the machine got detected. Even the wireless, which I expected the least! The only remaining issue is getting the sound to work, which, I presume, would take little effort (aka googling). I am glad.

I loved the GUI, and the fact that it’s supported for free by a group of people who are enthusiastic about free software. I can’t help them much technically, but I guess there are other ways I could. Let’s see. Once installed, the Synaptic updater got me the latest recommended security patches.

An aside to the above: it seems so cool when you show off your Linux installation to the Windows junta. The other day, I was telling someone… that I do an ESR in this office, just because I worked at NCST for some time. 🙂

Anyways, final word: if you’re new to Linux, do give Ubuntu 7.10 a try. You’ll love it!

References (apart from the links included above):
1. http://www.linux-on-laptops.com/hosted/Dell-Vostro-1400-Ubuntu-Gutsy.html : On how to get Ubuntu working on Dell laptops (the write-up is for the Vostro, but I guess it would be more or less the same for the Inspiron/Latitude range)
2. http://www.ubuntu.com/getubuntu : Get Ubuntu
3. http://www.ubuntu.com/products/whatisubuntu/desktopedition : Ubuntu info
4. http://www.debian.org/distrib/ : Get Debian

Sorting using Comparable or Comparator

Of late, I’ve been involved in taking some interviews. Many of the candidates seem to be confused about the very simple concepts of Comparator and Comparable.
Comparable is an interface whose contract is to provide your own implementation of the compareTo() method. For example:


// compares on the basis of last name
class MyVO implements Comparable {

    private String firstName;
    private String lastName;
    ...

    public int compareTo(Object o) {
        return lastName.compareTo(((MyVO) o).getLastName());
    }

    ...
}

Comparator, on the other hand, is similar to the C concept of function pointers (functors). In Java, we incorporate this by defining a class which implements Comparator and overriding the compare() method.
If we think a bit more broadly, we can relate this to the Strategy design pattern — provide different Comparator implementations for different situations. For example, you could switch between sorting on the basis of lastName and firstName at runtime, based upon user input.

A Comparator implementation is as follows:


import java.util.Comparator;

// sorts MyVO objects by last name
class LastNameComparator implements Comparator {

    public int compare(Object o1, Object o2) {
        return ((MyVO) o1).getLastName().compareTo(((MyVO) o2).getLastName());
    }
}

Once this is done, there are numerous ways in which you could sort your Collection.
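As a quick illustration (a minimal sketch; it assumes MyVO has a (firstName, lastName) constructor, which isn’t shown above), java.util.Collections can sort using either approach:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SortDemo {

    public static void main(String[] args) {
        // assumes a MyVO(firstName, lastName) constructor (illustrative only)
        List people = new ArrayList();
        people.add(new MyVO("Wallace", "Wensleydale"));
        people.add(new MyVO("Gromit", "Anderson"));

        // natural ordering, via Comparable (compareTo on lastName)
        Collections.sort(people);

        // explicit strategy, via Comparator
        Collections.sort(people, new LastNameComparator());
    }
}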

See: Using the Strategy Design Pattern for Sorting POJOs

Adding a local user under Cygwin

Suppose we want to add Gromit [password: passw0rd] as a Cygwin user.
Issue the following commands on the console:

$ net user Gromit passw0rd /add /yes
$ mkpasswd -l -u Gromit >> /etc/passwd

The first line adds a new user under NT.
In the second line we’re appending the user’s passwd entry to /etc/passwd, which is what makes Cygwin aware of the new user. Now, create a directory in /home for the new user:

/home/Gromit

…which will serve as the user’s Cygwin home directory.
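A minimal way to do this from the Cygwin shell (a sketch; adjust ownership and permissions as you see fit):

$ mkdir -p /home/Gromit
$ chown Gromit /home/Gromit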

Note that the user won’t be able to log in until NT has a place to store his/her profile. For this, either log in as the new user (on the local machine), or issue a command to create the directory

<sysDir>\Documents and Settings\Gromit

…and we should be done!


Reference taken from here

SLSB to Spring

Recently we got stuck on a seemingly easy interaction between EJB and Spring. More specifically, Stateless Session Bean (SLSB) to Spring.
We went through a number of examples on the Internet, but nothing seemed to provide a clear-cut approach. Or maybe they expected me to be intelligent enough :).

Here I present the solution which finally worked for us. This example uses the WebLogic 8.1 server; we’ve not tested it on any other server. I’ll update this post as and when we test it on other servers.

Continue reading SLSB to Spring