Sunday, May 20, 2012

Dual-monitor issue in Ubuntu 12.04

Recently I upgraded the OS of my old Dell Inspiron 1320 to Ubuntu 12.04. Everything seems to work fine, except for the following two issues:

Slow network

This is an old issue that I have experienced since 11.10. It is caused by a compatibility problem between the iwlagn driver and 802.11n. 802.11n, released in 2009, is the latest ratified WLAN standard (802.11ac is still under development), and it offers the highest achievable data rate of the 802.11 family. It is therefore preferred by iwlagn and, unfortunately in our case, enabled by default. To work around this issue, simply disable 802.11n by appending the following line to any of the .conf files under /etc/modprobe.d/:
options iwlwifi 11n_disable=1
Note: iwlwifi here must be the module actually in use on your system; it might be iwlwifi or iwlagn (or something else I don't know about yet). To figure out which one you are using, list the active modules with the following command:

lsmod
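The steps above can be sketched as a couple of shell commands. The conf file path below is a scratch file so the commands run without root; on the real system, point CONF at a file under /etc/modprobe.d/ (any *.conf name works):

```shell
# Sketch: append the 11n_disable option to a modprobe conf file.
# CONF is a temp file here for illustration; on a real system use
# e.g. /etc/modprobe.d/iwlwifi.conf (root required).
CONF=$(mktemp)
echo 'options iwlwifi 11n_disable=1' >> "$CONF"
grep '11n_disable' "$CONF"    # verify the line landed
```

After editing the real file, reload the module (sudo modprobe -r iwlwifi && sudo modprobe iwlwifi) or reboot for the change to take effect.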

Unable to use dual-monitor

When trying to set up dual monitors the standard way (i.e. through the Displays settings), I got the following error:
The selected configuration for displays could not be applied requested position/size for CRTC 147 is outside the allowed limit: position=(1920, 180), size=(1440, 900), maximum=(1920, 1920)
This is an Xorg configuration issue; a simple fix is to adjust the Virtual entry in the Screen section of /etc/X11/xorg.conf so that the virtual screen is large enough to hold both monitors. For example:
Section "Screen"
  Identifier "Default Screen"
  Device "Default Video Device"
  DefaultDepth  24
  SubSection "Display"
    Virtual 1920 1080
  EndSubSection
EndSection
For more info, please refer to this page: http://askubuntu.com/questions/68185/dual-monitors-behaving-strangely-with-ati-mobility-radeon-hd-3650
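As a sanity check, the minimum Virtual size can be derived from the numbers in the error message: the virtual screen must cover each CRTC's position plus its size. With the values above (external monitor at (1920, 180) with size 1440x900, and assuming the internal panel is 1920x1080 at offset (0, 0)):

```shell
# Minimum virtual screen that fits the external CRTC from the error message.
# (Assumption: the internal panel is 1920x1080 at offset (0,0).)
POS_X=1920; POS_Y=180; EXT_W=1440; EXT_H=900
MIN_W=$((POS_X + EXT_W))   # right edge of the external monitor
MIN_H=$((POS_Y + EXT_H))   # bottom edge of the external monitor
echo "Virtual ${MIN_W} ${MIN_H}"   # → Virtual 3360 1080
```

Plug your own offsets and resolutions in; the exact Virtual line depends on how the two monitors are arranged.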

Sunday, May 8, 2011

Bugs I experienced with Natty

1. Bash auto-completion. This bug is described in https://bugs.launchpad.net/ubuntu/+source/bash/+bug/769866; the solution, which is also provided in the bug report, is to change '-o default' on line 1587 of /etc/bash_completion to '-o filenames'.

2. Compiz uses 100% CPU after the machine resumes from sleep. No solution found on the Internet yet. :-(

3. To be continued, even though I hope there will not be any more.
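The bash_completion fix from item 1 can be done with a line-addressed sed substitution. The snippet below demonstrates it on a one-line stand-in file; on a real system, point it at /etc/bash_completion (with sudo) after confirming that line 1587 still contains '-o default':

```shell
# Demonstrate the fix on a stand-in for line 1587 of /etc/bash_completion.
FILE=$(mktemp)
printf 'complete -o default cmd\n' > "$FILE"
sed -i '1s/-o default/-o filenames/' "$FILE"   # on the real file: '1587s/.../...'
cat "$FILE"   # → complete -o filenames cmd
```

The leading line number in the sed expression restricts the substitution to that one line, so nothing else in the file is touched.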

Wednesday, April 27, 2011

Thoughts on password preservation

Usually, within the user-authentication module of a system, usernames and passwords are kept in a database or a similar data structure; moreover, for safety the password should be encrypted (hashed) before being stored in the database.

Here comes the question about the encryption: should the encrypted password be tied to the username (for example, by using the username as a salt)?
I don't know the answer (future work to do), but in my opinion it should be.

Fact in User authentication of OpenNebula:
When doing the user authentication for oneadmin (default admin account), I always get the following error:
Error: [UserPoolInfo] User couldn't be authenticated, aborting call.
I dug deeper into this issue: I looked into one.db and found that there is no password stored for oneadmin, even though I did set one. Here is the output from the user_pool table of one.db:
oid|user_name|password|enabled
0|oneadmin||1
I fixed this in a crude but effective way: I updated its password field with the encrypted password of another account that has the same password. In other words, in this case the encrypted password is NOT associated with the username.
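For reference, this workaround can be scripted. The sketch below assumes the password column holds a plain, unsalted SHA-1 digest of the password (which is consistent with copying another user's hash working, but worth verifying for your OpenNebula release); the password 'abc' is just an example:

```shell
# Assumption: user_pool.password stores SHA-1(password), unsalted.
PASS='abc'                                   # example password only
HASH=$(printf '%s' "$PASS" | sha1sum | cut -d' ' -f1)
echo "$HASH"   # → a9993e364706816aba3e25717850c26c9cd0d89d
# Then write it into one.db directly (stop oned first):
#   sqlite3 one.db "UPDATE user_pool SET password='$HASH' WHERE user_name='oneadmin';"
```

Note that the unsalted digest is exactly why the "borrow another account's hash" trick works: two users with the same password get the same stored value.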

Additional notes to passwordless ssh login

There are plenty of tutorials about passwordless ssh login to remote machines. Here is a simple guide:
1. @local: Generate an RSA key pair (skip this step if one already exists):
   #ssh-keygen -t rsa
2. @local: Append the contents of .ssh/id_rsa.pub to .ssh/authorized_keys on the remote server:
   #cat $HOME/.ssh/id_rsa.pub | ssh USER@REMOTE 'cat >> $HOME/.ssh/authorized_keys'
3. DONE
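Step 2 can be simulated locally to see exactly what ends up on the remote side. In the sketch below, a temp directory stands in for the remote $HOME and the key line is a placeholder (both are assumptions for illustration):

```shell
# Local simulation of step 2: append a public key to authorized_keys.
# REMOTE_HOME stands in for the remote $HOME; the key below is a placeholder.
REMOTE_HOME=$(mktemp -d)
mkdir -p "$REMOTE_HOME/.ssh"
echo 'ssh-rsa AAAAB3...placeholder user@local' >> "$REMOTE_HOME/.ssh/authorized_keys"
chmod 700 "$REMOTE_HOME/.ssh"
chmod 600 "$REMOTE_HOME/.ssh/authorized_keys"
grep -c 'ssh-rsa' "$REMOTE_HOME/.ssh/authorized_keys"   # → 1
```

In practice, ssh-copy-id USER@REMOTE (shipped with OpenSSH) performs step 2, including the permission fixes, in a single command.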

Additional notes:
SSH has strict requirements on the permissions of the $HOME directory on the remote machine. If the method above fails, check the following:

@remote: #chmod 755 $HOME
@remote: #chmod 700 $HOME/.ssh
@remote: #chmod 744 $HOME/.ssh/authorized_keys

Tuesday, March 8, 2011

Remove unused metrics from Ganglia

As described on the Ganglia wiki page (or IBM's related post), there are currently two ways to add custom metrics to Ganglia (3.1.7):

  • Spoofing with gmetric
  • Writing a loadable metric module in Python or C
Every time we modify the module (or simply don't need it anymore), the old metric stays in the system, and a new graph corresponding to the module is added (not updated) on the Ganglia front end. Two approaches can be used to remove the unused metrics:

  • For gmetric: if you go through the man page of gmetric, you will probably notice the -d (--dmax) option, which sets the expiration time of a metric: the metric expires if no new message is received within dmax seconds. Its default value is 0, which means the metric will never expire. To remove an unused metric, simply re-run the gmetric command with --dmax set to a small positive integer.
  • For loadable modules: there is no simple way to do the job. Since each gmond multicasts (the default mode) its information to the collector (gmetad), you probably have to kill not only the gmetad process and the gmond process where the metric originates, but all gmond processes in the same multicast group. The safest way is to restart all gmond and gmetad processes (if that is possible and easy to do). See this post for more details:
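The gmetric expiry trick from the first bullet looks like this (the metric name and value are placeholders; run it on the host that originally spoofed the metric):

```shell
# Re-announce the stale metric with a short lifetime so gmond drops it.
# Metric name/value below are placeholders for whatever you spoofed earlier.
gmetric --name=my_custom_metric --value=0 --type=int32 --dmax=60
# After 60 seconds without a new message, the metric expires and
# its graph disappears from the Ganglia front end.
```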