Friday, January 13, 2012

VMware ESXi 4.1 critical and security update via command line utility

VMware ESXi provides an outstanding virtualization environment, and it's free. Free, however, may mean extra work when it comes to keeping the environment updated.

To search for any patch available, please visit:
http://www.vmware.com/patchmgr/findPatch.portal

Select the correct product and version for your existing environment and you will find the applicable patches.

Download the required patches and make sure the VMware vSphere CLI package has been installed properly, so you can open a Terminal/Command Prompt and run its Perl scripts. Assuming you are working on a Windows desktop, you will find the following paths after the CLI installation:

Embedded Perl:
C:\Program Files (x86)\VMware\VMware vSphere CLI\Perl\bin\perl.exe

vSphere CLI includes its own Perl distribution. It is important to use the absolute path of this Perl executable so the required Perl modules are loaded while the scripts run.

Perl scripting tasks location:
C:\Program Files (x86)\VMware\VMware vSphere CLI\bin\*.pl

Before running the scripts against the ESXi host, shut down all guest VMs and put the ESXi host into maintenance mode. Both can be done via the VMware vSphere Client; install it if you don't have it. The latest free edition is VMware vSphere v4.

The following Perl script commands carry out the specific tasks:


#

#Put ESXi host in maintenance mode

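If you prefer to do this from the command line instead of the vSphere Client, the vicfg-hostops.pl script bundled with vSphere CLI should cover it (a sketch based on the vSphere CLI 4.x documentation; verify the script and its --operation option against your installed version):

>"C:\Program Files (x86)\VMware\VMware vSphere CLI\Perl\bin\perl.exe" vicfg-hostops.pl --server IP_ADDRESS_ESXi_HOST --operation enter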


#Check ESXi host information

>"C:\Program Files (x86)\VMware\VMware vSphere CLI\Perl\bin\perl.exe" vihostupdate.pl --server IP_ADDRESS_ESXi_HOST -operation info



#Make a query on any patch installed on ESXi host

>"C:\Program Files (x86)\VMware\VMware vSphere CLI\Perl\bin\perl.exe" vihostupdate.pl --server IP_ADDRESS_ESXi_HOST --query



#List all updates available in the downloaded patch package

>"C:\Program Files (x86)\VMware\VMware vSphere CLI\Perl\bin\perl.exe" vihostupdate.pl --server IP_ADDRESS_ESXi_HOST --list --bundle "C:\temp\update-from-esxi4.1-4.1_update02.zip"



#Compare and see which patch bulletins are applicable to the ESXi host

>"C:\Program Files (x86)\VMware\VMware vSphere CLI\Perl\bin\perl.exe" vihostupdate.pl --server IP_ADDRESS_ESXi_HOST --scan --bundle "C:\temp\update-from-esxi4.1-4.1_update02.zip"



#Perform actual installation of patch with bulletin specified

>"C:\Program Files (x86)\VMware\VMware vSphere CLI\Perl\bin\perl.exe" vihostupdate.pl --server IP_ADDRESS_ESXi_HOST --install --bundle "C:\temp\update-from-esxi4.1-4.1_update02.zip" --bulletin ESXi410-Update02



#Once completed, reboot the ESXi host, take it out of maintenance mode, and start the guest VMs for testing
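
Similarly, here is a sketch of the reboot and the return to normal operation with the same vicfg-hostops.pl script (again, verify the options against your vSphere CLI version):

>"C:\Program Files (x86)\VMware\VMware vSphere CLI\Perl\bin\perl.exe" vicfg-hostops.pl --server IP_ADDRESS_ESXi_HOST --operation reboot

>"C:\Program Files (x86)\VMware\VMware vSphere CLI\Perl\bin\perl.exe" vicfg-hostops.pl --server IP_ADDRESS_ESXi_HOST --operation exit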


Monday, January 9, 2012

Remotely unlocking LUKS encrypted root partition via SSH

It has been a while since the famous DropBear script was invented and became an Ubuntu package that streamlines unlocking an encrypted partition remotely from a client computer.

Recently, I came across an inspiring article describing a practical approach to this feature. Of course, it still involves some manual work: the server reboots and waits for your input before the partition is unlocked.

A working solution tested on Ubuntu 9.10 Karmic Koala:

Assume you have an Ubuntu machine with a fully encrypted partition and you would like to unlock this partition during boot-up, but you are not sitting in front of the machine. You can certainly do it remotely from another computer.

On Ubuntu machine:

To set up the Dropbear SSH server:
$ sudo apt-get install dropbear busybox


Please note that a package like early-ssh is not required in this case; it is assumed that a working OpenSSH server has already been installed and is running properly.

Later versions like Ubuntu 11.04 might have problems with the original dropbear script and require manual modification. It is recommended to refer to this link for more information.

Now update the initramfs:

$ sudo update-initramfs -u

It's time to enable the root account in Ubuntu, since only the root user can log in to the Dropbear SSH server successfully.

$ sudo passwd root



Take extra precaution here: enabling the root account can lead to a security issue on your OpenSSH service. It is recommended to disable root login for OpenSSH by changing the following line in /etc/ssh/sshd_config.
# change in /etc/ssh/sshd_config

PermitRootLogin no

This helps prevent outsiders from trying to log in as root.

Within the Dropbear SSH session, however, the root account is still effective. After the unlock operation is finished, Dropbear lets the regular OpenSSH server take over and retires gracefully. You can therefore expect the Dropbear session to be disconnected after you enter the correct passphrase to unlock the partition; you will then need to log in again through the OpenSSH session instead.

Before restarting the machine, it is necessary to copy the private key out of the Dropbear SSH server.
A reference command would look like this (assuming you use a Linux client to connect to the Ubuntu machine):
#Use SCP to copy the private key from the remote machine to the local Linux machine
scp user@remote.server:/etc/initramfs-tools/root/.ssh/id_rsa ~/.ssh/remote_dropbear_id_rsa


If you are using a Windows client, you can download WinSCP to copy the file directly onto your desktop. You might get an error while copying due to file permissions, which means you need to set proper file permissions on id_rsa on the Ubuntu machine before you copy it out.
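
One possible workaround (assuming the error is simply that the key is readable by root only) is to relax the permission on the Ubuntu machine just for the copy, then tighten it again afterwards:

#Allow the key to be read for the copy
$ sudo chmod 644 /etc/initramfs-tools/root/.ssh/id_rsa
#...copy the file with scp or WinSCP, then restore the strict permission
$ sudo chmod 600 /etc/initramfs-tools/root/.ssh/id_rsa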

After that, reboot your Ubuntu machine and wait for the passphrase prompt. Behind the scenes, the Dropbear SSH server is also loaded and waits for an authenticated client to enter the passphrase remotely. Who is the authenticated client? The one holding the private key generated by the Ubuntu machine.

On Client computer:

Assuming you use a Windows client to connect, you will need two pieces of software to make it work:
Putty.exe
Puttygen.exe

The comprehensive PuTTY package includes both of these executables once you download and install its Windows installer from here.

The private key generated on Linux needs to be converted to a .PPK file before it can be imported into the PuTTY client; that's why we need Puttygen.exe. For instructions, please refer to this link under the section "Converting the OpenSSH private key to Putty format". To log in remotely with the converted private key, refer to the section "Logging in Openssh using id_rsa.ppk".

When the Ubuntu machine has rebooted and is waiting for passphrase input, you can log in remotely from the client computer with Putty.exe and the proper private key. Bear in mind that you need to log in as the root user to reach the Dropbear SSH console.

Once you have logged in successfully to the Dropbear console, you can enter the passphrase from there with a single command:

$ echo -ne "**encryptionpassphrase**" > /lib/cryptsetup/passfifo


where **encryptionpassphrase** is the passphrase required to unlock the encrypted partition.

You may not notice anything in the console. If the passphrase matches, the Ubuntu machine unlocks the disk partition and continues to boot at the other side. You can close the Dropbear console and then log in again to the OpenSSH console of the Ubuntu machine with your regular user account from the client computer.

Note: an issue has been experienced with the IP address handed out by the DHCP server. To prevent a different IP address being allocated to the Ubuntu machine after a reboot, it is necessary to set up a static IP configuration so the address stays unchanged even after the Dropbear SSH server loads. Please refer to this link for more information.
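
One way to do that (a sketch only; the addresses below are placeholders, and the IP= value follows the kernel/initramfs-tools ip= format of client-ip::gateway:netmask:hostname:device) is to hard-code the address in /etc/initramfs-tools/initramfs.conf and rebuild the initramfs:

#Example static IP line in /etc/initramfs-tools/initramfs.conf
IP=192.168.1.50::192.168.1.1:255.255.255.0:myubuntu:eth0

#Rebuild the initramfs so the change takes effect
$ sudo update-initramfs -u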



Friday, January 6, 2012

Merge VMWare snapshot with base VM to reduce size of vmdk

One server is hosting VMware Server v2.x, and a VM has been reverted to a previous snapshot after a crash. Within the host, the snapshot vmdk and the base vmdk files now co-exist even after I remove the snapshot via the GUI menu item. The total size of the vmdk files is huge, almost double the current disk size of the guest VM.

To merge those vmdk files and reduce their size, the freeware VMware Converter really does help. By converting the existing VM into a new VM, the multiple .vmdk files are merged and reduced in size in the destination folder, leaving one single, smaller .vmdk file.

Before the conversion, the guest VM must be shut down completely. When it finishes, you will get a .vmx file and a smaller .vmdk file.

Furthermore, you may want to actually shrink the disk from within the guest VM. This is possible if you have installed the latest version of VMware Tools inside the guest VM.


Shrinking a disk is a two-step process:

In the first step, called wiping, VMware Tools reclaims all unused portions of disk partitions (such as deleted files) and prepares them for shrinking. Wiping takes place in the guest operating system.

The second step is the shrinking process itself, which takes place on the host. Workstation reduces the size of the disk's files by the amount of disk space reclaimed in the wipe process.


For the first step, follow the instructions below inside the running target guest VM:


1. Launch the vmware-toolbox.

Windows guest — double-click the VMware Tools icon in the system tray, or choose Start > Settings > Control Panel, then double-click VMware Tools.
Linux or FreeBSD guest — become root (su -), then run vmware-toolbox.

2. Click the Shrink tab.

3. Select the virtual disks you want to shrink, then click Prepare to Shrink.

A dialog box tracks the progress of the wiping process.

Note: If you deselect some partitions, the whole disk is still shrunk. However, those partitions are not wiped for shrinking, and the shrink process does not reduce the size of the virtual disk as much as it could with all partitions selected.

4. Click Yes when VMware Tools finishes wiping the selected disk partitions.

A dialog box tracks the progress of the shrinking process. Shrinking disks may take considerable time.

5. Click OK to finish.
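
On a Linux guest without a graphical desktop, the command-line tools utility may offer an equivalent wipe-and-shrink step in one go (vmware-toolbox-cmd and its disk shrink sub-command ship with later VMware Tools releases; check that your installed version has it):

#Inside the Linux guest: wipe and shrink the disk backing the / mount point
sudo vmware-toolbox-cmd disk shrink /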

For the second step, you are ready to issue a command in the terminal on the host machine.

Assuming the host machine is running Windows, you can issue a command like this (change the target vmdk path accordingly):

"C:\Program Files\VMware\VMware Server\vmware-vdiskmanager.exe" -k  "\*destination*\target.vmdk"


Reference: http://www.vmware.com/support/ws5/doc/ws_disk_shrink.html


Tuesday, December 6, 2011

PHP Accelerator for XAMPP on Linux

Since PHP 5.3 was released, it has been even harder to find a stable accelerator to boost performance. Recently, I searched through the news and blogs for any breakthrough in PHP accelerators and found some good news. One of them, APC, has released new packages which claim to fully support PHP 5.3. So far it has received good responses from users regarding stability and performance. APC is also expected to become the built-in accelerator in PHP 5.4 soon.

To install the new APC package from scratch, we need a few more things installed on the Linux platform:

Packages required:
Autoconf: Whatever version is fine.
XAMPP: Version from 1.7.2 to 1.7.7 should be okay.
XAMPP development source: This must correspond to the version of the installed XAMPP package.
APC: Latest APC package available for the best support.

To install Autoconf on Linux, like Ubuntu:
#
$sudo apt-get install autoconf


To install XAMPP:
Download and install XAMPP package from http://www.apachefriends.org/en/xampp-linux.html
#
$wget http://www.apachefriends.org/download.php?xampp-linux-1.7.7.tar.gz

$sudo tar xvfz xampp-linux-1.7.7.tar.gz -C /opt


Of course, you need to finish the basic setup and make sure Apache server and MySQL are running before you proceed to the following steps.
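
For reference, the bundled control script can bring the whole stack up in one go (assuming the default /opt/lampp install path):

#Start Apache and MySQL under XAMPP
$sudo /opt/lampp/lampp start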

To install XAMPP development source:
Use the following link and modify the XAMPP package version as required.
Here, assuming we have installed the XAMPP 1.7.7 package:
#
$wget http://www.apachefriends.org/download.php?xampp-linux-devel-1.7.7.tar.gz




$sudo tar xvfz xampp-linux-devel-1.7.7.tar.gz -C /opt




To compile and install APC source:
#Download and extract source files

#
$wget -O APC-latest.tar.gz http://pecl.php.net/get/APC

$tar xvfz APC-latest.tar.gz

$cd APC-*


#run phpize while in APC source directory
#
$sudo /opt/lampp/bin/phpize

#Configure the source, pointing the config path to the XAMPP php-config file
#
$sudo ./configure --with-php-config=/opt/lampp/bin/php-config

#Compile and install
#
$sudo make

$sudo make install


Now, add a new line to php.ini and restart the Apache server to initialize the APC module:

#
$sudo sh -c "echo 'extension=apc.so' >> /opt/lampp/etc/php.ini"

$sudo /opt/lampp/lampp stopapache



$sudo /opt/lampp/lampp startapache


To check if APC is running, type the following command:
#
$sudo /opt/lampp/bin/php -r 'echo phpinfo();' | grep apc --color

You should then see some information about the running APC module if nothing has gone wrong.
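
Optionally, a few APC runtime settings can be tuned in the same php.ini. The directive names below are standard APC options, but the values are purely illustrative; also note that older APC releases expect apc.shm_size as a plain number of megabytes rather than with an "M" suffix:

; optional APC tuning in /opt/lampp/etc/php.ini
apc.enabled=1
apc.shm_size=64M
apc.ttl=7200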





Thursday, November 24, 2011

Google Health: a valuable lesson on eHealth project

Just got the news that Google is going to tidy up its product portfolio before the year end. Google Health has been chosen for termination, and it is actually not the only project to be closed down by the big boss in 2012. Medical records always carry a legal burden which hinders developers in terms of data sharing and peer review; in many cases they are simply not ready to be opened to the public. One way or the other, users need to trust their treating doctors and let them read the records. Once the records go online, the service provider must take on the responsibility of protecting the patient's data, which is bound to laws like HIPAA in the U.S. Different countries have different laws governing how clinical data can be shared and handled, and any mistake between the service provider and the users immediately turns into a legal issue. Even though Google has a team of lawyers to guard against the lawsuits that could potentially arise in the U.S., that only adds to the cost of protecting clinical data assets the service provider does not legally own. As the wise reckon, Google tried to use a small team to conquer a large universe of data, but that universe of data was not there. It may be residing somewhere under the influence of the law; nonetheless, it is simply out of reach for the Internet giant.

Friday, November 18, 2011

Occasional blank page on IE6 during a site visit

Even though the market share of IE6 is shrinking around the world, the truth is that many organizations still deploy this particular browser version for LAN users. I always have doubts about the statistics provided by the IE6 Countdown website.

Anyway, I noticed some of the client computers are still running IE6 SP2, which is a bit better on security but still not flawless. IE6 is supposed to handle HTTP/1.1 connections well and always sends out HTTP/1.1 requests. If not sure, have a look at the following URL and check the appropriate option to force IE6 to send HTTP/1.1 requests:

http://www.ehow.com/how_6516962_fix-internet-explorer-provided-dell.html

Okay, even when the client side is ready to handle HTTP/1.1, the proxy server or even the web server at the far side may not send back an HTTP/1.1 response. In the case of an Apache 2.x server, the default settings suppress the HTTP/1.1 response and send back an HTTP/1.0 response instead, specifically for Internet Explorer.

You may find the settings under httpd-ssl.conf as follows:


#
BrowserMatch ".*MSIE.*" \
nokeepalive ssl-unclean-shutdown \
downgrade-1.0 force-response-1.0

According to EricLaw's opinion, this actually forces the Apache server to give out an HTTP/1.0 response to any version of IE, even if IE sends out an HTTP/1.1 request at the client side. This can lead to intermittent blank pages on IE6, which is error-prone with this kind of response from the web server.
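
To see which protocol version the server actually answers with, you can send a HEAD request while spoofing an IE user agent with curl (the user-agent string and host name below are only placeholders):

#Fetch just the response headers as "IE6"; the first line shows HTTP/1.0 or HTTP/1.1
curl -sSIk -A "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)" https://your.server.example/ | head -n 1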


Some people suggest simply commenting out the settings to avoid this problem. However, IE6 is not well-behaved enough if we drop the nokeepalive and ssl-unclean-shutdown directives from the filter; other problems may still arise with IE6 or later versions.

To overcome most of the problems among those different versions of IE, we put in more detailed criteria to selectively tackle each particular browser version.

The recommended settings would be like these:


#
#IE Version 2 to 5 should be downgraded to HTTP/1.0 for compatibility
#IE version 1.0 does not even support HTTP/1.1, so it is ignored
BrowserMatch ".*MSIE [2-5]\..*" \
nokeepalive ssl-unclean-shutdown \
downgrade-1.0 force-response-1.0
#
#Prudent settings for newer version of IE
BrowserMatch ".*MSIE [6-9]\..*" \
nokeepalive ssl-unclean-shutdown


Please note the backslash-escaped dot ("\.") in the expressions: the backslash makes the first dot a literal version separator, while the following ".*" is the usual wildcard.

Aside from some performance degradation on IE, this should keep those browsers running fairly well at the client side. In the future, we might have to think about how to handle IE10, IE11 or IE12 if they are released; hopefully those problems will be gone in the newer releases of IE.

After all, Firefox, Safari and Chrome do not show any symptoms like this. So, fingers crossed ;)


Monday, November 7, 2011

In-depth look into Path MTU Discovery

Recently, I have been searching for an ultimate solution to remedy situations where TCP traffic doesn't come through at the client side. The symptom can be an occasional blank page in the browser.

It pointed me back to studying the root cause: a broken router or misconfigured firewall settings. For security reasons, some network administrators block ICMP traffic from the WAN, yet the ICMP (type 3, code 4) packet is a key element of the traditional Path MTU Discovery (PMTU-D) process initiated by the server.

Normally, the MTU defaults to 1500 bytes in a perfect network environment, especially on the LAN. That is not the case in real-world situations, where the MTU may vary along the path. PMTU-D solves this problem in most cases: the sender marks packets with the DF (Don't Fragment) flag, and the ICMP feedback from the path lets it adjust the packet size so the traffic gets through without fragmentation. However, a firewall administrator may simply block those ICMP packets for security purposes, and then the feedback never arrives.

Fortunately, an IETF working group has worked out another way to detect the path MTU without the need for ICMP packets. Packetization Layer Path MTU Discovery (PLPMTUD, RFC 4821) works at the layer above IP, i.e., TCP or UDP, which lets it operate independently of ICMP messages. It starts probing with a small packet size set as the initial MSS, then progressively increases to a larger size until packet loss happens. The optimum size is then used as the MTU for that particular connection.

On Linux, there are several network parameters for tweaking:

/proc/sys/net/ipv4/tcp_mtu_probing
/proc/sys/net/ipv4/tcp_base_mss
/proc/sys/net/ipv4/ip_no_pmtu_disc


Possible values of tcp_mtu_probing are:
0: Don't perform PLPMTUD
1: Perform PLPMTUD only after detecting a "blackhole" in old-style PMTUD
2: Always perform PLPMTUD, and use the value of tcp_base_mss as the initial MSS.

Setting tcp_mtu_probing to 1 ensures that PLPMTUD kicks in only when a black-hole router is detected along the way to the destination IP.

The default value of tcp_base_mss is 512 and can usually be left unchanged.

ip_no_pmtu_disc defaults to 0, meaning traditional PMTU-D can be used at all times. Setting it to 1 seems to make the kernel skip the old-fashioned, ICMP-based way of detecting the MTU entirely.
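
As a minimal sketch on Linux, the blackhole-fallback mode can be switched on at runtime with sysctl (add the same setting to /etc/sysctl.conf to make it persistent):

#Enable PLPMTUD as a fallback when a PMTU-D black hole is detected
$ sudo sysctl -w net.ipv4.tcp_mtu_probing=1

#Confirm the current value
$ cat /proc/sys/net/ipv4/tcp_mtu_probing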

Ref:
http://kb.pert.geant.net/PERTKB/PathMTU
http://www.znep.com/~marcs/mtu/
http://kerneltrap.org/mailarchive/linux-net/2008/5/24/1928074/thread



Wednesday, November 2, 2011

Back to the old school days of TCP/IP

Just imagine that you are hosting your favorite web site, and it loads smoothly at your side, whether at home, on the bus or in the coffee shop. Then one day you receive a complaint from an old school friend who challenges your web authoring skills: sometimes all he saw while visiting your web site was a blank page. Blank as in white, with no icon, no text, not even an error message, purely blank. Pressing the [Refresh] button makes things appear again. It sounds like you have been hosting a broken web server, yet from your friend's computer every other web site was fine except this one.

"Is the web site too crippled for me to visit?!", you might ask. No worries, there is nothing wrong with the web server. Bad client settings in the web browser? Maybe.

However, you might not work out why the web browser displays a blank page like this. In my opinion, it indicates that the web traffic doesn't actually come through. The Internet is a huge network coordinated by different routing equipment, linked up with fiber optics, copper wires or even wireless signals in the air. Over the years, the speed of the Internet has grown faster than ever before. People have changed their gear from a wired 56K modem to today's USB or built-in Wi-Fi adapters transmitting at up to 300 Mbps, and they might forget those days when scientists were excited just to send data packets through a coaxial cable successfully.

There are some articles reviewing one of the common TCP/IP features, called TCP Extensions for High Performance, a.k.a. RFC 1323. You may be interested in having a look at this request for comments in bare text format, dating back to 1992.

http://www.ietf.org/rfc/rfc1323.txt

Among them, the TCP Window Scaling option contributes to faster packet transfer by allowing the receive window to grow beyond the classic 64 KB limit and to be adjusted dynamically. This boosts the rate of data transfer to the next level, and a single server can significantly increase its network capability by turning this feature on.

This has been an industry standard for years. Still, people argue that it has not been implemented well on some routers and causes all sorts of problems in basic TCP/IP communication.

A blogger pointed out that some routers reset a vital parameter during TCP communication, leading to a misunderstanding between the computers at both ends.

http://inodes.org/2006/09/06/tcp-window-scaling-and-kernel-2617/

In this case, people are normally not aware of the problem until they realize they cannot access some web sites and assume the sites might be down for maintenance.

There might be broken routers or switches along the path to the target computer; they might be poorly managed until, finally, doomsday comes. It may also have been a long time since the network administrator updated the firmware on those routers. So, what about your side? Can you do something to ease the problem, even knowing their router won't last long? In the best interests of the end users, we still need to find a way to sort this out.

Okay. In order for TCP window scaling to work, the TCP Window Scaling option must be turned ON at both ends; if either one has it turned OFF, window scaling does not happen. Nowadays, this basic TCP feature is usually turned on at the client side. Therefore, by sacrificing a bit of your server performance, you can turn off the TCP Window Scaling option on your server. What I call "a bit" can make a huge difference to the user experience at the far side; depending on the actual server performance, it may take longer to transmit the same set of data packets with the option OFF.
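
On a Linux server, for instance, the option can be switched off at runtime with sysctl (a sketch only; add the line to /etc/sysctl.conf if it turns out you need to keep it across reboots):

#Turn off TCP window scaling on the server side
$ sudo sysctl -w net.ipv4.tcp_window_scaling=0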

If none of this works for you, then check your browser settings or hardware such as the network adapter. You may find the reason there.

Reference materials can be found in the following links:

http://prowiki.isc.upenn.edu/wiki/TCP_tuning_for_broken_firewalls

http://wiki.squid-cache.org/KnowledgeBase/BrokenWindowSize

http://packetlife.net/blog/2010/aug/4/tcp-windows-and-window-scaling/

Monday, September 19, 2011

KeepAlive or not?

People have been discussing turning an Apache feature on or off, while I have been scratching my head wondering why clients at the far side suffer slow HTTPS connections behind a NAT router and firewall.

The HTTP/1.1 protocol provides a nice feature, KeepAlive, which lets the Apache server keep an active connection open to serve ongoing requests from the same client. In the case of a prefork MPM Apache instance, it means a worker process is locked to this particular client and is not released until the client is completely served and lets go.

For dynamic web applications written in Perl, PHP or any other scripting language, a prefork process may be occupied by these resource-intensive requests, which causes heavy load at the server side. KeepAlive prolongs the period during which the process works for one client only while others are waiting for service, even though the process could otherwise serve many other clients in that same period. A temporary solution is to turn the KeepAlive feature off in a configuration file such as "httpd-default.conf".
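
As a sketch, the change in httpd-default.conf (or wherever your distribution keeps these directives) is a single line; the commented alternative keeps KeepAlive but shortens the idle window, with values that are only illustrative:

#
# Turn persistent connections off entirely
KeepAlive Off
#
# Or keep them but limit how long an idle worker is held
# KeepAlive On
# KeepAliveTimeout 2
# MaxKeepAliveRequests 100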

A blogger, Steve, has found a programmatic way to sort this out; it may not be as good as a definitive setting embedded in the Apache server itself, but it is still a workable solution.

For a better understanding, please read this.



Linux VM TCP/UDP Tuning

Network optimization among server VMs is a hot topic, as seen in recent discussions. Going back to the old-school optimization strategies, I find them useful in the VM world.

The following are Linux network settings for a high-throughput interface card:


#
#
# Add these lines to /etc/sysctl.conf as appropriate
#

vm.swappiness = 10

net.core.wmem_max = 8388608

net.core.rmem_max = 8388608

net.core.rmem_default = 65535

net.core.wmem_default = 65535

net.ipv4.tcp_rmem = 4096 87380 8388608

net.ipv4.tcp_wmem = 4096 65536 8388608

net.ipv4.tcp_mem = 8388608 8388608 8388608

net.ipv4.route.flush = 1

net.ipv4.tcp_window_scaling = 1

net.ipv4.tcp_timestamps = 1

net.ipv4.tcp_sack = 1
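
To apply the settings without a reboot, reload them with sysctl (assuming they were added to /etc/sysctl.conf as above):

$ sudo sysctl -p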

Thanks for the tips:
http://wwwx.cs.unc.edu/~sparkst/howto/network_tuning.php
http://fasterdata.es.net/fasterdata/host-tuning/linux/