Thursday, December 6, 2012

IE6 SELECT element overlapping the front DIV element issue (Revisited) - An ultimate solution

IE6 has long been deprecated, yet some organisations still insist on keeping it as the standard browser on their Windows desktops. IE6 + WinXP is the combination that has given me the biggest dilemma so far: push the webapp forward, or step back and take care of these users.

"Well, customer first!", it bit me as the old man said.

Since we like easy fixes, we would prefer a scripting way to deal with it. Actually, there is no easy fix at all until you know what you are doing.

A nice article with follow-ups by an ASP.NET developer covers this well, and it applies just as well to projects using PHP, Python, Ruby or Perl. Anyway, we will go the jQuery way here.

People have long argued about using an iframe to overcome the following situation:


  • People work on web applications that must support IE6.
  • People use Javascript to create dynamic pop-up menus with DIV elements.
  • People would like the DIV overlay to sit on top of the current web page, which has SELECT elements underneath.
  • The DIV overlay covers every other web element except SELECT boxes, and it looks ugly.

In the IE6 implementation, the SELECT element happens to be one of the windowed ActiveX controls. This doesn't apply to Firefox, Chrome or even Safari, as they don't use ActiveX technology.

ActiveX controls are effectively given a higher z-index than any other web element, so they always appear on top. This makes for a bad user experience, because the DIV overlay menu cannot cover a SELECT element. Users may complain and say that your app has a bug. Well, a bug in IE6, indeed.

The old way was to hide the SELECT elements underneath, but that gets out of control when the webpage already contains some hidden SELECT elements that you don't want to make reappear by accident.

Since this is all about ActiveX objects, we will go the ActiveX way to resolve it.

Solution! Solution!

Among those ActiveX objects, IFRAME fortunately happens to be one of them, too. So we can use a transparent IFRAME to cover everything, including existing ActiveX objects like SELECT. An ActiveX object created after an older one ends up stacked on top of it.

Some people suggest simply embedding the IFRAME element inside the DIV container, but it doesn't really work when the page is accessed over an SSL connection (HTTPS), and most secured web applications provide their services over HTTPS. In that case, IE6 pops up a security warning because of the Javascript statement in the src attribute.


<div>

...
<iframe id="transparent_frame" src="javascript:false;"></iframe>
</div>

Here comes a better approach:


<div>

...
<iframe id="transparent_frame" src="javascript:'<html></html>';"></iframe>
</div>

This displays nothing but an empty HTML page within the iframe element. It keeps the browser silent over HTTPS and does the things we want.

Of course, we need a CSS style for this IFRAME so that it is transparent and covers the whole area of the DIV container (set your favourite width and height, please) without affecting any other elements inside the container.

CSS Stylesheet:


#transparent_frame
{
    position: absolute;
    z-index: -1;
    filter: mask();
    border: 0;
    margin: 0;
    padding: 0;
    top: 0;
    left: 0;
    width: 9999px; /* Set desired width */
    height: 9999px; /* Set desired height */
    overflow: hidden;
}

Is the problem solved? Not yet, it's bloody IE!

One problem that came up was a memory leak in IE when we added a new IFRAME dynamically with jQuery and then removed it. I could watch the memory consumption of IEXPLORE.EXE increase dramatically in Task Manager until the system became unresponsive.

To avoid this, make sure we are not creating a new IFRAME every time the DIV overlay pops up. Reuse the same IFRAME to do the job and things will be fine.

In my experiment, a single IFRAME is created and reused until the next page reload, with no memory leak issue on IE, which is good.
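A minimal jQuery sketch of that idea could look like the following (the container id '#popup_menu' and the show/hide wiring are my own placeholders, not the exact code from the project):

// Create the transparent shim once, then keep reusing it (avoids the IE6 leak)
function getIframeShim() {
    var $shim = $('#transparent_frame');
    if ($shim.length === 0) {
        // Created only on first use; the empty-document src keeps IE6 quiet over HTTPS
        $shim = $('<iframe id="transparent_frame" src="javascript:\'<html></html>\';"></iframe>')
            .appendTo('#popup_menu');
    }
    return $shim;
}

// Showing the overlay: the shim sits behind the menu content but above any SELECT
$('#popup_menu').show();
getIframeShim().show();

// Hiding the overlay: hide the shim instead of removing it, so no new IFRAME is ever created
$('#popup_menu').hide();
getIframeShim().hide();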


Ref:
http://weblogs.asp.net/bleroy/archive/2005/08/09/how-to-put-a-div-over-a-select-in-ie.aspx





Wednesday, November 21, 2012

Tuning and optimization of NGINX for 2 million concurrent HTTP connections

As heard in the news about the crash of the Click Frenzy website, they were preparing to deal with 1 million connections, but it turned out to be more than 2 million.

While talking about a reliable IT infrastructure to support this level of traffic at peak hours, has anyone mentioned NGINX acting as the front-end server to deal with it?

An article from Taobao.com described an experiment using NGINX to sustain 2 million concurrent connections on a single server. I was surprised that people out there shared it on Twitter even though it wasn't posted in English. Here's the translation:


Tuning and optimization of NGINX for 2 Million concurrent connections

For server performance, one of the vital indicators is the maximum number of queries per second, i.e., qps. However, for certain types of applications we care more about the maximum number of concurrent connections than about qps, although qps remains one of the performance indicators. Twitter's comet server belongs to this kind of application; online chat rooms and instant messaging services are similar in nature. For an introduction to comet-style applications, please refer to the previous posts. In systems like these, many messages are produced and delivered to the clients, and the client connections are held open even while idle. When a huge number of clients connect, the system ends up holding a very long queue of concurrent connections.

First of all, we need to analyse the resource consumption of this kind of service, which mainly involves CPU usage, network bandwidth and memory. To optimize the system, we need to find where the bottleneck is. Among those concurrent connections, some may not be sending data at all times and can occasionally be considered idle connections. Connections in the idle state don't actually consume CPU or network resources; they merely occupy some memory.

Based on the assumptions above, the system should be able to support a much higher number of concurrent connections than usual, given an adequate amount of system memory. Could this happen in the real world? If yes, it would also be a challenge for the CPU cores to support such a huge client group.
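As a rough back-of-envelope illustration (my own numbers, not from the original article): if each held-open connection costs on the order of 10 KB between the kernel socket structures and the server's per-connection buffers, then 2,000,000 × 10 KB ≈ 20 GB, which is already close to the 24 GB of RAM in the test server described below. That is also why the kernel buffer sizes and the NGINX pool size are deliberately squeezed down later in this post.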

To test the theory above in an experiment, we need a running server and a huge number of clients, as well as server and client programs to carry out the tasks.

Here’s the scenario:
  • Each client initiates a connection and sends out a request to the server.
  • The connection is held at the server side, with no actual response sent.
  • This state is maintained until the target number of concurrent connections is reached, i.e., 2 million.


1. Preparation of Server-side
As per the assumption above, we need a server with a large amount of memory to deploy the comet application on NGINX. Here's the specification of the server:

  • Summary: Dell R710, 2 x Xeon E5520 2.27GHz, 23.5GB / 24GB 1333MHz
  • System: Dell PowerEdge R710 (Dell 0VWN1R)
  • Processors: 2 x Xeon E5520 2.27GHz 5860MHz FSB (16 cores)
  • Memory: 23.5GB / 24GB 1333MHz == 6 x 4GB, 12 x empty
  • Disk-Control: megaraid_sas0: Dell/LSILogic PERC 6/i, Package 6.2.0-0013, FW 1.22.02-0612,
  • Network: eth0 (bnx2):Broadcom NetXtreme II BCM5709 Gigabit Ethernet,1000Mb/s
  • OS: RHEL Server 5.4 (Tikanga), Linux 2.6.18-164.el5 x86_64, 64-bit


The server-side program is quite simple: an NGINX-based comet module can be written that accepts the user request and then holds the connection open without responding. Apart from this, the NGINX status module can be used to monitor the number of concurrent connections in real time.
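The original article does not show the status configuration, but a minimal stub_status endpoint would look something like this (the port and location name here are my own choices):

server {
    listen 8080;
    location /nginx_status {
        stub_status on;    # reports active connections, accepts, handled, requests
        access_log  off;
        allow 127.0.0.1;   # keep the status page local
        deny  all;
    }
}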

Let’s tweak some system parameters on the server, within the file /etc/sysctl.conf:

net.core.somaxconn = 2048
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 4096 16777216
net.ipv4.tcp_wmem = 4096 4096 16777216
net.ipv4.tcp_mem = 786432 2097152 3145728
net.ipv4.tcp_max_syn_backlog = 16384
net.core.netdev_max_backlog = 20000
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_orphans = 131072

Then, issue the following command to make the new settings effective:
/sbin/sysctl -p


Here are a couple of things to notice:

net.ipv4.tcp_rmem: this allocates the size of the READ buffer. The first value is the minimum, the second is the default and the third is the maximum. To keep the per-socket memory consumption to a minimum, the minimum value is set to 4096.

net.ipv4.tcp_wmem: this allocates the size of the WRITE buffer. Both the READ and WRITE buffer sizes affect how much memory each socket consumes inside the kernel.

net.ipv4.tcp_mem allocates the memory for TCP, measured in pages, not bytes. When memory usage exceeds the second value, TCP enters pressure mode and tries to stabilize its memory usage; it leaves pressure mode when usage drops below the first value. If usage exceeds the third value, TCP refuses to allocate sockets for further connections, and a stream of log entries like "TCP: too many of orphaned sockets" will show up in dmesg on the server.

Also, net.ipv4.tcp_max_orphans should be set to allow a large maximum number of sockets held by no process. This needs to be considered carefully when we create a huge number of connections.

Apart from this, the server needs to be able to open a huge number of file descriptors, i.e., 2 million. There is a problem when setting such a large limit, but no worries; let's talk about this later.

2. Preparation of client side
In this scenario we need to initiate a large number of client connections. However, the number of local ports on one computer is limited. The port range goes from 0 to 65535, with 0 to 1023 reserved, so only around 64511 ports between 1024 and 65534 can be allocated. To achieve 2 million concurrent connections, 34 client computers would be required.
Of course, we can also use virtual IPs to reach this number of clients: each virtual IP gives roughly 64500 ports to bind, so 34 virtual IPs would get the job done. Fortunately, we had the company resources to carry out this experiment on physical machines.
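For reference, binding extra source addresses on a Linux client could be done along these lines (the interface name and addresses below are placeholders of mine, not from the article):

$ sudo ip addr add 192.168.1.11/24 dev eth0 label eth0:1
$ sudo ip addr add 192.168.1.12/24 dev eth0 label eth0:2

The client program would then need to bind() each batch of sockets to a specific local address before connecting, so that every virtual IP contributes its own ~64k ports.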

The default range for automatic port allocation is limited, i.e., from 32768 to 61000, so we need to modify this parameter on the client computers, within the file /etc/sysctl.conf:

net.ipv4.ip_local_port_range = 1024 65535

Then run the following command to make it effective:
/sbin/sysctl -p

The client-side program is written on top of the libevent library and continuously initiates new connection requests.

3. Adjustment of File Descriptors
Due to the large number of sockets required on both the client side and the server side, we need to raise the maximum number of file descriptors.

On the client side, we need to create more than 60,000 sockets, so it's fine to set the limit to 100,000.

Within the file /etc/security/limits.conf, please add these two lines:
admin soft nofile 100000
admin hard nofile 100000

On the server side, we need more than 2,000,000 sockets. Setting something like "nofile 2000000" directly is a problem: the server can no longer be logged into. After several attempts, it turns out the value can only be set up to about 1 million. Checking the kernel source (v2.6.25), there is a globally defined limit of 1024*1024, i.e., roughly 1 million. However, on Linux kernel 2.6.25 or later this can be tweaked through /proc/sys/fs/nr_open. So I couldn't wait to upgrade the kernel to 2.6.32.

For the blog post about “ulimit”, please visit:

After the kernel upgrade, we can optimize it by the following command:
sudo bash -c 'echo 2000000 > /proc/sys/fs/nr_open'

Now we can modify the nofile parameters again within the file /etc/security/limits.conf:

admin soft nofile 2000000
admin hard nofile 2000000
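A quick sanity check (my own addition, not from the article): after logging out and back in as that user, the new limit should be visible with

$ ulimit -n
2000000

If it still reports the old value, check that pam_limits is enabled for the login path being used, since limits.conf is applied through it.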

4. Final test
Throughout the test, dmesg kept showing messages related to the newly tuned port allocation parameters set via sysctl. In the end, we completed a test of 2 million concurrent connections.

To minimize NGINX's memory consumption, the default value of "request_pool_size" was changed from 4k to 1k. Also, the default values of net.ipv4.tcp_wmem and net.ipv4.tcp_rmem were reduced to 4k.
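For reference, the NGINX directive mentioned above would be set roughly like this (a sketch; only request_pool_size comes from the article, the surrounding block is generic):

http {
    ...
    request_pool_size 1k;   # default is 4k; lowered to reduce per-request memory
    ...
}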

At the time of 2 million connections, the following data was collected from NGINX:

(screenshot of the NGINX status output)

At the same time, the state of the system cache:

(screenshot of the system memory and cache usage)

5. Conclusion
Normally, parameters like "request_pool_size" need to be adjusted according to the actual workload. The default sizes of "net.ipv4.tcp_rmem" and "net.ipv4.tcp_wmem" should be adjusted as well.





Wednesday, November 7, 2012

XRDP Login with Black Screen issue

After trying a couple of things on the Linux server and finding nothing wrong with the configuration of the XRDP server, I was wondering what exactly caused this, since I used to be able to log in without any issue using the RDP client on Mac OS X.

Was it an update to the Microsoft RDP client, or something deep inside my connection profile?

Then I found a reply on a forum which pointed me in the right direction.

According to the replies in the forum
http://sourceforge.net/projects/xrdp/forums/forum/389417/topic/4880098,
it looks like the problem was the Domain setting in my connection profile, which had been remembered since I connected to some other servers where a domain needs to be specified. In Preferences, I simply cleared the Domain field, and now my XRDP session can connect again with no further issue.


Instead of the black screen, here's our login session window again!


The same thing may happen with the Windows RDP client as well, so beware of the Domain field when logging in to XRDP next time.











Monday, July 23, 2012

XRDP via SSH session

To use the Windows Remote Desktop Client to connect to a graphical X session on an Ubuntu Linux server, there are three basic components that help make a secured connection:

XRDP server
SSH server
VNC server

The XRDP protocol is used to forward the VNC session to the Windows RDC. For security reasons, this is done inside an SSH tunnel.

The benefit is that the client doesn't need to install a VNC viewer and can use the standard Remote Desktop Client on a Windows computer.

For better security, PuTTY is used to create the SSH connection first and set up port forwarding as follows:

For example:
Port 5555 on the client (any port other than 3389 or 3390) => localhost:3389 on the remote server
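For reference, on a Linux or Mac client the equivalent OpenSSH tunnel would be (username and hostname are placeholders):

$ ssh -L 5555:localhost:3389 user@your.linux.server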

On the server, firewall rules only allow the SSH port to be open to users. Through the SSH session, we can then connect with Windows RDC using a URI like:

localhost:5555

Before all this happens, we need to make sure that the SSH server, VNC server and XRDP server are installed and configured properly on the Linux server.

Use the following commands to install all required servers:

$ sudo apt-get install openssh-server
$ sudo apt-get install vnc4server
$ sudo apt-get install xrdp

I'll skip the SSH and VNC setup here, as you can find lots of references on other forums.

For XRDP, we need to make a small adjustment in the file "/etc/xrdp/xrdp.ini".

Depending on the setup of vnc4server, you need to define the VNC port to be forwarded by the XRDP protocol. Under the section "xrdp1" in "xrdp.ini", comment out the line "port=-1" and add a working VNC port, e.g., port=5901.

[globals]
bitmap_cache=yes
bitmap_compression=yes
port=3389
crypt_level=low
channel_code=1

[xrdp1]
name=sesman-Xvnc
lib=libvnc.so
username=ask
password=ask
ip=127.0.0.1
#port=-1
port=5901

Then restart XRDP service:

$ sudo service xrdp restart

Now, it's time to test from the client side by initiating SSH connection using Putty client.

Then open the Remote Desktop Client on the Windows computer with the URI:

localhost:5555

You should see a GUI login screen for the xrdp session. Using your Linux username and password, you can log in to the VNC session just like a VNC viewer normally does.

Enjoy your RDC!



Thursday, July 19, 2012

Xubuntu 12.04 + XAMPP PPA + XRDP + VMware Tools

Xubuntu 12.04 has been out for a while, and it now comes with a faster way to install the XAMPP package for web developers as well.

Here is a summary of the steps to bring up a new Xubuntu server VM:


After a basic installation of Ubuntu 12.04 LTS Server Edition, install the following package:

$ sudo apt-get install python-software-properties

to get the add-apt-repository command working. This is needed for adding PPA repositories.

Perform the following to get the XAMPP package from the repository:

$ sudo add-apt-repository ppa:upubuntu-com/web

$ sudo apt-get update

$ sudo apt-get install xampp

The XAMPP directory should be under /opt. To start XAMPP, type the command:

$ sudo /opt/lampp/lampp start


To install the basic GNOME desktop:

$ sudo apt-get update

$ sudo apt-get upgrade

$ sudo apt-get install --no-install-recommends ubuntu-desktop


To install the lightweight Xfce desktop instead, use the following command:

$ sudo apt-get install xubuntu-desktop


To install VMware tools:

$ sudo apt-get install build-essential linux-headers-$(uname -r)

Go to vmware "VM" tab to install vmware tools

Proceed with instructions provided.

Keep in mind that whenever you update Ubuntu (and the kernel version changes), you need to run the following script again:

$ sudo vmware-config-tools.pl

I like SSH as it's straightforward and simple to use. But what about running a graphical application, such as an SVN client, on the remote desktop? Apart from a VNC server, XRDP is an alternative for remote GUI access.

Installation of xrdp:


Before anything, it's recommended to upgrade and reboot first:

$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install build-essential linux-headers-$(uname -r)
$ sudo reboot

Now it's ready to install xrdp package:

$ sudo apt-get install xrdp

Edit File "/etc/xrdp/startvm.sh" for xrdp session manager settings:

Replace the last line, ". /etc/X11/Xsession", with "/usr/bin/xfce4-session".
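After the edit, the end of startwm.sh would look roughly like this (a sketch; only the last two lines matter here, the rest of the stock script is left untouched):

#. /etc/X11/Xsession
/usr/bin/xfce4-session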

Try this command to restart the xrdp service:

$ sudo /etc/init.d/xrdp restart


After a reboot, an Xfce xrdp session should be ready for connection. With the Windows Remote Desktop Client, it's easy to connect to the remote desktop on the Xubuntu server.





Friday, January 13, 2012

VMware ESXi 4.1 critical and security update via command line utility

VMware ESXi provides an outstanding environment for virtualization, and it's free. Free may mean extra work is required to maintain this environment when it comes to updates.

To search for any patch available, please visit:
http://www.vmware.com/patchmgr/findPatch.portal

Select the correct product and version for your existing environment and you will find the patches available for it.

Download the required patches and make sure the VMware vSphere CLI package has been installed properly, so you can open up a Terminal/Command Prompt to run its Perl scripts. Assuming you are working on a Windows desktop, you will find the following paths after the CLI installation:

Embedded Perl:
C:\Program Files (x86)\VMware\VMware vSphere CLI\Perl\bin\perl.exe

vSphere CLI includes its own Perl package. It is important to use the absolute path of this Perl binary so that the required Perl modules are picked up during the tasks.

Perl scripting tasks location:
C:\Program Files (x86)\VMware\VMware vSphere CLI\bin\*.pl

Before running the scripts against the ESXi host, it is necessary to shut down all guest VMs and then put the ESXi host into maintenance mode. These steps can be done via the VMware vSphere Client; install it if you don't have it. The latest free edition is VMware vSphere v4.

The following Perl script commands will be required to run specific tasks:


#Put ESXi host in maintenance mode
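#(Sketch, not from the original post: if you prefer to stay in the CLI, the vSphere CLI script vicfg-hostops.pl should be able to enter maintenance mode as well)

>"C:\Program Files (x86)\VMware\VMware vSphere CLI\Perl\bin\perl.exe" vicfg-hostops.pl --server IP_ADDRESS_ESXi_HOST --operation enter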



#Check ESXi host information

>"C:\Program Files (x86)\VMware\VMware vSphere CLI\Perl\bin\perl.exe" vihostupdate.pl --server IP_ADDRESS_ESXi_HOST -operation info



#Make a query on any patch installed on ESXi host

>"C:\Program Files (x86)\VMware\VMware vSphere CLI\Perl\bin\perl.exe" vihostupdate.pl --server IP_ADDRESS_ESXi_HOST --query



#List all updates available in the downloaded patch package

>"C:\Program Files (x86)\VMware\VMware vSphere CLI\Perl\bin\perl.exe" vihostupdate.pl --server IP_ADDRESS_ESXi_HOST --list --bundle "C:\temp\update-from-esxi4.1-4.1_update02.zip"



#Compare and see which patch bulletins are applicable to the ESXi host

>"C:\Program Files (x86)\VMware\VMware vSphere CLI\Perl\bin\perl.exe" vihostupdate.pl --server IP_ADDRESS_ESXi_HOST --scan --bundle "C:\temp\update-from-esxi4.1-4.1_update02.zip"



#Perform actual installation of patch with bulletin specified

>"C:\Program Files (x86)\VMware\VMware vSphere CLI\Perl\bin\perl.exe" vihostupdate.pl --server IP_ADDRESS_ESXi_HOST --install --bundle "C:\temp\update-from-esxi4.1-4.1_update02.zip" --bulletin ESXi410-Update02



#Once completed, reboot the ESXi host, take it out of maintenance mode, and start the guest VMs for testing
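#(Sketch, not from the original post: the matching vicfg-hostops.pl call to leave maintenance mode)

>"C:\Program Files (x86)\VMware\VMware vSphere CLI\Perl\bin\perl.exe" vicfg-hostops.pl --server IP_ADDRESS_ESXi_HOST --operation exit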








Monday, January 9, 2012

Remotely unlocking LUKS encrypted root partition via SSH

It has been a while since the famous Dropbear script was invented and became an Ubuntu package that streamlines unlocking an encrypted partition remotely from a client computer.

Recently, I found an inspiring article with a practical approach to this feature. Of course, it still involves some manual steps: the server reboots and then waits for your input to unlock it.

A working solution tested on Ubuntu 9.10 Karmic Koala:

Assume you have an Ubuntu machine with a fully encrypted partition and you would like to unlock this partition during boot-up, but you are not sitting in front of the machine. With this setup, you can do it from another computer somewhere else.

On Ubuntu machine:

To set up the Dropbear SSH server:

$ sudo apt-get install dropbear busybox


Please note that a package like early-ssh is not required in this case, and it is assumed that a working OpenSSH server is already installed and running properly.

Later versions like Ubuntu 11.04 might have problems with the original Dropbear script and require manual modification. It is recommended to refer to this link for more information.

Now update the initramfs:

$ sudo update-initramfs -u

It's time to enable the root account in Ubuntu, since only the root user can log in to the Dropbear SSH server successfully.

$ sudo passwd root



Take extra precaution: enabling the root account can lead to a security issue on your OpenSSH service. It is recommended to disable root login for OpenSSH by changing the following line in /etc/ssh/sshd_config.
# change in /etc/ssh/sshd_config

PermitRootLogin no

This helps prevent outsiders from trying to log in as root.

Within the timeframe of the Dropbear SSH session, the root account is effective anyway. After the unlock operation is finished, Dropbear lets the regular OpenSSH server take over and retires gracefully. Therefore, you can expect the Dropbear session to be disconnected after you enter the correct passphrase to unlock the partition; you will then need to log in again via the OpenSSH session instead.

Before restarting the machine, it is necessary to copy the private key out of Dropbear SSH Server.
A reference command would be like this (assuming you use Linux client to connect to Ubuntu machine):
#Use SCP to copy a certificate from remote machine to local Linux machine
scp user@remote.server:/etc/initramfs-tools/root/.ssh/id_rsa ~/.ssh/remote_dropbear_id_rsa


If you are using a Windows client, you can download WinSCP to copy the file directly onto your desktop. You might notice an error while copying, due to file permissions. This means you need to set a proper file permission on id_rsa on the Ubuntu machine before you copy it out.

After that, reboot your Ubuntu machine and wait for the passphrase prompt. The Dropbear SSH server will also be loaded behind the scenes, waiting for an authenticated client to enter the passphrase remotely. Who is the authenticated client? The one holding the private key generated by the Ubuntu machine.

On Client computer:

Assuming you use a Windows client to connect, you will need two pieces of software to make it work:
Putty.exe
Puttygen.exe

Basically, the comprehensive PuTTY package includes both of these executables once you download and install its Windows installer from here.

A private key generated on Linux needs to be converted to a .PPK file before it can be imported into the PuTTY client; that's why we need Puttygen.exe. For instructions, please refer to this link, under the section "Converting the OpenSSH private key to Putty format". To log in remotely with the converted private key, refer to the section "Logging in Openssh using id_rsa.ppk".
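As a side note (my own addition, assuming the putty-tools package is installed on a Linux box), the same conversion can also be done from the command line, with file names following the SCP example above:

$ puttygen ~/.ssh/remote_dropbear_id_rsa -o ~/.ssh/remote_dropbear_id_rsa.ppk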

When the Ubuntu machine has rebooted and is waiting for the passphrase, you can log in remotely from the client computer with Putty.exe and the proper private key. Bear in mind that you need to log in as the root user to reach the Dropbear SSH console.

Once you have logged in to the Dropbear console, you can enter the passphrase from there with a command like:

$ echo -ne "**encryptionpassphrase**" > /lib/cryptsetup/passfifo


where **encryptionpassphrase** is the passphrase required to unlock the encrypted partition.

You may not notice anything on the console. If the passphrase matches, the Ubuntu machine unlocks the disk partition and starts to boot on the other side. You can close the Dropbear console and then log in again to the OpenSSH console of the Ubuntu machine with your regular user account from the client computer.

Note: I ran into an issue with the IP address handed out by the DHCP server. To prevent a different IP address from being allocated to the Ubuntu machine after a reboot, it is necessary to set up a static IP configuration so the IP address stays unchanged even after the Dropbear SSH server loads up. Please refer to this link for more information.




















Friday, January 6, 2012

Merge VMWare snapshot with base VM to reduce size of vmdk

One server hosting VMware Server 2.x had a guest reverted to a previous snapshot after a crash. Within the host, the snapshot vmdk and the base vmdk files now co-exist, even after I removed the snapshot via the GUI menu item. The total size of the vmdk files is huge, almost double the current disk size inside the guest VM.

To merge those vmdk files and reduce their size, the freeware VMware Converter really helps! By converting the existing VM into a new VM, the multiple .vmdk files are merged and reduced in size in the destination folder. The result is one single .vmdk file of a smaller size.

Before the conversion, the guest VM must be shut down completely. After it finishes, you will get a .vmx and a .vmdk file of a smaller size.

Furthermore, you may want to actually shrink the disk from within the guest VM. This is possible if you have installed the latest version of VMware Tools inside the guest VM.


Shrinking a disk is a two-step process:

In the first step, called wiping, VMware Tools reclaims all unused portions of disk partitions (such as deleted files) and prepares them for shrinking. Wiping takes place in the guest operating system.

The second step is the shrinking process itself, which takes place on the host. Workstation reduces the size of the disk's files by the amount of disk space reclaimed in the wipe process.


For the first step, follow the instructions below inside the running target guest VM:


1. Launch the vmware-toolbox.

Windows guest — double-click the VMware Tools icon in the system tray, or choose Start > Settings > Control Panel, then double-click VMware Tools.
Linux or FreeBSD guest — become root (su -), then run vmware-toolbox.

2. Click the Shrink tab.

3. Select the virtual disks you want to shrink, then click Prepare to Shrink.

A dialog box tracks the progress of the wiping process.

Note: If you deselect some partitions, the whole disk is still shrunk. However, those partitions are not wiped for shrinking, and the shrink process does not reduce the size of the virtual disk as much as it could with all partitions selected.

4. Click Yes when VMware Tools finishes wiping the selected disk partitions.

A dialog box tracks the progress of the shrinking process. Shrinking disks may take considerable time.

5. Click OK to finish.

For the second step, you are ready to issue a command at the terminal on Host machine:

Assuming the host machine is running Windows, you can issue a command like this (change the target vmdk path accordingly):

"C:\Program Files\VMware\VMware Server\vmware-vdiskmanager.exe" -k  "\*destination*\target.vmdk"


Reference: http://www.vmware.com/support/ws5/doc/ws_disk_shrink.html