Monday, November 29, 2010

Build up a VMWare ESXi environment on a desktop PC

People have been discussing the topic of setting up an ESXi environment on a desktop PC, either at home or for testing purposes. After searching the forums for clues as to why my installation kept failing, the conclusion I reached was to buy an Intel e1000-compatible network adapter first. Even the cheapest Intel network adapter will do the job; trust me, you won't need anything else. From what I have seen, most forum users get stuck compiling not-quite-compatible drivers for their onboard NIC chipsets for use under ESX. This is not really their fault: VMware effectively makes an Intel network card a must-have component of the VMware infrastructure.

I finally figured out that the most up-to-date ESXi server (version 4.1, say) can actually recognize the problematic Realtek onboard NIC, but ONLY AFTER you insert an Intel e1000 network card into a slot. The point is that the ESXi installation will only succeed once it detects an Intel network adapter (obviously not the onboard NIC), and it then makes that adapter the default network interface for the management console. Some people simply advise keeping a few spare Intel network adapters in stock for ESX maintenance and support.

After several trials with different versions of the installation disc on my HP desktop PC, I found another trap: the so-called HP customized version of the ESXi Installable disc includes a timer program that causes a crash even after the installation has completed. You will probably see a crash screen with an error message once your ESXi server has been running for about ten minutes. So you are better off NOT using any customized version of the ESXi Installable disc. Please use the official installation disc from the VMware site.

As for memory, I could only manage 4GB of RAM, which is just about enough to run one resource-hungry Windows server plus a slim Linux-based NAT router appliance such as pfSense. pfSense has a nice web GUI for configuration over HTTP/HTTPS. The NAT router handles port forwarding and makes the most of a single public IP address on the WAN side; you can also use it as a portal to the VMs inside the ESXi environment.

Once the installation is done with the aforementioned hardware added to your desktop PC, you can connect from another computer via the vSphere client. Remember to use the IP address assigned to vmnic1, which is for management console connections only.



The architecture is simple enough. Assuming you have one onboard NIC (vmnic0) and one Intel network adapter (vmnic1), you are going to use vmnic1 for the ESXi management console and vmnic0 for the router.

Supposing you have created one virtual switch (vSwitch0) for the management console connection used by the vSphere client, and another virtual switch (vSwitch1) for the VM pool including the NAT router and other VMs, then you are good to go.
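On the ESXi host's console (or via the vSphere CLI), the two-switch layout can be sketched with esxcfg-vswitch. The switch names follow this post, but the port group name is a placeholder of mine, so treat this as a sketch rather than a recipe:

```shell
# Sketch only - run on the ESXi host; names follow the text above
esxcfg-vswitch -l                          # list existing virtual switches
esxcfg-vswitch -a vSwitch1                 # create the second switch for the VM pool
esxcfg-vswitch -L vmnic0 vSwitch1          # uplink the onboard NIC to the router/VM switch
esxcfg-vswitch -A "VM Network 2" vSwitch1  # add a port group for the VMs (name is a placeholder)
```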

Here is the big picture

Tuesday, October 5, 2010

IE6 gzip problem

Once upon a time, IE6 was so successful that people made it their default browser, and it was the recognized standard across organizations. Now we have many more choices, such as Firefox, Chrome and Safari. Although hardware is faster than ever, network speed still varies from area to area. To save bandwidth and server load, modern browsers support compression, which allows content to be transferred over the network in compressed form and then decompressed for processing. It has long been in doubt whether IE6 truly supports this feature, and there is plenty of criticism of IE6's gzip handling among forum users. Most developers simply skip compression to achieve the widest acceptance among users. Indeed, it is not just IE: other older browsers may not support gzipped content either.

The thing is that we can isolate those problematic browsers while still adopting this useful feature. Apache can be configured to work around the annoying gzip bugs like this:


# Insert filter
SetOutputFilter DEFLATE

# Netscape 4.x has some problems...
BrowserMatch ^Mozilla/4 gzip-only-text/html

# Netscape 4.06-4.08 have some more problems
BrowserMatch ^Mozilla/4\.0[678] no-gzip

# MSIE masquerades as Netscape, but it is fine
# BrowserMatch \bMSIE !no-gzip !gzip-only-text/html

# NOTE: Due to a bug in mod_setenvif up to Apache 2.0.48
# the above regex won't work. You can use the following
# workaround to get the desired effect:
BrowserMatch \bMSI[E] !no-gzip !gzip-only-text/html

# Don't compress images
SetEnvIfNoCase Request_URI \
\.(?:gif|jpe?g|png)$ no-gzip dont-vary
# Make sure proxies don't deliver the wrong content
Header append Vary User-Agent env=!dont-vary


IE6 masquerades as Mozilla/4, so it is fine to leave the no-gzip parameter applied to this kind of browser.

Proxy servers are also part of the problem when they start altering the User-Agent header information. From Apache's manual, you will find this:


The mod_deflate module sends a Vary: Accept-Encoding HTTP response header to alert proxies that a cached response should be sent only to clients that send the appropriate Accept-Encoding request header. This prevents compressed content from being sent to a client that will not understand it.
If you use some special exclusions dependent on, for example, the User-Agent header, you must manually configure an addition to the Vary header to alert proxies of the additional restrictions. For example, in a typical configuration where the addition of the DEFLATE filter depends on the User-Agent, you should add:
Header append Vary User-Agent


All these measures aim at keeping HTTP content delivery smooth and unchanged as it passes through proxy servers.
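One way to sanity-check the configuration above is to request a page while pretending to be IE6 and watch the response headers. The URL and User-Agent string below are placeholders, not from the original post:

```shell
# Ask for gzip while masquerading as IE6; see whether the server compresses
# and whether the Vary header is present
curl -sI -H "Accept-Encoding: gzip" \
     -A "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)" \
     http://example.com/ | grep -i 'content-encoding\|vary'
```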

Friday, September 10, 2010

VMware Tools on Ubuntu Guest

A VM guest running on its own drivers for the network adapter and memory management may experience performance drops and instability over time. The best way to gain better support is to install vmware-tools from vmware.com. Currently, the newest version of the repository is 4.1latest, which should suit the needs of all VMware products.

There is an open-source version, open-vm-tools, which is not supported by VMware, so you take your own risk if you go that way.

Useful information like official documentation from VMware is here: http://www.vmware.com/pdf/osp_install_guide.pdf

I found this useful, as it tells you where to find the packages and the public PGP key. Also, an important module, vmware-open-vm-tools-kmod-source, has to be compiled and installed inside the VM before the actual installation of vmware-tools. Otherwise, some of the VM services will be crippled and will not start.

Ubuntu has its discussion over this topic with some tips and instructions:
https://help.ubuntu.com/community/VMware/Tools

They mention a new command called apt-add-repository, which is not available in versions prior to 9.10, so you may need to work around this with other installation methods. The preparation also involves uninstalling any previous version of vmware-tools, as well as open-vm-tools.

Regarding apt-get PGP error, you may follow this link:
https://help.ubuntu.com/community/SecureApt

Finally, reboot the VM and check the vmware-tools running status:
$ /etc/init.d/vmware-tools status

For verification of kernel modules, please issue the following command:
$ /sbin/lsmod

You should expect modules such as vmxnet, vmblock and vmmemctl to be loaded after the reboot.
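To narrow the lsmod output down to just those VMware modules, a simple filter helps (the module names are the ones listed above):

```shell
# Show only VMware-related kernel modules
/sbin/lsmod | grep -E 'vmxnet|vmblock|vmmemctl'
```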

Hope this helps!

Friday, June 25, 2010

Take a glance at HI Service identifier

Healthcare Identifier (HI) Service consists of three types of service identifiers:
  • IHI (Individual Healthcare Identifier)
  • HPI-I (Healthcare Provider Identifier – Individual)
  • HPI-O (Healthcare Provider Identifier – Organisation)
Each identifier is a 16-digit reference number, created according to the International Standard [ISO7812: AS3523.1&2-2008]. You may get some ideas from the Wikipedia article on ISO/IEC 7812.

The structure of the identifier has been divided into three sections:

Issuer Identification Number (1st - 6th digit):
The first five digits are set to "80036" for the healthcare industry.
The 6th digit denotes the type of service identifier: IHI (defined as "0"), HPI-I (defined as "1") and HPI-O (defined as "2").

    Individual Account Identification (7th - 15th digit):
    It contains the unique reference number.

    Check Digit (16th digit):
    It can be calculated using IIN and IAI. The calculation is based on the Luhn formula modulus 10 "double-add-double" check digit [ISO7812].
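As a sketch of the Luhn "double-add-double" calculation described above, here is a small bash function. The sample payload in the last line is the classic Luhn test number, not a real healthcare identifier:

```shell
#!/bin/bash
# Compute the Luhn mod-10 check digit for a numeric payload (IIN + IAI).
luhn_check_digit() {
  local num="$1" sum=0 i digit
  local len=${#num}
  for (( i = len - 1; i >= 0; i-- )); do
    digit=${num:i:1}
    # Double every second digit, starting from the rightmost payload digit
    if (( (len - 1 - i) % 2 == 0 )); then
      (( digit *= 2 ))
      (( digit > 9 )) && (( digit -= 9 ))
    fi
    (( sum += digit ))
  done
  echo $(( (10 - sum % 10) % 10 ))
}

luhn_check_digit 7992739871   # prints 3, giving the full number 79927398713
```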

    To assign a unique IHI, the following identifying information will be used:

    • Name
    • Date of birth
    • Date of birth accuracy indicator
    • Sex

    Extra information may be included, like:

    • Address (always included in verified IHIs)
    • Birth plurality (where relevant, and for a designated period)
    • Birth order (where relevant, and for a designated period)
    • Date of death (if applicable)
    • Date of death accuracy indicator
    • Alias (multiple aliases allowed)
    • Trusted Data Source (TDS) identifier (Medicare Australia and the Department of Veterans' Affairs are identified as the initial TDSs)


    An interim number, which can be generated at the point of care for unidentified individuals, is called a provisional IHI or unverified IHI.

    An IHI can be classified as "verified", "unverified", "provisional", "deceased" or "retired". The HI Service will support classifications for IHIs that are suspected or confirmed to be duplicates or replicas. An Evidence of Identity (EOI) process will be carried out to verify the initial information for the IHI.

    The details of the concept of operation can be found here.

    Legislation for Healthcare Identifiers Service passed by Parliament

    Today, the legislation to set up the Healthcare Identifiers Service was passed by the Australian Parliament. It means people can have their own unique identifiers for e-Health records, and organizations like hospitals and clinics will have their own unique identifiers as well. This will dramatically facilitate data exchange among the various national healthcare information systems. Software vendors will find it easier to identify a unique patient among the sea of diverse eHRs and to collaborate with other healthcare information systems.

    What a great effort, mate!

    Media release

    Thursday, June 17, 2010

    Backup: a practical point of view

    In people's minds, a backup can mean a whole bunch of files copied and pasted somewhere. This is partly true: we can copy file-by-file or bit-by-bit. It really depends on the level at which the backup operates.

    Here I would like to sum up a few points, so that people may know more about the backup operations of a system.

    There are three kinds of backup operations to be carried out in order to keep a system running well. By running well, I mean the system runs with practically up-to-date data, managed application source code and working operating system files.

    Data backup makes sure the data collected by the system is practically up to date. The backup target is specifically the master database server. To maintain a high level of service uptime, the backup operation is carried out on an offline secondary (slave) database server which synchronizes its data with the online master. Running such a backup causes a drop in overall system performance, so schedule it carefully and avoid peak service demand. The principle is to keep the minimum number of copies, but the most recent ones you can.
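As a sketch of such a schedule, a crontab entry on the slave could dump off-peak and prune old copies. The times, paths and retention period below are placeholders of mine, not from the original post:

```shell
# m h dom mon dow  command  -- run at 03:00, outside peak hours
0 3 * * *  mysqldump --single-transaction --all-databases | gzip > /backup/db-$(date +\%F).sql.gz
# Keep only the most recent week of copies
30 3 * * * find /backup -name 'db-*.sql.gz' -mtime +7 -delete
```

Note the escaped `\%F` — percent signs are special in crontab entries.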

    Backup of application source code can be done with an SVN server, which provides version control over the source code and maintains different versions across the Production server, Test server and Development workstations. It ensures the source code on the Production server can be reverted to a particular version at any time, assuming that version is compatible with the current database structure.

    Backup of operating system files keeps up-to-date software and configuration settings for the currently workable state of the Production server. It mainly covers software updates, custom configuration settings and the scheduled tasks that maintain service uptime and data backup. This is extremely useful when a minor incident at the data centre does not trigger disaster recovery: disaster recovery covers events like fire, flooding and physical damage to equipment in the data centre, while lower-level incidents such as incorrect system configuration or a bad third-party software installation will not trigger that process, yet can still leave the Production server in a corrupted, non-workable state. In that case, the data centre has no choice but to restore the initial image of the server, which is outdated and needs all patches, settings and scheduled tasks applied again before it is workable.

    Regularly taking system snapshots of the Production server can shorten recovery time. In a virtual environment like VMware, snapshots of the system state can be taken on a regular schedule while the Production server is running, and reverting to a previous state is quite handy and can be efficiently handled by any VMware administrator. This is part of the business continuity offered by virtualization: the system administrator should schedule snapshots of the Production server at regular intervals, which ensures a reasonable system state can be recovered after the aforementioned incidents.

    All these backups ensure that the system state can be kept practically up to date after recovery from most incidents.


    Tuesday, June 15, 2010

    Import/Export BLOB data type correctly in MySQL database

    mysqldump is one of the built-in tools and is useful for day-to-day data import/export. To deal with complex data types like BINARY or BLOB, it is good to export them in hex code in order to avoid character set issues or corrupted data during import.

    The MySQL manual has already mentioned this:

    --hex-blob

    Dump binary columns using hexadecimal notation (for example, 'abc' becomes 0x616263). The affected data types are BINARY, VARBINARY, the BLOB types, and BIT.
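A minimal example of the round trip, assuming a database named mydb (the name and any credentials are placeholders):

```shell
# Export with binary columns rendered as hex, then re-import elsewhere
mysqldump --hex-blob mydb > mydb.sql
mysql mydb < mydb.sql
```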


    Thursday, June 10, 2010

    Gnome Network Manager - A must read note

    While trying to figure out why my NetworkManager settings were inconsistent with the settings in the original interface control file /etc/network/interfaces, I found this note:

    Why won't Network Manager manage my Networks?

    If your network connection is listed in /etc/network/interfaces, it is unavailable to NetworkManager with it's default setup (read this to see how to change it to manage these connections: http://wiki.debian.org/NetworkManager ). The best option for a standard setup is to open the file using

    sudo gedit /etc/network/interfaces 
    
    and comment out (ie put a # in front of) or delete every line in the /etc/network/interfaces file except the two with lo in them- they read

    auto lo
    iface lo inet loopback

    The thing is, you may really want to manage a static IP yourself. In that case, remember to add the new lines in /etc/network/interfaces, not in NetworkManager. Any non-DHCP (static) interface in /etc/network/interfaces will not be governed by NetworkManager, so you can ignore the alert icon on NetworkManager even though you have outside connectivity through your static interface settings.
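For reference, a static stanza in /etc/network/interfaces looks like this (the addresses are placeholders for your own network):

```
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
```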

    Fix MAC address to eth0 mapping issue on Debian based Guest VM

    When using VMware Server/ESXi in a production environment, a new MAC address is allocated to each new VM being added. The same happens on an existing VM when it is configured with a new network adaptor: a different MAC address is allocated to that adaptor, and it never duplicates an earlier one. Debian-based Linux will automatically map this new MAC address to a different network interface, i.e. eth? (where ? is always greater than 0). It will never map back to the original network interface, say eth0.

    Now check the file "/etc/network/interfaces"; the content should be something like:
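The original listing is missing here; on a fresh single-NIC install the file typically contains something along these lines (a reconstruction, not the author's exact content):

```
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp
```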



    Assuming you always have one network adaptor installed on your system and that one is expected to be eth0:

    To make sure a new adaptor's MAC address gets mapped to the eth0 network interface, there is one file of interest:
    "70-persistent-net.rules"

    I don't know what the 70 means, but this file always appears when you have a network adaptor installed on your system.

    Try issuing the following command to locate this file in your Linux environment:
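The original command is missing from the post; one way to locate the file (the search root below is where Debian and Ubuntu normally keep udev rules) is:

```shell
# Typically found at /etc/udev/rules.d/70-persistent-net.rules
find /etc/udev -name '70-persistent-net.rules'
```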


    This file is created by the udev service at startup, when auto-detection of network adaptors takes place. New content is appended to the bottom of the file automatically, and it determines the results you see when issuing the ifconfig command.



    In this case, eth2 is the active interface, not eth0.


    Basically, the file "70-persistent-net.rules" contains all the detected PCI network devices and maps interface names to them. Here is a typical one:
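The example file is missing from the post; a typical accumulated file, consistent with the eth0-to-eth2 story told here, would look something like this (the MAC addresses are made-up placeholders):

```
# PCI device (comment lines like this are generated by udev)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:aa:bb:01", KERNEL=="eth*", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:aa:bb:02", KERNEL=="eth*", NAME="eth1"
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:aa:bb:03", KERNEL=="eth*", NAME="eth2"
```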



    Obviously, the interesting part is the NAME key pair where eth0 is specified, while the assigned MAC address (physical or virtual) is shown after ATTR{address}.

    These settings accumulate over time as new network adaptors are added or the MAC address changes. A new entry with NAME="eth?" is appended to the end of the file, so from the aforementioned file we can tell that the MAC address has been changed twice, and eth2 has finally become the current active network interface.

    If you want to force a new MAC address to be mapped to the eth0 network interface, you can delete this file, "70-persistent-net.rules", with the following command:
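The command is missing from the post; assuming the standard Debian/Ubuntu location, it would be:

```shell
sudo rm /etc/udev/rules.d/70-persistent-net.rules
```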



    Then modify the content in "/etc/network/interfaces" to substitute eth2 with eth0:
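The edit is missing from the post; it can be done by hand in an editor, or with a one-liner such as:

```shell
# Replace every whole-word eth2 with eth0 in the interfaces file
sudo sed -i 's/\beth2\b/eth0/g' /etc/network/interfaces
```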



    After system reboot, you will find a brand-new "70-persistent-net.rules" file reappearing in the directory but the content will be:
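The regenerated file is missing from the post; after the reboot it should contain a single entry mapping the current MAC address to eth0, along these lines (the MAC address is a placeholder):

```
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:aa:bb:04", KERNEL=="eth*", NAME="eth0"
```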



    Now, the network adaptor is mapped to eth0 again.

    From the administrative point of view, in a VMware environment it is not good practice to manually reset the network interface mapping like this. It would be good if we could automate it. Here is how:

    • Locate the directory "/etc/init.d".
    • Pick a networking-related script such as "/etc/init.d/networking" and edit it.
    • Locate the stop) section within the script file "/etc/init.d/networking".
    • Add the following line towards the end of the section (right before the double semicolon ;;): rm -f /etc/udev/rules.d/70-persistent-net.rules
    • Save it.
    • If you find a symbolic link to this script file within the /etc/rc0.d/ directory, it means the stop) section of "/etc/init.d/networking" will run during the system shutdown process, i.e. "70-persistent-net.rules" gets deleted automatically. If not, you may need to create the symbolic link yourself within /etc/rc0.d/. Note that the link name needs a K prefix and a sequence number so that init runs the script with the stop argument, for example:
    $ sudo ln -s /etc/init.d/networking /etc/rc0.d/K35networking
    


    Now everything is ready. Try rebooting the system to let things happen.

    Whenever the MAC address is going to change at the next boot, the problematic "70-persistent-net.rules" file will already have been deleted during shutdown. A new "70-persistent-net.rules" file will then be created at startup with the MAC address correctly mapped to eth0. This makes sure a new MAC address is always mapped to eth0, under the basic assumption that only one network interface card is installed on the system.

    Tuesday, April 6, 2010

    Apache 403 forbidden for image files


    I think it's such a weird thing that it's actually related to a permission issue. A working solution can be found at http://ubuntuforums.org/archive/index.php/t-763605.html .
    What you could do is:

    Duplicate the folder and then reset all the required file permissions. Everything works FINE AGAIN!
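If duplicating the folder feels heavy-handed, the underlying fix is usually just granting the web server read (and directory-traverse) permission. A hedged sketch, with the path a placeholder for your own document root:

```shell
# Give everyone read access to files, plus execute (traverse) on directories only
sudo chmod -R a+rX /var/www/html/images
```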

    Friday, March 12, 2010

    File encryption on Linux

    aespipe is a package providing a convenient pipeline encryption tool for the command line.

    It's useful for instant encryption on the fly. Database backup is one of the most important daily tasks, and encryption helps the backup operator secure the backup, which can then be copied anywhere for storage without too much concern over data privacy.

    To back up a MySQL database with encryption, we can type the following command:
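The command is missing from the post; a plausible reconstruction of the pipeline (the database name, cipher choice and key file path are placeholders of mine) is:

```shell
# Dump and encrypt in one pass; /root/backup.key holds the passphrase
mysqldump --single-transaction mydb | aespipe -e aes256 -P /root/backup.key > mydb.sql.aes
```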






    Note: the -P parameter points to a key file which contains the password used for encryption. Of course, you'll need it again when you want to decrypt the file.

    To retrieve the original backup file from the encrypted copy, we can type this:
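The decryption command is also missing from the post; the reverse of the pipeline would be (file names and key file path are placeholders):

```shell
# Decrypt back to a plain SQL dump
aespipe -d -e aes256 -P /root/backup.key < mydb.sql.aes > mydb.sql
```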






    Friday, February 5, 2010

    Using Nautilus to browse networks

    There is a famous error that shows up while using the Nautilus File Manager to browse the network:



    Seems weird, especially when you are doing the same thing over a remote connection to your Ubuntu server.
    After some research among materials on the Internet, a solution was found that does not require installing any extra components, as suggested by other gurus. Try this in the terminal:
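The command itself is missing from the post; reconstructed from the explanation that follows, it would be something like:

```shell
gksu "dbus-launch nautilus" &
```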


    gksu is the command to open a GUI program from the terminal, while the & sign keeps it running as a separate process after initialization.

    dbus-launch does the trick, i.e. it enables network browsing.

    For detailed explanation, please read this:
    http://linux.die.net/man/1/dbus-launch