Tuesday, December 6, 2011

PHP Accelerator for XAMPP on Linux

Since PHP 5.3 was released, it has been even harder to find a stable accelerator to boost performance. Recently, I searched through news and blogs for any breakthrough on PHP accelerators and found some good news. One of those packages, APC, has released new versions which claim to fully support PHP 5.3. So far it has had a good response from users regarding stability and performance. APC is also expected to ship as the built-in accelerator in PHP 5.4.

To install the new APC package from scratch, a few more things need to be installed on the Linux platform:

Packages required:
Autoconf: Whatever version is fine.
XAMPP: Version from 1.7.2 to 1.7.7 should be okay.
XAMPP development source: This must correspond to the version of the installed XAMPP package.
APC: The latest APC package available, for the best support.

To install Autoconf on Linux, like Ubuntu:
#
$sudo apt-get install autoconf


To install XAMPP:
Download and install XAMPP package from http://www.apachefriends.org/en/xampp-linux.html
#
$wget http://www.apachefriends.org/download.php?xampp-linux-1.7.7.tar.gz

$sudo tar xvfz xampp-linux-1.7.7.tar.gz -C /opt


Of course, you need to finish the basic setup and make sure Apache server and MySQL are running before you proceed to the following steps.

To install XAMPP development source:
Use the following link and modify the XAMPP package version as required.
Here, assuming we have installed XAMPP 1.7.7 package:
#
$wget http://www.apachefriends.org/download.php?xampp-linux-devel-1.7.7.tar.gz




$sudo tar xvfz xampp-linux-devel-1.7.7.tar.gz -C /opt




To compile and install APC source:
#Download and extract source files

#
$wget -O APC-latest.tar.gz http://pecl.php.net/get/APC

$tar xvfz APC-latest.tar.gz

$cd APC-*


#run phpize while in APC source directory
#
$sudo /opt/lampp/bin/phpize

#Configure the source, pointing --with-php-config at the XAMPP php-config file
#
$sudo ./configure --with-php-config=/opt/lampp/bin/php-config

#Compile and install
#
$sudo make

$sudo make install


Now, add a new line to php.ini and restart the Apache server to load the APC module:

#
$sudo sh -c "echo 'extension=apc.so' >> /opt/lampp/etc/php.ini"

$sudo /opt/lampp/lampp stopapache



$sudo /opt/lampp/lampp startapache


To check if APC is running, type the following command:
#
$sudo /opt/lampp/bin/php -r 'echo phpinfo();' | grep apc --color

If everything went well, you should see some information about the running APC module.
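Optionally, APC itself can be tuned in the same php.ini. The lines below are only a minimal sketch; the shared-memory size and TTL values are assumptions that depend on your workload, not recommendations from the APC project:

#
$sudo sh -c "echo 'apc.shm_size=64M' >> /opt/lampp/etc/php.ini"

$sudo sh -c "echo 'apc.ttl=7200' >> /opt/lampp/etc/php.ini"

$sudo /opt/lampp/lampp stopapache

$sudo /opt/lampp/lampp startapache

After the restart, the new values should show up in the same phpinfo() output used above.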





Thursday, November 24, 2011

Google Health: a valuable lesson on eHealth project

Just got the news that Google is going to tidy up its product portfolio before the year ends. Google Health has been chosen for termination, and it is not the only project to be shut down in 2012. Medical records always carry a legal burden that hinders developers in terms of data sharing and peer review; in many cases they are simply not ready to be opened to the public. One way or another, users need to trust their treating doctors and let them read the records.

Once the records go online, the service provider must take on the responsibility of protecting the patient's data, which is bound by laws such as HIPAA in the U.S. Different countries have different laws governing how clinical data can be shared and handled, and any mistake between the service provider and the users immediately turns into a legal issue. Even though Google has a team of lawyers to guard against potential lawsuits in the U.S., that only adds to the cost of protecting clinical data assets that the service provider does not legally own. As the wise reckon, Google tried to use a small team to conquer a large universe of data, but that universe of data was not there. It may be residing somewhere under the influence of the law; nonetheless, it is simply out of reach for the Internet giant.

Friday, November 18, 2011

Occasional blank page on IE6 during a site visit

Even though the market share of IE6 is shrinking around the world, the truth is that many organizations still deploy this particular browser version for LAN users. I always have doubts about the statistics provided by the IE6 Countdown website.

Anyway, I noticed some client computers are still running IE6 SP2, which is a bit better on security but still not flawless. IE6 is supposed to handle HTTP/1.1 connections well and always sends HTTP/1.1 requests. If you are not sure, have a look at the following URL and check the appropriate option to force IE6 to send HTTP/1.1 requests:

http://www.ehow.com/how_6516962_fix-internet-explorer-provided-dell.html

Okay, even if the client side is ready to handle HTTP/1.1, the proxy server or the web server at the far side may not send back an HTTP/1.1 response. In the case of an Apache 2.x server, the default settings suppress the HTTP/1.1 response and send back an HTTP/1.0 response instead, specifically for Internet Explorer.

You may find the settings under httpd-ssl.conf as follows:


#
BrowserMatch ".*MSIE.*" \
nokeepalive ssl-unclean-shutdown \
downgrade-1.0 force-response-1.0

According to EricLaw's opinion, this actually forces the Apache server to give an HTTP/1.0 response to any version of IE, even if IE sends an HTTP/1.1 request from the client side. This can lead to intermittent blank pages on IE6, which is error-prone when handling this kind of response from the web server.


Some people suggest simply commenting out the settings to avoid this problem. However, IE6 is not well-behaved enough if we drop the nokeepalive and ssl-unclean-shutdown directives from the filter; other problems may still arise on IE6 or later versions.

To overcome most of the problems across the different versions of IE, we put in more detailed criteria to selectively handle particular browser versions.

The recommended settings would be like these:


#
#IE Version 2 to 5 should be downgraded to HTTP/1.0 for compatibility
#IE version 1.0 does not even support HTTP/1.1, so it is ignored
BrowserMatch ".*MSIE [2-5]\..*" \
nokeepalive ssl-unclean-shutdown \
downgrade-1.0 force-response-1.0
#
#Prudent settings for newer version of IE
BrowserMatch ".*MSIE [6-9]\..*" \
nokeepalive ssl-unclean-shutdown


Please note the backslash-escaped dot ("\.") followed by ".*" in the expression.
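To confirm what the server actually returns for an IE user agent after the change, a quick check with curl will do; the status line of the response shows the protocol version (the host name and user-agent string below are just examples):

#
$ curl -k -I -A "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)" https://your.server.example/

An HTTP/1.0 status line for the IE agent but HTTP/1.1 for other agents means the BrowserMatch rules are being applied as intended.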

Apart from some performance degradation on IE, this should keep those browsers running fairly well at the client side. Further down the road, we might have to think about how to handle IE10, IE11 or IE12 if they are released. Hopefully these problems will be gone in newer releases of IE.

After all, Firefox, Safari and Chrome do not show any symptom like this. So, fingers crossed ;)


Monday, November 7, 2011

In-depth look into Path MTU Discovery

Recently, I have been searching for an ultimate solution to remedy situations where TCP traffic doesn't come through at the client side. The symptom can be an occasional blank page in the browser.

That pointed me back to studying root causes such as broken routers or misconfigured firewalls. For security reasons, some network administrators block ICMP traffic from the WAN, yet the ICMP (type 3, code 4) packet is a key element of the traditional Path MTU Discovery (PMTU-D) process initiated by the server.

Normally the MTU defaults to 1500 bytes in an ideal network environment, especially on a LAN, but in the real world the path MTU may vary. PMTU-D helps solve this in most cases. However, a firewall administrator may simply block ICMP packets for security purposes, even though those ICMP messages are the feedback from the path that lets the sender adjust the packet size so traffic gets through without fragmentation (the packets are sent with the DF flag set, so they are never fragmented en route).

Fortunately, an IETF working group has worked out another way to detect the path MTU without the need for ICMP packets. Packetization Layer Path MTU Discovery (PLPMTUD, RFC 4821) works at the packetization layer above IP, i.e., TCP or UDP, which makes it independent of ICMP messages. It starts probing with a small packet size set as the initial MSS, then progressively increases it until packet loss happens. The largest size that gets through is then used as the effective MTU for that particular connection.

On Linux, there are several network parameters for tweaking:

/proc/sys/net/ipv4/tcp_mtu_probing
/proc/sys/net/ipv4/tcp_base_mss
/proc/sys/net/ipv4/ip_no_pmtu_disc


Possible values of tcp_mtu_probing are:
0: Don't perform PLPMTUD
1: Perform PLPMTUD only after detecting a "blackhole" in old-style PMTUD
2: Always perform PLPMTUD, and use the value of tcp_base_mss as the initial MSS.

Setting tcp_mtu_probing to 1 means PLPMTUD kicks in only when a black-hole router is detected on the way to the destination IP.

The default value of tcp_base_mss is 512 and can usually be left alone.

ip_no_pmtu_disc defaults to 0, which means traditional PMTUD can be used at all times. Setting it to 1 makes the kernel skip the old-fashioned, ICMP-based way of detecting the MTU size entirely.
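As a quick sketch, switching PLPMTUD on in black-hole-detection mode looks like this (the second line just makes the setting survive a reboot; adjust to taste):

#
$ sudo sysctl -w net.ipv4.tcp_mtu_probing=1

$ sudo sh -c "echo 'net.ipv4.tcp_mtu_probing = 1' >> /etc/sysctl.conf"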

Ref:
http://kb.pert.geant.net/PERTKB/PathMTU
http://www.znep.com/~marcs/mtu/
http://kerneltrap.org/mailarchive/linux-net/2008/5/24/1928074/thread



Wednesday, November 2, 2011

Back to the old school days of TCP/IP

Just imagine that you are hosting your favorite web site, which loads smoothly at your end, whether at home, on the bus or in a coffee shop. Then one day you receive a complaint from an old school friend, who challenges that you might have really poor web authoring skills. Sometimes what he saw while visiting your web site was a blank page. Blank as in white, with no icon, no text, not even an error message; purely blank. Pressing the [Refresh] button makes things appear again. It sounds like you have been hosting a broken web server, yet from your friend's computer every other web site was fine except this one.

"Is the web site crippling enough for me to visit?!!", you might ask. No worry, there is nothing wrong with the web server. Bad client settings of web browser? Maybe.

However, you might not be able to work out how the web browser ends up displaying a blank page like this. In my opinion, it indicates that the web traffic doesn't actually come through. The Internet is a huge network coordinated by different routing equipment, linked up with fiber optics, copper wires or even wireless signals in the air. Over the years, the speed of the Internet has grown faster than ever before. People have changed their gear from wired 56K modems to USB or built-in Wi-Fi adapters capable of up to 300 Mbps, and they might forget those days when scientists were excited about data packets being sent successfully through a coaxial cable.

There are some articles reviewing one of the common TCP/IP features called TCP Extensions for High Performance, a.k.a. RFC 1323. You may be interested in having a look at this request for comments in bare text format, dating back to 1992:

http://www.ietf.org/rfc/rfc1323.txt

Among these, the TCP Window Scaling option contributes to faster packet transfer by letting the receive window grow dynamically beyond its classic limit. This boosts the data transfer rate to the next level, and a single server can significantly increase its network throughput by turning this feature on.

This has been seen as an industry standard for years. Still, people argue that it has not been implemented well on some routers and causes all sorts of problems in basic TCP/IP communication.

A blogger pointed out that some routers reset a vital parameter (the window scale factor) during TCP communication, leading to a misunderstanding between the computers at both ends.

http://inodes.org/2006/09/06/tcp-window-scaling-and-kernel-2617/

In this case, people are normally not aware of the problem until they realize they cannot access certain web sites and assume the sites must be down for service.

There might be broken routers or switches along the path to the target computer; they might be poorly managed until doomsday finally arrives, and it might have been a long time since the network administrator updated the firmware on them. So, what about your side? Can you do something to ease the problem even though you know their router won't last long? In the best interests of the end users, we still need to find a way to sort this out.

Okay. For TCP window scaling to work, the option must be turned ON at both ends; if either one turns it OFF, window scaling simply does not happen. Nowadays this basic TCP feature is most likely already on at the client side. Therefore, at the cost of a bit of server performance, you can turn off the TCP window scaling option on your server. That "bit" can make a huge difference to the user experience at the far side, although depending on the actual server load it may take longer to transmit the same amount of data with window scaling off.
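On a Linux server, a minimal sketch of turning the option off (and checking it) would be:

#
$ sudo sysctl -w net.ipv4.tcp_window_scaling=0

$ cat /proc/sys/net/ipv4/tcp_window_scaling

Remember this trades server throughput for compatibility, so only apply it when the broken equipment is out of your control.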

If none of this works for you, then check your browser settings or hardware such as the network adapter. You may find the reason there.

Reference materials can be found in the following links:

http://prowiki.isc.upenn.edu/wiki/TCP_tuning_for_broken_firewalls

http://wiki.squid-cache.org/KnowledgeBase/BrokenWindowSize

http://packetlife.net/blog/2010/aug/4/tcp-windows-and-window-scaling/

Monday, September 19, 2011

KeepAlive or not?

People have been discussing turning an Apache feature on or off, while I have been scratching my head wondering why clients at the far side suffer slow HTTPS connections behind a NAT router and firewall.

The HTTP/1.1 protocol provides the nice KeepAlive feature, which lets the Apache server keep an active connection open to serve ongoing requests from the same client. In the case of a prefork MPM Apache instance, that means a worker process is tied to this particular client and is not released until the client is completely served and lets go of the connection.

For dynamic web applications written in Perl, PHP or any other scripting language, a prefork process may be occupied by these resource-intensive requests, causing heavy load on the server side. KeepAlive prolongs the period during which a process works for one client only while others are waiting, even though that process could be serving many other clients in the same time. A quick remedy is to turn the KeepAlive feature off in a configuration file such as "httpd-default.conf", as sketched below.
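In a XAMPP layout, the sketch below shows where the directive usually lives (the exact path may differ on your installation):

#
$ grep -n "^KeepAlive" /opt/lampp/etc/extra/httpd-default.conf

# edit the line to read "KeepAlive Off", then restart Apache
$ sudo /opt/lampp/lampp stopapache

$ sudo /opt/lampp/lampp startapache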

A blogger, Steve, found a programmatic way to sort this out; it may not be as good as a setting embedded in the Apache server itself, but it is still a workable solution.

For a better understanding, please read this.



Linux VM TCP UDP Tuning

Network optimization for server VMs is a hot topic in recent discussions. Going back to the old-school optimization strategies, I find they are still useful in the VM world.

The following are Linux network settings for a high-throughput network interface card:


#
#
# Add these lines to /etc/sysctl.conf as appropriate
#

vm.swappiness = 10

net.core.wmem_max = 8388608

net.core.rmem_max = 8388608

net.core.rmem_default = 65535

net.core.wmem_default = 65535

net.ipv4.tcp_rmem = 4096 87380 8388608

net.ipv4.tcp_wmem = 4096 65536 8388608

net.ipv4.tcp_mem = 8388608 8388608 8388608

net.ipv4.route.flush = 1

net.ipv4.tcp_window_scaling = 1

net.ipv4.tcp_timestamps = 1

net.ipv4.tcp_sack = 1
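Once the file is saved, the values can be applied on the spot without a reboot:

#
$ sudo sysctl -p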

Thanks for the tips:
http://wwwx.cs.unc.edu/~sparkst/howto/network_tuning.php
http://fasterdata.es.net/fasterdata/host-tuning/linux/

Thursday, September 8, 2011

[RESOLVED] Slow Windows 7 Guest VM after upgrade to SP1

VMware has pointed out that VMware Workstation prior to version 7.1.4 is known to have problems with a Windows 7 host upgraded to Service Pack 1 (SP1).

So, what about a Windows 7 guest VM with SP1? My PC has VMware Workstation 7.1.3 installed, and only the administrator is allowed to upgrade it. Thanks to company policy.

After the guest VM was upgraded to SP1, it nearly hung the system for minutes on simple tasks, with high CPU usage all the time. People blame VMware's memory optimization feature, which is enabled by default to optimize guest VM memory consumption.

A temporary solution is to add the following line to the config.ini file located at C:\ProgramData\VMware\VMware Workstation:

vmmon.disableHostParameters = TRUE


This turns off the VM memory optimization feature and some other host-configurable features. After saving the change to config.ini, a clean reboot of the host system is recommended.

Now, your favourite Win7 SP1 Guest VM should be back on track with presumably high performance.

Reference:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1036185

Friday, August 5, 2011

Fedora yum update slow symptom

My netbook runs Fedora 14, which has been fine-tuned with a better wide-screen display driver, faster start-up and so on. Still, it took me more than an hour to do the first "yum update", a process that would finish in 10 minutes on Ubuntu Netbook Edition.

People suggest using a yum plugin called fastestmirror:
$
$su -c 'yum install yum-plugin-fastestmirror'


After installation, it did not really help to boost the download speed. I suspect it was still picking slow mirror sites for downloading.

Digging into /etc/yum/pluginconf.d/fastestmirror.conf, I found a useful option called "include_only". I uncommented the line and changed it to:

#
include_only=mirror.aarnet.edu.au

AARNET is supposed to be the fastest mirror in Australia; you might want to put in the fastest mirror in your own country instead.

Before executing yum update, it is better to remove the cache file timedhosts.txt, which stores mirror timing results from previous runs:

$
$su -c 'rm /var/cache/yum/i386/14/timedhosts.txt'


Now, run:

$
$su -c 'yum update'

You should be able to pull things from the repository mirror at top speed.

RBAC, pitfall or challenge?

A funny meeting was held a few days ago in which a topic raised clinical researchers' interest in building good clinical software using a proper role-based access control (RBAC) model. The RBAC model seems to me a traditional topic among university students in CS faculties, and it can become a never-ending story when people try to create a generic model for general clinical applications.

When it comes to managing tons of various e-health projects, people always get excited about finding an ultimate way to replicate an existing model and turn it into a new one. It sounds like a revolution: they imagine there should be a plug-in available for all the web projects they have been working on. A gadget like this might make developers scratch the hair off their heads. People might call it a generic authentication module, or a generic adapter for user authentication.

In a typical Active Directory based authentication model, local users are authenticated against a central AD server within the domain. How can we apply a similar concept to make it work across various clinical applications?

We can see those applications as middle-tier clients that need to authenticate the downstream users in order to hand out access rights and assign user privileges. The central authentication module can simply be a database instance which stores all the user privileges and functional access rights, while the webapps hosted on the Apache server make use of that database for central authentication. Different webapps will need their own set of tables to uniquely identify a user for their purposes. The good thing is that we can manage the authentication module in one central location, while all the other webapps contact this module to collect the information needed for authentication and for assigning proper access rights to the features and functions provided locally in each webapp.

I can only say this is not a new concept, but people will appreciate the convenience of building new web projects with an easy way to incorporate user authentication from a third-party database server. Furthermore, we can construct APIs to bridge applications developed in Java, C# or Python. This reminds me of the term "User Master Index": similar to the "Patient Master Index" that developers build for managing patient records, both serve the purpose of central management of user/patient identity shared among the applications, and the process of constructing the webapps becomes much more efficient. A rough sketch of what such a central schema might look like is shown below.
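Purely as an illustration (every table and column name here is invented for the sketch, not taken from any existing product), the central authentication database could start as small as this:

#
$ mysql -u root -p <<'SQL'
CREATE DATABASE IF NOT EXISTS central_auth;
USE central_auth;
-- one row per person: the "User Master Index"
CREATE TABLE app_user (
  user_id   INT AUTO_INCREMENT PRIMARY KEY,
  login     VARCHAR(64) NOT NULL UNIQUE,
  full_name VARCHAR(128)
);
-- roles shared by every webapp
CREATE TABLE role (
  role_id INT AUTO_INCREMENT PRIMARY KEY,
  name    VARCHAR(64) NOT NULL UNIQUE
);
-- which user holds which role in which webapp
CREATE TABLE user_role (
  user_id INT NOT NULL,
  role_id INT NOT NULL,
  webapp  VARCHAR(64) NOT NULL,
  PRIMARY KEY (user_id, role_id, webapp),
  FOREIGN KEY (user_id) REFERENCES app_user(user_id),
  FOREIGN KEY (role_id) REFERENCES role(role_id)
);
SQL

Each webapp then only needs a thin lookup against user_role at login time, which is exactly the kind of bridge an API layer in Java, C# or Python can wrap.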

Friday, July 22, 2011

Solution to CIFS mount failure

After the OS was upgraded to Ubuntu 9.10, the network drive mounted as CIFS stopped working.

Trying to mount again only produced the following error in the kernel log:
..
CIFS VFS: cifs_mount failed w/return code = -22

A Google search turned up one useful post.

Re-installing the smbfs package solves the problem:

..
$ sudo apt-fast install smbfs
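After that, a typical CIFS mount like the one below should succeed again (server name, share and credentials here are placeholders):

..
$ sudo mkdir -p /mnt/share

$ sudo mount -t cifs //fileserver/share /mnt/share -o username=myuser,password=mypass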

Friday, July 15, 2011

Ubuntu Jaunty 9.04 old package repository

The Ubuntu Jaunty 9.04 release has been deprecated, and its repository has been completely removed from the mirror servers and the Ubuntu servers as well. Since it is missing from the repository servers, this repository is being hunted by users who have stuck with this particular release for a long time. For people who forgot to upgrade the OS on their own, this is bad news: no more packages can be installed the regular way.

To keep using the old release repository for maintenance and last-minute updates, a forum post points to an interesting location:

http://old-releases.ubuntu.com/

which stores the repositories of all obsolete Ubuntu releases, including Jaunty 9.04. However, this requires some changes in the sources.list file for proper redirection.

Open /etc/apt/sources.list and edit as follows:

#
# deb cdrom:[Ubuntu-Server 9.04 _Jaunty Jackalope_ - Release i386 (20090421.1)]/ jaunty main restricted

# deb cdrom:[Ubuntu-Server 9.04 _Jaunty Jackalope_ - Release i386 (20090421.1)]/ jaunty main restricted
# See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
# newer versions of the distribution.

#deb http://mirror.aarnet.edu.au/pub/ubuntu/archive/ jaunty main restricted
#deb-src http://mirror.aarnet.edu.au/pub/ubuntu/archive/ jaunty main restricted
deb http://old-releases.ubuntu.com/ubuntu/ jaunty main restricted
deb-src http://old-releases.ubuntu.com/ubuntu/ jaunty main restricted


## Major bug fix updates produced after the final release of the
## distribution.

## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team. Also, please note that software in universe WILL NOT receive any
## review or updates from the Ubuntu security team.
#deb http://mirror.aarnet.edu.au/pub/ubuntu/archive/ jaunty universe
#deb-src http://mirror.aarnet.edu.au/pub/ubuntu/archive/ jaunty universe
deb http://old-releases.ubuntu.com/ubuntu/ jaunty universe
deb-src http://old-releases.ubuntu.com/ubuntu/ jaunty universe

## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team, and may not be under a free licence. Please satisfy yourself as to
## your rights to use the software. Also, please note that software in
## multiverse WILL NOT receive any review or updates from the Ubuntu
## security team.
#deb http://mirror.aarnet.edu.au/pub/ubuntu/archive/ jaunty multiverse
#deb-src http://mirror.aarnet.edu.au/pub/ubuntu/archive/ jaunty multiverse
deb http://old-releases.ubuntu.com/ubuntu/ jaunty multiverse
deb-src http://old-releases.ubuntu.com/ubuntu/ jaunty multiverse

## Uncomment the following two lines to add software from the 'backports'
## repository.
## N.B. software from this repository may not have been tested as
## extensively as that contained in the main release, although it includes
## newer versions of some applications which may provide useful features.
## Also, please note that software in backports WILL NOT receive any review
## or updates from the Ubuntu security team.
# deb http://us.archive.ubuntu.com/ubuntu/ jaunty-backports main restricted universe multiverse
# deb-src http://us.archive.ubuntu.com/ubuntu/ jaunty-backports main restricted universe multiverse

## Uncomment the following two lines to add software from Canonical's
## 'partner' repository.
## This software is not part of Ubuntu, but is offered by Canonical and the
## respective vendors as a service to Ubuntu users.
# deb http://archive.canonical.com/ubuntu jaunty partner
# deb-src http://archive.canonical.com/ubuntu jaunty partner

#deb http://security.ubuntu.com/ubuntu/ jaunty-security restricted main multiverse universe
#deb http://mirror.aarnet.edu.au/pub/ubuntu/archive/ jaunty-updates restricted main multiverse universe
deb http://old-releases.ubuntu.com/ubuntu/ jaunty-security restricted main multiverse universe
deb http://old-releases.ubuntu.com/ubuntu/ jaunty-updates restricted main multiverse universe

Save and exit.
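If you would rather not edit the file by hand, a sed one-liner can rewrite the mirror entries in one pass. This assumes your sources.list only points at the AARNET mirror and the official security archive, as in the listing above, so a backup copy is kept first:

#
$ sudo sed -i.bak \
  -e 's|http://mirror.aarnet.edu.au/pub/ubuntu/archive/|http://old-releases.ubuntu.com/ubuntu/|g' \
  -e 's|http://security.ubuntu.com/ubuntu/|http://old-releases.ubuntu.com/ubuntu/|g' \
  /etc/apt/sources.list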

After that, run the following to check that the package lists can be fetched correctly:
$
$ sudo apt-get update

You may now install those missing packages previously supported in Ubuntu release 9.04 before you decide to upgrade to a newer but still obsolete release 9.10.

Saturday, July 9, 2011

Ubuntu Upgrade from 9.04 to 10.04 - a CD/DVD way

Something interesting was found in a post about how to upgrade Ubuntu 9.04 to 9.10 and then on to 10.04. The method performs the Ubuntu upgrade using an ISO image of Ubuntu 9.10 downloaded from the release server.

Upgrading Using the Alternate CD/DVD

Download the alternate installation CD via:
$
$ sudo wget http://mirror.aarnet.edu.au/pub/ubuntu/releases/9.10/ubuntu-9.10-alternate-i386.iso

Burn the ISO to a CD and insert it into the CD-ROM drive of the computer to be upgraded.

If the ISO file is on the computer to be upgraded, you could avoid wasting a CD by mounting the ISO as a drive with a command like:
$
$ sudo mount -o loop ~/Desktop/ubuntu-9.10-alternate-i386.iso /media/cdrom0

A dialog will be displayed offering you the opportunity to upgrade using that CD.

Follow the on-screen instructions.
If the upgrade dialog is not displayed for any reason, you may also run the following command using Alt+F2:

$
$ gksu "sh /media/cdrom0/cdromupgrade"

After the upgrade to Ubuntu 9.10 completes successfully, a further upgrade to 10.04 can be done through the GUI Update Manager.

Ref: https://help.ubuntu.com/community/KarmicUpgrades

Sunday, June 19, 2011

Aspire ONE (Model 0751H) GMA500 widescreen display setup on Fedora 14

Before installing the problematic Poulsbo driver on Fedora 14 (freshly installed), please try the following first:

$ su -c 'yum clean all'
$ su -c 'yum update'

Reboot so that the updated kernel is running.

Install rpmfusion free and non-free repositories rpms:

$ su -c 'yum localinstall --nogpgcheck \
http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-stable.noarch.rpm \
http://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-stable.noarch.rpm'

Thanks to AdamW's articles, the correct steps are listed here:

To install Poulsbo driver:

$ su -c 'yum --enablerepo=rpmfusion-nonfree-updates-testing \
install xorg-x11-drv-psb'

To fix the problem of psb.ko going missing whenever the kernel is updated:

$ su -c 'yum --enablerepo=rpmfusion-nonfree-updates-testing \
install akmod-psb'

Reboot for the widescreen display to take effect.

Optimize the following sections in /etc/X11/xorg.conf:

Section "Device"
Identifier  "Videocard0"
Driver      "psb"
Option      "IgnoreACPI"
Option      "MigrationHeuristic" "greedy"
Option      "ShadowFB" "true"
EndSection

Section "Extensions"
Option     "Composite" "Enable"
EndSection

Section "Screen"
Identifier "screen1"
Device "Videocard0"
DefaultColorDepth 24
EndSection

Section "DRI"
Mode 0666
EndSection

Edit /etc/grub.conf as follows:

Add the following parameters to the end of the kernel command line:

elevator=noop mem=896m acpi_osi=Linux acpi_backlight=vendor

Reboot and see if the system performance is improved.

Friday, June 17, 2011

An Open Source version of RedHat Enterprise Linux

I have been looking for alternatives that mimic or offer a free version of an enterprise OS like Red Hat Enterprise Linux. Two have been found so far: CentOS and Scientific Linux.

RHEL6 has been out for a while but CentOS hasn't quite caught up with the latest version of RHEL, so I turned to Scientific Linux, which already has version 6. Part of SL6's appeal is also its backing by famous national laboratories around the world, such as Fermilab and CERN, which work on high-energy physics and use SL inside their IT infrastructure.

SL6 is actually rebuilt from the source RPMs of RHEL6, with the team tracking upstream security fixes, which keeps SL6 binary compatible with RHEL6. I now use SL6 on my Aspire ONE netbook as well.

The good thing I have experienced is the high quality of the software packages compared with some other open-source editions of Linux, which may crash and leave your system useless on occasion. You may not get the latest updates in time, but the stability and reliability of a typical enterprise-class system are assured.
Although "Scientific" is not a cool name to me, it is surely a good choice of enterprise-class OS for a production environment.

Site reference:
http://www.scientificlinux.org/

Wednesday, June 1, 2011

User Privileges for MySQL administration and automation

To automate MySQL administration, for example with an auto-backup script, user privileges play an important role: the script needs enough control over the MySQL server, with enough restriction to prevent misuse of that user role.

Assuming "mysqldump" and "mysqladmin" are going to be used in such automation, the following privileges should be clearly identified and applied to a particular user role for successful MySQL administration:


  • SUPER
  • SELECT
  • SHOW DATABASES
  • SHOW VIEW
  • LOCK TABLES
  • RELOAD
  • SHUTDOWN
  • PROCESS
  • REPLICATION SLAVE
  • REPLICATION CLIENT

Example of assigning privileges:
>c:\xampp\mysql\bin\mysql -u root -p --port=3306
Enter password: ************

mysql>CREATE USER 'replica'@'localhost' IDENTIFIED BY 'password';
mysql>GRANT REPLICATION SLAVE ON *.* TO 'replica'@'localhost';
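For a dedicated backup/automation account, the whole list above can be granted in one statement; the user name "backupuser" and its password are placeholders of my own, not a standard name:

mysql>CREATE USER 'backupuser'@'localhost' IDENTIFIED BY 'password';
mysql>GRANT SELECT, SHOW DATABASES, SHOW VIEW, LOCK TABLES, RELOAD, SHUTDOWN, PROCESS, REPLICATION SLAVE, REPLICATION CLIENT, SUPER ON *.* TO 'backupuser'@'localhost';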


For details of particular privilege, please refer to the manual page:
http://dev.mysql.com/doc/refman/5.5/en/privileges-provided.html

Converting EXT3 partition into EXT4 on Ubuntu

An Ext4 partition is the successor of Ext3, but it may not be enabled by a default installation. The Ext4 driver is backward compatible with Ext3/Ext2 partitions, and the conversion from Ext3 to Ext4 is straightforward.

Please back up the important data and software you have been using on Ubuntu Linux.

BEFORE DOING ANYTHING, PLEASE SHUTDOWN ALL LAMPP RELATED PROCESSES:
$ sudo /opt/lampp/lampp stop
To switch the existing Ext3 file system to the new Ext4 driver, it is necessary to edit the configuration file "/etc/fstab":
$ sudo nano /etc/fstab
Look for ext3 on the line that defines your disk and change it to ext4.

To ensure the right device to be converted, you may check the name of the device by issuing the following command:
$ df -h

Make a note of the device as you will need it in the next step. In this example, the device for the encrypted partition is assumed to be "/dev/mapper/encrypted".

Next reboot your system. This is required so that you switch to the Ext4 Kernel driver. Do not continue without rebooting first.

Enable Ext4 features:
$ sudo tune2fs -O extents,uninit_bg,dir_index /dev/mapper/encrypted
WARNING: Once you run this command, the filesystem will no longer be mountable using the ext3 filesystem!

After running this command (specifically, after setting the uninit_bg parameter), it is also important to run fsck to fix up some on-disk structures that tune2fs has modified:
$ sudo e2fsck -yfDC0 /dev/mapper/encrypted
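To confirm that the new features are actually enabled, you can list the filesystem flags afterwards (same example device as above):
$ sudo tune2fs -l /dev/mapper/encrypted | grep -i features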
Notes:

Running fsck will complain about "One or more block group descriptor checksums are invalid" - this is expected and is one of the reasons tune2fs asks you to run fsck.
By enabling the extents feature new files will be created in extents format, but this will not convert existing files to use extents. Non-extent files can be transparently read and written by Ext4.
If you convert your root filesystem ("/") to ext4, and you use the GRUB boot loader, you will need to install a version of GRUB which understands ext4. Your system may boot OK the first time, but when your kernel is upgraded, it will become unbootable (press Alt+F+F to check the filesystem).
If you do the conversion for the root fs on a live system you'll have to reboot for fsck to run safely. You might also need to add rootfstype=ext4 to the kernel's command line so the partition is not mounted as ext3.

WARNING:
It is NOT recommended to resize the inodes using resize2fs with e2fsprogs 1.41.0 or later, as this is known to corrupt some filesystems.
If you omit "uninit_bg" on the tunefs command, you can skip the fsck step.

Installing open-vm-tools on Ubuntu Guest VM

To help boost the performance of an Ubuntu guest VM, it is recommended to install the VMware tools. The problem is that they may not be fully compatible with every Ubuntu release. Ubuntu has its own packaged open-vm-tools in the repository, so why not give that a try?

Installing open-vm-tools from Ubuntu repository:

The VMware tools are part of open-vm-tools. Make sure that the "multiverse" repository is enabled and do:

# Install kernel headers so the modules will build
# (needed on a 10.04 guest running in a Fusion 3 host)
$ sudo apt-get install linux-headers-virtual
# Install the kernel modules
$ sudo apt-get install --no-install-recommends open-vm-dkms
# EITHER: install the tools for an X.Org (desktop) install
$ sudo apt-get install open-vm-tools
# OR: for a headless install
$ sudo apt-get install --no-install-recommends open-vm-tools

Direct Upgrade Ubuntu from 9.04 to 9.10

Upgrading Ubuntu from 9.04 to 9.10 has never been this easy:

Before doing anything, it is essential to back up any running database and web server for disaster recovery, and then shut them down completely.

You can easily upgrade over the network with the following procedure.

1. Start System/Administration/Update Manager

2. Click the Check button to check for new updates.

3. If there are any updates to install, use the Install Updates button to install them, and press Check again after that is complete.

4. A message will appear informing you of the availability of the new release.

5. Click Upgrade.

6. Follow the on-screen instructions.

Finally, reboot the system and see all changes.

There used to be a compatibility issue with the XAMPP package: people complained they couldn't get the XAMPP-based server to start after the reboot, and it did happen to me before. In a recent trial of the GUI upgrade directly from Ubuntu Update Manager, the problem seems to be solved. I even got another notice from Update Manager about upgrading Ubuntu to version 10.04 as well.

Tuesday, May 31, 2011

Resync MySQL Slave with Master

For various reasons, it is not unusual to find a MySQL Slave instance in a broken state during replication. The quick fix requires a good database backup of the Master instance. From the backup file, two parameters must be extracted to restore the data onto the Slave server:

bin-log filename: MASTER_LOG_FILE
bin-log position: MASTER_LOG_POS

Of course, overall system performance may be affected during the backup operation. However, the MySQL Master server can keep running as normal without any interruption.

You may find it difficult to find all the switches required for a successful online backup operation. Fortunately, an example is here.

Before anything happens, you'll find it easier to manage all those by opening two separate terminal consoles for both Master and Slave instances.

Terminal console for Master server:

The following command (recommended to run in a "sudo -s" shell) will dump all databases on the Master instance:
$ mysqldump -uroot -p -S /etc/mysql/mysql.sock \
--master-data --hex-blob --opt --single-transaction \
--comments --dump-date --no-autocommit \
--all-databases > target_backup.sql

Please make sure the path of socket file of MySQL Master server is correct.

With the "--master-data" switch, mysqldump records the bin-log filename and position in a comment, which is really useful for resynchronization afterwards.

With the "--single-transaction" switch, InnoDB tables are dumped from a consistent snapshot; combined with "--master-data", a global lock is only held briefly at the beginning of the dump to record the binary log coordinates.

With the "--hex-blob" switch, binary data in BLOB columns (for example images) is preserved inside the backup without corruption.

Once finished, you'll have a complete backup of the MySQL Master server named "target_backup.sql".

Terminal console for Slave server:

Now it is time to load the data onto the MySQL Slave server. Note that the socket file used here is "/etc/mysql2/mysql2.sock"; don't mix it up with the Master's socket.

$ mysql -uroot -p -S /etc/mysql2/mysql2.sock < target_backup.sql

Now, login to MySQL Slave server:

$ mysql -u root -p -S /etc/mysql2/mysql2.sock


Please check that a MySQL user named "slaveuser" with the REPLICATION SLAVE privilege exists on the Master instance, since the Slave will connect to the Master with this account.
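A quick way to verify the account (run on the Master console; the host part assumes the slave connects from localhost, as in this local two-instance setup) is:

mysql> SHOW GRANTS FOR 'slaveuser'@'localhost';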

Then, execute the following commands to resynchronize Slave server:

mysql> STOP SLAVE;
mysql> RESET SLAVE;
mysql> CHANGE MASTER TO MASTER_HOST='localhost',
    -> MASTER_PORT=3306,
    -> MASTER_USER='slaveuser',
    -> MASTER_PASSWORD='XXXXXXXX',
    -> MASTER_LOG_FILE='mysql-bin.XXXXXXXX',
    -> MASTER_LOG_POS=XXXXX;
mysql> START SLAVE;

For the corresponding values of MASTER_LOG_FILE and MASTER_LOG_POS, please read the header comments at the top of the "target_backup.sql" file.

To check the status, please issue the following command:

mysql> SHOW SLAVE STATUS;

It is useful to check the error log of MySQL Slave server to confirm whether everything is back on track.

Thursday, May 26, 2011

CeBIT Australia 2011

CeBIT Australia 2011 will be held from 31 May to 2 June 2011. Both technology enthusiasts and government sectors are involved in this huge event near Darling Harbour in Sydney. Judging by how many people were at the last CeBIT, it is definitely a good chance to meet top technologists and to listen to Julia on the progress of the big plan for the National Broadband Network.

Apart from the exhibition itself, there are hot topics to be discussed in the conferences. Major events include:
  • NBN Conference
  • Cloud Computing Conference
  • eGovernment Forum
  • eHealth Conference
  • Executive Briefing
  • Retail Conference
  • WebForward Conference
Can't wait to be there, and hopefully I'll see you at the exhibition!

Sunday, May 22, 2011

PHP PDF generation over HTTPS issue on IE6 with blank page returned [REALLY SOLVED]

This is a typical question raised all the time about the mysterious IE6 PDF-opening issue over HTTPS connections:


How can we successfully output a PDF dynamically over an HTTPS connection on various browsers, including IE6?


Whatever server-side language you use, you may encounter this yourself at least once. And as people try to find a definite reason for it, there seems to be more than one anyway.


Say, using PHP, people recommend adding appropriate headers to make the PDF stream open on the fly directly in the IE6 browser.



//size_of_stream is counted by bytes
//Display pdf text stream direct on the browser (IE bug fixed!)
//by setting the content-type


header("HTTP/1.1 200 OK");
header("Status: 200 OK");
header("Accept-Ranges: bytes");
header("Connection: Keep-Alive");


//Comment out for debugging purposes
//header("Cache-Control: public");
//Try setting 1 delta second for caching long enough for Adobe Addon to load PDF content
header("Cache-Control: public, max-age=1");
//Only need to specify User-Agent in Vary header as IE7 only accept that
//Default Vary header value is not welcomed at all
//Fixing IE7 bug for Vary header
header("Vary: User-Agent");
header('Pragma: public');
if (!is_null($size_of_stream)){header("Content-Length: ". $size_of_stream);}
header("Content-Type: application/pdf");
header('Content-Disposition: inline; filename="whatever.pdf"');
header("Content-Transfer-Encoding: binary\n");


This works in many cases, and across all the popular browsers like Firefox, Safari and Chrome. Yet there is still an exception: the PDF may not open properly on IE6, especially for a small PDF stream.


Searching around for a server-side solution, I was really disappointed as far as Internet Explorer 6 support is concerned.


Finally, I caught a glimpse of a comment posted in another forum which seems to work in almost every case, on every major version of Internet Explorer:


In Internet Explorer

  1. Select Tools
  2. Click Internet Options
  3. Select the Advanced tab
  4. Make sure "Do not save encrypted pages to disk" option near the bottom in the Security section is unchecked
This implies there is hardly any programmatic way to guarantee the PDF opens perfectly.

Option "Do not save encrypted pages to disk" should only be enabled by default on Windows Server, whereas on most desktop PC the followings should be applied for security reason:


  1. Go to the Tools menu
  2. Click Internet Options
  3. Click the Advanced tab
  4. In the "Settings" box, scroll down to the section labeled "Security" 
  5. click to check the box next to the "Empty Temporary Internet Files folder when browser is closed" option
  6. Click OK to finish



This option does not delete cookies, but it will clear your cache of other files when you close your browser.

Now the question is how we are going to get everyone in an organization to follow this. This makes me unhappy :(




Related links:


http://joseph.randomnetworks.com/2004/10/01/making-ie-accept-file-downloads/#comment-670


http://robm.fastmail.fm/articles/iecachecontrol.html




However, this is not the end of the story.


Being enthusiastic about my work and diligent about all this, I kept digging into what happens in a scenario I ran into while producing PDFs on the fly.


On IE6, some of my PDF streams display correctly regardless of whether "Do not save encrypted pages to disk" is enabled or not. When I checked the size of the PDFs that were generated successfully on the fly, they were all on the larger side: 120 KB, 80 KB or 76 KB.


I did find some developer posts about problems related to the size of a PDF stream displayed in IE. If I remember correctly, someone mentioned roughly 8 kilobytes as the minimum size for IE6.


When I went back and checked the problematic PDF file in Firefox, it showed a size of 4 kilobytes. Well, time to run an experiment.


Using PHP, it is easy to work with the PDF stream as a string. You can calculate the size of the stream with a function like strlen(), which returns the length in bytes; divide it by 1024 if you want the size in kilobytes.


$len = strlen($pdf_stream); //length in bytes


A simple check tells you whether the length is less than 8192 bytes. When it is, first echo the PDF stream:


echo $pdf_stream;


Then pad it with spaces afterwards:


for ($v=0;$v<=8192;$v++){
   echo ' '; //output space character
}


This makes sure the actual PDF output plus the extra padding spaces occupy at least 8 KB in total, for IE compatibility.


Now, the problem is really solved across the browsers:)



Monday, May 16, 2011

Libgcc_s.so.1 bug on Linux

After a series of package installations on a Linux machine, some errors started to pop up in the logs. Judging from a series of Google searches, this seems to be a widespread problem among software on the Linux platform, and it has been marked as a bug.

Error message would be like this:
...libgcc_s.so.1: version 'GCC_4.2.0' not found (required by /usr/lib/libstdc++.so.6)...

This can happen with all kinds of software already installed on the machine. People could barely find the cause until someone filed a bug for it and suggested a fix - REMOVE IT!

However, it is safer to rename or move the problematic library somewhere it can be retrieved later. I would just rename it like this:

$ cd /destination_folder...
$ sudo mv libgcc_s.so.1 libgcc_s.so.1.bak

Sometimes you may want to check the file dependency before you take action:
$ ldd /destination_folder.../libgcc_s.so.1

After that, the error should be gone.

Some useful materials can be found from the links below:
Fix "version `GCC_4.2.0' not found" for VMware 1.0.6 SErver and Ubuntu
`GCC_4.2.0' not found (required by /usr/lib/libstdc+

Wednesday, May 11, 2011

Base64 image fix for Internet Explorer

Although Internet Explorer 6 was released almost ten years ago, it is not likely that hospitals are rushing to adopt a newer version, or an even better alternative, of this piece of software.

For this famous version of Internet Explorer, a triviality like its lack of support for images embedded via the data URI scheme becomes annoying when nurses and doctors complain that they can't see the chart images, or even the (supposedly small) signature images, on the web pages.

Although data URI images are widely supported by the other web browsers like Chrome, Firefox, Safari and Opera, there is a strong reason for us to take care of IE: we have to accept that it is still the browser deployed on most of those ward computers, i.e., a niche market for the web developer.

For any early adopter who keeps using new, eye-catching techniques, there will always be a dilemma around backward compatibility. Looking around the sea of Google results, I found quite a lot of opinions on solving the problem. Most of them urge you to keep focusing on the browsers, and some developers suggest an intrusive way that modifies almost every web page in the project. I prefer this way:


It was found in a blog article from back in 2005, but it sheds light on a question:

Can we actually repair (fix) those images after they have shown up broken in an incompatible browser?


The answer is yes. 


Here is the recipe:

  • The jQuery core script file for integration.
  • A custom jQuery function in a JavaScript file.
  • A minimal PHP script file for Base64 image data processing.



By using client-side JavaScript and an external PHP file, we can redirect the inline Base64 image data into an external HTTP request and obtain a compatible image object back from server-side image processing.

For a current web stack like PHP (v5.3) + jQuery (v1.6), it is worth reviewing the source code from the above blog article and seeing what we can do with it in the new decade.


We would like to apply the fix with one function call, fixBase64Image(), once the web page has loaded completely. This function searches through the DOM and applies the fix to the target elements; a check on the browser type and version also avoids unnecessary work on unrelated browsers. The target elements here are the "img" elements whose image source property "src" contains a data URI stream like:



img src="data:image/gif;base64,..."



The client-side JavaScript function (assuming jQuery is already included in your project):

function fixBase64Image() {
 var BASE64_data = /^data:.*;base64/i;
 var BASE64_Path = "base64transfer.php";
 if ($.browser.msie){
  $("img").each(function(){
   // check matched image source
   if (BASE64_data.test($(this).attr("src"))) {
    // pass image stream data to external php
    var newSrc = BASE64_Path + "?" + ($(this).attr("src")).slice(5);
    $(this).attr("src",newSrc);
   }
  });
 }
 
};

The JavaScript function repairs the broken images by replacing the source path of each IMG element with a request to the external PHP script "base64transfer.php", with the Base64 image data carried in the HTTP request as the query string.


For the newly created external PHP file named "base64transfer.php", five lines of code are enough:

$data = splitexplode(";", $_SERVER["QUERY_STRING"]);
$type = $data[0];
$data = splitexplode(",", $data[1]);
header("Content-type: ".$type);
echo base64_decode($data[1]);


This PHP script simply converts the query string, which is expected to be a long string of Base64 image data, into raw image data and sends it back to the browser for display.


This can be considered a quick fix for the lack of support for Base64 inline images in Internet Explorer 6.





Wednesday, March 2, 2011

MySQL Replication

There are good materials around for setting up multiple MySQL instances on a local machine for data replication. The catch is that a clean backup of an InnoDB database usually means taking the server offline. For online backup, a Slave MySQL instance can be set up separately and shut down for backup purposes while the Master instance stays online to serve the users. Once the Slave finishes its job, it can be started up again and will synchronize with the Master for the period it was down. This is a cost-effective solution for near-real-time MySQL backup.

The official manual has the details about this:
http://dev.mysql.com/doc/refman/5.5/en/replication.html

For basic tutorials, you may find it here:
http://forge.mysql.com/wiki/Replication/Tutorial

I love this presentation, which gives brief but clear information about what has to be done:
http://assets.en.oreilly.com/1/event/2/MySQL%20Replication%20Tutorial%20Presentation%202.pdf

To set up data replication on an existing MySQL server, it is good to take a copy of the data directory first, rename it to something like data_slave, and keep it under the same parent folder as data. This gives a clean starting point for setting up the new database instance from the ground up.

You also need to make a duplicate of the my.ini already used by the master MySQL instance. Rename the duplicate to something like my2.ini and keep it in the same folder as the master my.ini file. As for the content of my2.ini, it is best to change the parameters before starting the instances.

In my.ini, we have some basic settings like these:


[mysqld]
server-id=1
port            = 3306 
socket          = "C:/xampp/mysql/mysql.sock"
basedir="C:/xampp/mysql" 
tmpdir="C:/xampp/tmp" 
datadir="C:/xampp/mysql/data"
pid_file="mysql.pid"
innodb_data_home_dir = "C:/xampp/mysql/data"
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = "C:/xampp/mysql/data"
#innodb_log_arch_dir = "C:/xampp/mysql/data"

#Important setting for error prone builtin innodb plugin
innodb_use_sys_malloc = 0

innodb = ON

#Necessary for master instance
log-bin = mysql-bin

log_error="mysql_error.log"


In my2.ini, we should have changed those to:

[mysqld]
server-id=2
port            = 3307 
socket          = "C:/xampp/mysql/mysql2.sock"
basedir="C:/xampp/mysql" 
tmpdir="C:/xampp/tmp" 
datadir="C:/xampp/mysql/data2"
pid_file="mysql2.pid"
innodb_data_home_dir = "C:/xampp/mysql/data2"
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = "C:/xampp/mysql/data2"
#innodb_log_arch_dir = "C:/xampp/mysql/data2"

#Important setting for error prone builtin innodb plugin
innodb_use_sys_malloc = 0

innodb = ON

#Necessary for slave instance
relay-log = mysqld-relay-bin

log_error="mysql_error.log"

The idea is that we are going to run two MySQL instances on the same machine, with the master instance listening on port 3306 and the slave instance on port 3307. For your convenience, you may change these to any other ports available on the server.

Supposing we already have the master MySQL instance up and running, we now need another one acting as a slave instance for backup purposes.

In a Windows environment, it is worth setting up a new Windows service running in the background for easier system management.

Before this, you may want to test whether my2.ini is ready to run as a service. To test it in debugging mode, use mysqld-debug.exe to start a process with the target configuration file (my2.ini):

mysqld-debug.exe --defaults-file="c:\mysql\bin\my2.ini"

If anything goes wrong, take a look at the error log file (mysql_error.log); you won't miss it.

When the test is finished and you are satisfied, you can create a new Windows service called mysql2 running in the background for easy management.

c:\mysql\bin\mysqld --install mysql2 --defaults-file="c:\mysql\bin\my2.ini"

After that, you may start this Windows service.

net start mysql2

It is recommended to use MySQL Workbench to manage the master and slave instances. For details, please read its manual.

You can download a copy of MySQL Workbench via: http://www.mysql.com/downloads/workbench/

On the Windows platform it is also recommended to download the no-install .zip copy. The reason is that the .msi installer requires .NET Framework 3.5 or above to be installed first, which may pull in extra pieces of software such as the IIS web server, while you might already have an Apache server instance on the machine. They would both race for port 80 anyway.

Once you can start both MySQL instances, you can start issuing commands in the MySQL consoles to make data replication happen between them.

It's good to take a look at this first: http://dev.mysql.com/doc/refman/5.5/en/replication-howto-slaveinit.html

You may also check out the steps here: http://www.howtoforge.com/mysql_database_replication_p2

At this point, you may want to dump the data out of the master instance and feed it into the slave instance for an easy start, as sketched below. Take a look at this: http://dev.mysql.com/doc/refman/5.5/en/replication-howto-masterstatus.html
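A minimal sketch of that hand-over on this XAMPP layout (the paths, the root account and the 3307 slave port follow the my2.ini example above, so adjust them to your setup) could be:

c:\xampp\mysql\bin\mysqldump -u root -p --master-data --single-transaction --all-databases > master_dump.sql

c:\xampp\mysql\bin\mysql -u root -p --host=127.0.0.1 --port=3307 < master_dump.sql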

To check the replication status on master instance, try the following commands in MySQL console:
mysql>unlock tables;


mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000020 |      107 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)



mysql> SHOW SLAVE HOSTS;
+-----------+------+------+-----------+
| Server_id | Host | Port | Master_id |
+-----------+------+------+-----------+
|         2 |      | 3306 |         1 |
+-----------+------+------+-----------+
1 row in set (0.00 sec)
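On the slave console, the corresponding health check is SHOW SLAVE STATUS; in MySQL 5.5 both Slave_IO_Running and Slave_SQL_Running should read Yes once the link is up:

mysql> SHOW SLAVE STATUS\G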


Once replication is running, it will continue even after a system restart on the server itself. Sounds like magic!