Friday, February 7, 2014

Mac OS X: Prevent .DS_Store file creation over network connections

Mac users may find it uncomfortable to leave traces behind when opening files or folders on a remote file server. Hidden files such as .DS_Store are created automatically and, sadly, without any acknowledgment to the user.

Here is a hint to disable this behaviour for remote storage access (Mac OS X 10.4 or later only):

To configure a Mac OS X user account so that .DS_Store files are not created when interacting with a remote file server using the Finder, follow the steps below:
Note: This will affect the user's interactions with SMB/CIFS, AFP, NFS, and WebDAV servers.
  1. Open Terminal.
  2. Execute this command:
    defaults write com.apple.desktopservices DSDontWriteNetworkStores true
  3. Either restart the computer or log out and back in to the user account.
If you want to prevent .DS_Store file creation for other users on the same computer, log in to each user account and perform the steps above—or distribute a copy of the newly modified com.apple.desktopservices.plist file to the ~/Library/Preferences folder of other user accounts.

Additional Information

These steps do not prevent the Finder from creating .DS_Store files on the local volume, and these steps do not prevent previously existing .DS_Store files from being copied to the remote file server.
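Existing .DS_Store files therefore have to be cleaned up by hand. A minimal sketch, assuming a POSIX shell; the mount point is passed in as an argument (e.g. /Volumes/share, a placeholder) and defaults to the current directory:

```shell
#!/bin/sh
# Sweep a directory tree for stray .DS_Store files and delete them.
# Pass the mount point of the network share as the first argument,
# e.g. /Volumes/share (a placeholder); defaults to the current directory.
TARGET="${1:-.}"
find "$TARGET" -name .DS_Store -type f -delete
```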
Disabling the creation of .DS_Store files on remote file servers can cause unexpected behavior in the Finder (click here for an example).
Ref: http://support.apple.com/kb/ht1629


Wednesday, January 22, 2014

Start headless Windows guest VM in Virtualbox

VirtualBox offers an open-source implementation of desktop virtualization technology for multi-OS users and developers.

The newest version of VirtualBox is v4.3.6 at the time of writing. Running a headless VM as a remote server instance leads to lower resource consumption and faster response on the host computer.

Before starting a headless VM, make sure the VM has Remote Display set up properly. Once a headless VM is running, the easiest way to access it is with an RDP viewer. This is particularly useful when you run a headless Windows VM.

To run a VM in headless mode, we can issue the following command:

$
$ VBoxHeadless --startvm "WHATEVER NAME IT IS FOR VM"


If you use the VirtualBox GUI instead, hold the SHIFT key while clicking the Start button for the target VM.

If you run into the famous NS_ERROR_CALL_FAILED error or a segmentation fault 11, take a look at the VM settings and turn off 3D acceleration under the Display section. You probably don't need 3D acceleration on a headless server VM anyway, so disabling it causes hardly any performance impact.


Tuesday, July 30, 2013

pfSense e-mail alerts sent to multiple recipients

pfSense is an all-in-one UTM appliance for firewall, security and network management purposes.



My lab doesn't allow a static IP to be assigned to my pfSense box, so I am stuck with a DHCP address that may change over time. I don't own the DHCP server, so I can't even reserve a fixed LAN IP address for this box. Each time something changes, I need to go back to the pfSense console and find out its WAN IP address myself. That means I can't find out the IP address remotely from home whenever the DHCP server changes it.

The latest version of pfSense is v2.2.2 at the time of writing.

pfSense comes with various kinds of installable packages, and one of the more useful ones is "mailreport".

To install mailreport, just log in to the pfSense web console, click [System]->[Packages], then choose to install mailreport.

Once finished, you can open it up by clicking [Status]->[Email Reports] option.

In there, you can create a new email report. Just remember to add one new item under the "Report Commands" section. This can be a simple Unix command like ifconfig, which shows the current IP address of the pfSense box. I personally set it to send out the report once per day in the morning, so I get the latest information I need before starting work.
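If the full ifconfig dump is too noisy for a daily email, the Report Command can be narrowed down with awk. A sketch; "em0" is a hypothetical WAN interface name, so substitute your own:

```shell
# Print only the IPv4 address lines instead of the full ifconfig dump.
# "em0" is a placeholder for the pfSense WAN interface.
ifconfig em0 2>/dev/null | awk '/inet /{print $2}'
```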

To configure email alert setting, click [System]->[Advanced] and then [Notifications] tab. Fill in correct SMTP information under Section SMTP E-mail.

But hang on! There is a problem with the field "Notification E-mail Address": it takes ONE email address only. That should be enough for test purposes alone.



However, my colleagues work with me, and I would like to send email alerts to all members of my group. The pfSense web console is written in PHP, so we need a little change inside the PHP source code itself.

Here are the changes we need to make for sending E-mail alerts to multiple recipients:

/etc/inc/mail_reports.inc


...
$mail->ContentType = 'text/html';
$mail->IsHTML(true);
$mail->AddReplyTo($config['notifications']['smtp']['fromaddress'], "Firewall Email Report");
$mail->SetFrom($config['notifications']['smtp']['fromaddress'], "Firewall Email Report");
$address = $config['notifications']['smtp']['notifyemailaddress'];
/* New lines start here */
    $addr_count = 1;
    $addr_array = preg_split("/[\s,]*(\,|\;|\:)[\s]*/", $address);
    foreach ($addr_array as $addr) {
        $mail->AddAddress($addr, "Report Recipient ".($addr_count++));
    }
/* New lines end here */

/* Comment out the line below */
//$mail->AddAddress($address, "Report Recipient");
$mail->Subject = "{$config['system']['hostname']}.{$config['system']['domain']} Email Report: {$headertext}";
$mail->Body .= "This is a periodic report from your firewall, {$config['system']['hostname']}.{$config['system']['domain']}.

Current report: {$headertext}
\n
\n";
...

After changing the PHP file, you can test it by entering several email addresses into the "Notification E-mail Address" field with a comma separator, like "recipient1@a.little.test.co,recipient2@a.little.test.co,recipient3@a.little.test.co". Just make sure no space is used between the email addresses, and all recipients should receive the test message immediately.
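The separator handling of the patched preg_split() can be sanity-checked from any shell; a rough equivalent that splits a recipient string on commas or semicolons, one address per line:

```shell
# Rough shell equivalent of the preg_split() above (addresses are made up).
ADDRS="recipient1@a.little.test.co,recipient2@a.little.test.co;recipient3@a.little.test.co"
printf '%s\n' "$ADDRS" | tr ',;' '\n'
```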

Fingers crossed;-)



Thursday, July 18, 2013

Compiling mod_auth_mysql.so under Mountain Lion OSX 10.8 with XAMPP for Mac (Apache v2.4)



First things first! Install your favourite Bitnami XAMPP for Mac package:

http://www.apachefriends.org/en/xampp-macosx.html

At the time of writing, the latest version is v1.8.2, which includes the newest Apache v2.4 as the web server. This is where the problem lies, and we are going to sort it out and compile a new mod_auth_mysql.so.

Make sure you set up XAMPP properly, with appropriate passwords created for Apache, MySQL and so on…

!!!REMINDER!!!
Before you start, make sure you stop the Apache server so nothing is affected during the compilation process.

The mod_auth_mysql source code is a bit too old to support the Apache v2.4 web server, so a little extra work is required to get APXS compilation working.

Download C source code of mod_auth_mysql:

http://sourceforge.net/projects/modauthmysql/files/modauthmysql/3.0.0/mod_auth_mysql-3.0.0.tar.gz

Extract the mod_auth_mysql-3.0.0.tar.gz file which gives you a folder called "mod_auth_mysql-3.0.0".

Using Terminal command:

$ cd mod_auth_mysql-3.0.0


You will see the source file named "mod_auth_mysql.c" and we are going to work on it.

Download the patch file into the folder and apply it right there as follows:

$
$ curl -O http://www.zoosau.de/wp-content/uploads/mod_auth_mysql-300-apache-22.patch


$
$ patch < mod_auth_mysql-300-apache-22.patch


The patch fixes some problems for APXS compilation. For Apache v2.4, we have to do some more editing in the file "mod_auth_mysql.c".

Open up editor for the file "mod_auth_mysql.c":

$ open -e mod_auth_mysql.c


Modify the lines as described below:

==========================================================
LINE 908:
  return r->connection->remote_ip;

Changed to:
  return r->connection->client_ip;
==========================================================
LINE 1273:
const apr_array_header_t *reqs_arr = ap_requires(r);

Changed to:
const apr_array_header_t *reqs_arr = NULL;
==========================================================
LINE 1275:
const array_header *reqs_arr = ap_requires(r);

Changed to:
const array_header *reqs_arr = NULL;
==========================================================
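For convenience, the same three edits can be applied non-interactively with sed. A sketch: the patterns match the quoted lines, so the exact line numbers don't matter, and a .bak copy of the original source is kept:

```shell
# Apply the three edits above in one go (run inside mod_auth_mysql-3.0.0).
if [ -f mod_auth_mysql.c ]; then
    sed -i.bak \
        -e 's/r->connection->remote_ip/r->connection->client_ip/' \
        -e 's/= ap_requires(r);/= NULL;/' \
        mod_auth_mysql.c
fi
```

The `-i.bak` form works with both GNU and BSD sed, which matters on OS X.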

Explanation:

It's a bit technical: as the Apache manual explains, Apache 2.4 explicitly removes the ap_requires() function from the core API.

Looking through the forums, I finally found something close to this problem. A possible fix:

http://www.mail-archive.com/pld-cvs-commit@lists.pld-linux.org/msg313889.html

As the function ap_requires() is removed from the Apache v2.4 API, we can set the related pointers to NULL in order to sidestep the problem in the mod_auth_mysql.c file.

For the solution of LINE 908, thanks to the blogger on http://cootos.sinaapp.com/?p=94 .

Now comes the actual compiling work.

Reference is here:
http://www.nilspreusker.de/?s=mod_auth_mysql

But we need to modify the paths to port them to those paths in XAMPP for Mac package.

Using Terminal command as follows:

$
$
$ sudo /Applications/XAMPP/bin/apxs -c -i -a -D -lmysqlclient \
 -lm -lz -I/Applications/XAMPP/xamppfiles/include/ \
 -L/Applications/XAMPP/xamppfiles/lib/ \
 -Wc,"-arch x86_64" -Wl,"-arch x86_64" mod_auth_mysql.c


It's a long command, split into four lines with trailing backslashes, so you can paste it into the Terminal as one block.

If things are going well, you should see the following output generated by the above command:

/Applications/XAMPP/xamppfiles/build/libtool --silent --mode=compile gcc -std=gnu99 -prefer-pic -I/Applications/XAMPP/xamppfiles/include/c-client -I/Applications/XAMPP/xamppfiles/include/libpng -I/Applications/XAMPP/xamppfiles/include/freetype2 -O3 -L/Applications/XAMPP/xamppfiles/lib -I/Applications/XAMPP/xamppfiles/include -I/Applications/XAMPP/xamppfiles/include/ncurses -arch x86_64  -DDARWIN -DSIGPROCMASK_SETS_THREAD_MASK -no-cpp-precomp -DDARWIN_10  -I/Applications/XAMPP/xamppfiles/include  -I/Applications/XAMPP/xamppfiles/include/apr-1   -I/Applications/XAMPP/xamppfiles/include/apr-1 -I/Applications/XAMPP/xamppfiles/include -arch x86_64 -I/Applications/XAMPP/XAMPPfiles/include/  -c -o mod_auth_mysql.lo mod_auth_mysql.c && touch mod_auth_mysql.slo
mod_auth_mysql.c: In function 'str_format':
mod_auth_mysql.c:891: warning: format '%d' expects type 'int', but argument 8 has type 'long int'
/Applications/XAMPP/xamppfiles/build/libtool --silent --mode=link gcc -std=gnu99 -Wl,-rpath -Wl,/Applications/XAMPP/xamppfiles/lib -L/Applications/XAMPP/xamppfiles/lib -I/Applications/XAMPP/xamppfiles/include -arch x86_64 -L/Applications/XAMPP/xamppfiles/lib -L/Applications/XAMPP/xamppfiles   -o mod_auth_mysql.la -arch x86_64  -L/Applications/XAMPP/XAMPPfiles/include/ -lmysqlclient -lm -lz -rpath /Applications/XAMPP/xamppfiles/modules -module -avoid-version    mod_auth_mysql.lo
/Applications/XAMPP/xamppfiles/build/instdso.sh SH_LIBTOOL='/Applications/XAMPP/xamppfiles/build/libtool' mod_auth_mysql.la /Applications/XAMPP/xamppfiles/modules
/Applications/XAMPP/xamppfiles/build/libtool --mode=install install mod_auth_mysql.la /Applications/XAMPP/xamppfiles/modules/
libtool: install: install .libs/mod_auth_mysql.so /Applications/XAMPP/xamppfiles/modules/mod_auth_mysql.so
libtool: install: install .libs/mod_auth_mysql.lai /Applications/XAMPP/xamppfiles/modules/mod_auth_mysql.la
libtool: install: install .libs/mod_auth_mysql.a /Applications/XAMPP/xamppfiles/modules/mod_auth_mysql.a
libtool: install: chmod 644 /Applications/XAMPP/xamppfiles/modules/mod_auth_mysql.a
libtool: install: ranlib /Applications/XAMPP/xamppfiles/modules/mod_auth_mysql.a
chmod 755 /Applications/XAMPP/xamppfiles/modules/mod_auth_mysql.so

Now it's time to modify a working httpd.conf for Apache v2.4.

In httpd.conf, find those lines with LoadModule * statements and add the following two statements at the bottom of that section:

#
#
#
LoadModule apreq_module modules/mod_apreq2.so
LoadModule mysql_auth_module modules/mod_auth_mysql.so
#


This makes sure the target libraries are loaded when Apache starts.

Before you start the Apache server again to test whether it runs, you have one more thing to do. In my experience, the Apache server may not start because of a libmysqlclient error. This is the case when no other MySQL client has been set up on the Mac before.

All you have to do is create a new symbolic link so that Apache v2.4 finds the right library:

$
$ sudo ln -s /Applications/XAMPP/xamppfiles/lib/libmysqlclient.18.dylib \
 /usr/lib/libmysqlclient.18.dylib
$



Now start your Apache v2.4 web server and see if it's running properly. If yes, you are ready to use mod_auth_mysql module again!

Happy coding!!!

PS: Additionally you'll need to make sure PHP Session is working.
In /Applications/XAMPP/xamppfiles/etc/php.ini, you need to uncomment a line like this:
session.save_path = "/tmp"


This should do the trick;-)







Thursday, June 13, 2013

Setup Cygwin CRON service on Windows platform


Assuming we are going to run a batch file containing scripts for daily operations, the Windows Scheduled Tasks feature offers useful scheduling for backup and cleaning operations on a Windows server. However, on the Windows platform you never quite get what you want when you want it.

It takes extra steps in configuring Local Policy to grant the user the "Log on as Batch Job" permission before a scheduled task can be added successfully. That might not always be possible from a SysAdmin's point of view. This reminds me of the useful Cygwin toolset, which runs Linux programs within Windows. Of course, you can also call Windows programs from there. Combined with cron, Cygwin lets you schedule daily tasks just like Windows Scheduled Tasks.


To install cygwin, please refer to the following article:
http://docs.oracle.com/cd/E24628_01/install.121/e22624/preinstall_req_cygwin_ssh.htm

*Reminder:

  • You might need to install an extra package for cron within the Cygwin setup.
  • During Cygwin setup, enter 'cron' in the search field and mark the package as INSTALL.
  • Please install Cygwin to the root directory of a local drive to avoid annoying path restrictions.


Once you've got Cygwin setup properly on Windows, you can start installing Cron service on Windows.

Here's the command to install new cron service:
#
#Within Command Prompt Terminal
C:\>cd c:\cygwin\bin
C:\cygwin\bin>cygrunsrv -I Cygwin_CRON_JOBS -p /usr/sbin/cron -a -n
#


On the Windows desktop, open Control Panel -> Administrative Tools -> Services.
Then look for the service named "Cygwin_CRON_JOBS" that we specified above.
Make sure this service is started and running properly.

For a local user, we can just define the scheduled tasks like this:
#
# Within normal opening Cygwin Terminal
$ crontab -e
# Minute   Hour   Day of Month       Month          Day of Week        Command  
# (0-59)  (0-23)     (1-31)    (1-12 or Jan-Dec)  (0-6 or Sun-Sat)              
    0        2          12             *               0,6           /cygdrive/c/somewhere/something.bat
#

However, this doesn't work in most cases because we haven't got sufficient permissions to run cron tasks on Windows.
For a domain user, some extra work needs to be done:
#
# Open Cygwin Terminal by right clicking the icon and selecting [run as Administrator]
# Within Cygwin Terminal ~
$ touch /etc/crontab
# Take ownership for SYSTEM user on this file
$ chown SYSTEM /etc/crontab
# To avoid the famous BAD FILE MODE error in Cygwin, use chmod
# cron stops working on a world-writable file for security reasons
# To stop the error message, make the file writable ONLY by its owner
$ chmod 0644 /etc/crontab
$ crontab -e
# Minute   Hour   Day of Month       Month          Day of Week        Command  
# (0-59)  (0-23)     (1-31)    (1-12 or Jan-Dec)  (0-6 or Sun-Sat)              
    0        2          12             *               0,6           /cygdrive/c/somewhere/something.bat
#

To run Windows program within crontab, you can start from the following path:
/cygdrive/c/...
/cygdrive/d/...

which point to the root directories of the local drives where your favourite commands and batch files are located.
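Cygwin ships a cygpath tool that performs this conversion for real; outside Cygwin, the mapping rule can be sketched with plain POSIX tools:

```shell
# Sketch of the Windows-path -> /cygdrive mapping (use `cygpath -u` in
# practice; the .bat path here is a placeholder).
winpath='C:\somewhere\something.bat'
drive=$(printf '%s' "$winpath" | cut -c1 | tr 'A-Z' 'a-z')  # drive letter, lower-cased
rest=$(printf '%s' "$winpath" | cut -c3- | tr '\\' '/')     # flip backslashes
echo "/cygdrive/$drive$rest"   # prints /cygdrive/c/somewhere/something.bat
```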
Make sure the path is correct by listing them like:
#
# Within Cygwin Terminal
$ ls -l /cygdrive/c/Windows
#


For additional information about user account created for cron Windows service, please read here:
http://www.davidjnice.com/articles/cygwin_cron-service.html

Hope you can run your favourite tasks in Cron now.

Friday, June 7, 2013

Convert .p12 bundle to server certificate and key files for Nginx

An SSL certificate is a must for today's e-commerce sites, and newly emerged web servers like Nginx have gained much attention for their performance when dealing with heavy traffic. Why do people choose Nginx?

Nginx's unique architecture makes it easy to handle a large number of concurrent connections at a time with low CPU and memory consumption, compared with IIS and Apache.

Nginx has also taken its place as a front-end proxy server for traditional web servers like IIS and Apache.

Now, back to the topic we are facing today.

Assuming you have received a .p12 file from your certificate provider, you might want to know more about what a .p12 file is.

According to Wikipedia, PKCS #12 defines an archive file format for storing many cryptography objects as a single file. It is commonly used to bundle a private key with its X.509 certificate, or to bundle all the members of a chain of trust.

This means a .p12/.pfx file contains everything we need to provide SSL services, like server certificates, CA root certificate, intermediate chain certificates and server private key.

Unlike a .pem file, a .p12/.pfx file is in binary form, so we cannot copy and paste its blocks in a human-readable format. A conversion tool like openssl is needed to extract the necessary files for a web server like Nginx.

Nginx is also sensitive to the order of the server certificate and the CA root and chain certificates in a bundled .pem file, so it may not start up properly with a .pem file that has been tampered with carelessly.

Here are the two commands to generate the necessary certificate bundle and server key files from a .p12/.pfx bundle file, which would otherwise be imported directly into an IIS web server.

#
#
#Generate certificates bundle file
> openssl pkcs12 -nokeys -in server-cert-key-bundle.p12 -out server-ca-cert-bundle.pem
#
#
#Generate server key file
> openssl pkcs12 -nocerts -nodes -in server-cert-key-bundle.p12 -out server.key
#
#

You might be asked to input the password that was set when the .p12 file was created.
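Before touching the real bundle, the whole round trip can be rehearsed with a throwaway self-signed certificate. A sketch; every file name and the pass phrase are made up:

```shell
# Create a throwaway key and self-signed cert, pack them into a .p12,
# then extract with the same two commands as above (non-interactive
# thanks to -passout/-passin; "secret" is a placeholder pass phrase).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout demo.key -out demo.crt -days 1
openssl pkcs12 -export -inkey demo.key -in demo.crt \
  -out demo.p12 -passout pass:secret
openssl pkcs12 -nokeys -in demo.p12 -passin pass:secret \
  -out demo-ca-cert-bundle.pem
openssl pkcs12 -nocerts -nodes -in demo.p12 -passin pass:secret \
  -out demo-server.key
```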

In nginx.conf, we can include these two files for the SSL connection:
#
#
server {
        listen   443 default ssl;
        server_name  localhost ...;

        ssl                  on;
        ssl_certificate      /some_where/ssl_cert/server-ca-cert-bundle.pem;
        ssl_certificate_key  /some_where/ssl_cert/server.key;

        ssl_session_timeout  5m;

        ssl_protocols  SSLv2 SSLv3 TLSv1;
        ssl_ciphers  HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers   on;
    
#...  
#


After that, Nginx should start up properly with HTTPS protocol ready for the web site.







Monday, February 25, 2013

Auto start Nginx on Linux reboot

After compiling Nginx from source on an older release of Ubuntu Server, some steps need to be performed so that the Nginx service starts automatically after a system reboot.

A startup script can be obtained here:
https://github.com/JasonGiedymin/nginx-init-ubuntu/blob/master/nginx

This script will be saved to /etc/init.d/nginx

However, some additional changes to this script file are required to make it run properly.

For the installed Nginx v1.2.7, a couple of changes are required to point to the correct paths of the related files, as follows:




# processname: nginx
# config:      /usr/local/nginx/conf/nginx.conf
# pidfile:     /usr/local/nginx/logs/nginx.pid
# Provides:    nginx


PATH=/usr/local/nginx/sbin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DAEMON=/usr/local/nginx/sbin/nginx

NGINX_CONF_FILE="/usr/local/nginx/conf/nginx.conf"




Once the changes are saved successfully, it's time to add Nginx to the list of startup services on the server.

First, change the file permission as follows:

sudo chmod +x /etc/init.d/nginx


Second, use the following command to update the default run level of the startup script:

sudo /usr/sbin/update-rc.d -f nginx defaults 



After these, Nginx service will be started by itself during next system reboot.

You can also perform various tasks manually via the following command syntax:

sudo /etc/init.d/nginx (start|stop|reload|status)






Thursday, December 6, 2012

IE6 SELECT element overlapping the front DIV element issue (Revisited) - An ultimate solution

IE6 was deprecated a very long time ago, yet some organisations still insist on using it as the standard browser on their Windows platform. IE6 + WinXP is the combination over which I have most often faced the dilemma of pushing the webapp forward versus going back to take care of legacy users.

"Well, customer first!", as the old man said, and it bit me.

Since we like an easy fix, we would like to deal with it by scripting. Actually, there's no easy fix at all until you know what you're doing.

A nice article with follow-ups by an ASP.NET developer can be found, and it applies just as well to projects using PHP, Python, Ruby or Perl. Anyway, we'll go the jQuery way.

People argue about using an iframe to overcome this situation:


  • People work on web applications for IE6.
  • People think about using Javascript to create dynamic pop-up menu with DIV elements.
  • People would like to create DIV overlay on top of the current web page which has SELECT elements underneath.
  • DIV overlay can cover any other web elements except SELECT boxes and it looks ugly.

The SELECT element turns out to be one of the ActiveX objects in the IE6 implementation. This doesn't apply to Firefox, Chrome or even Safari, as they don't use ActiveX technology.

ActiveX objects are set at a higher z-index than any other web elements, so they always appear on top. This causes a bad user experience when a DIV overlay menu cannot cover a SELECT element. Users may complain and say that your app has a bug. Well, a bug in IE6, indeed.

In the old way, we used to hide the SELECT elements underneath, but this gets out of control when the webpage contains previously hidden SELECT elements that you don't want to make appear again by accident.

As it's something about ActiveX objects, we go for the ActiveX way to resolve this.

Solution! Solution!

Among those ActiveX objects, IFRAME is fortunately one of them, too. So we can make use of a transparent IFRAME object to cover everything, including existing ActiveX objects like SELECT. When a new ActiveX object is created after an old one, it gets the higher z-index and sits on top.

Some people suggest simply embedding the IFRAME element in the DIV container, but it doesn't really work when the webpage is accessed over an SSL connection (HTTPS). Most secure web applications provide their services over HTTPS, and in that case the browser triggers an alert window because of the JavaScript statement in the src attribute.


<div>

...
<iframe id="transparent_frame" src="javascript:false;"></iframe>
</div>

Here comes a better approach:


<div>

...
<iframe id="transparent_frame" src="javascript:'<html></html>';"></iframe>
</div>

This displays nothing but an empty HTML page within the iframe element. It simply keeps the browser silent over HTTPS and does the things we want.

Of course, we need to set a CSS style for this IFRAME to make it transparent and cover the whole area of the DIV container (set your favourite width and height, please) so it will not affect any other elements within the DIV container.

CSS Stylesheet:


iframe#transparent_frame
{
    position: absolute;
    z-index: -1;
    filter: mask();
    border: 0;
    margin: 0;
    padding: 0;
    top: 0;
    left: 0;
    width: 9999px; /* Set desired width */
    height: 9999px; /* Set desired height */
    overflow: hidden;
}

Is the problem solved? Not yet, it's bloody IE!

One problem I came across was a memory leak in the IE browser when we add a new IFRAME dynamically with jQuery and then remove it. You can watch the memory consumption of IEXPLORE.EXE increase dramatically in Task Manager until the system becomes unresponsive.

To avoid this, make sure we are not creating a new IFRAME every time the DIV overlay is shown. Try reusing the same IFRAME to do the job and things will be fine.

In my experiment, a single IFRAME is created and reused until the next page reload, without any memory leak issue on the IE browser, which is good.


Ref:
http://weblogs.asp.net/bleroy/archive/2005/08/09/how-to-put-a-div-over-a-select-in-ie.aspx





Wednesday, November 21, 2012

Tuning and optimization of NGINX for 2 Million HTTP concurrent connections

As heard in the news about the crash of the Click Frenzy website, they were prepared to deal with 1 million connections, but it turned out to be more than 2 million.

While talking about a reliable IT infrastructure to support this level of user load at peak hours, did anyone mention NGINX acting as the front-end server to deal with it?

An article from Taobao.com described an experiment using NGINX to simulate 2 million concurrent connections to a single server. I was surprised that people shared it on Twitter even though it wasn't posted in English. Here's the translation:


Tuning and optimization of NGINX for 2 Million concurrent connections

For server performance, one of the vital indicators is the maximum number of queries per second, i.e., qps. For certain types of applications, however, we care more about the maximum number of concurrent connections than about qps, although qps remains one of the system performance indicators. The Comet server for Twitter belongs to this kind of application, and other examples like online chat rooms and instant messaging applications are similar in nature. For an introduction to Comet-type applications, please refer to the previous posts. In this kind of system, a lot of messages are produced and delivered to the clients, and client connections are held open even while idle. When a huge number of clients connect to the system, a long queue of concurrent connections is built up and held by the system.

First of all, we need to analyse the resource consumption of this kind of service, mainly CPU usage, network bandwidth and memory. To optimize the system, we need to find where the bottleneck is. Among the concurrent connections, some may not be sending data at all times and can be considered idle connections. These idle connections don't actually exhaust CPU or network resources; they merely occupy some space in memory.

Based on the assumptions above, the system should support the much higher number of concurrent connections desired, given an adequate amount of system memory. Could this happen in the real world? If so, it would also be a challenge for the CPU cores to support such a huge client group.

To start an experiment for the theory above, we need to have a running server and a huge number of clients as well. Also, server program and client program are required to finish the tasks.

Here’s the scenario:
  • Each client initiates a connection and sends out a request to the server.
  • The connection is on hold at the server side with no actual response.
  • This state is maintained until a maximum number of concurrent connections are reached, i.e., 2 million concurrent connections.


1. Preparation of Server-side
As per the assumption above, we need to have a server with large amount of memory for the deployment of Comet application by using NGINX. Here’s the specification of the server:

  • Summary: Dell R710, 2 x Xeon E5520 2.27GHz, 23.5GB / 24GB 1333MHz
  • System: Dell PowerEdge R710 (Dell 0VWN1R)
  • Processors: 2 x Xeon E5520 2.27GHz 5860MHz FSB (16 cores)
  • Memory: 23.5GB / 24GB 1333MHz == 6 x 4GB, 12 x empty
  • Disk-Control: megaraid_sas0: Dell/LSILogic PERC 6/i, Package 6.2.0-0013, FW 1.22.02-0612,
  • Network: eth0 (bnx2):Broadcom NetXtreme II BCM5709 Gigabit Ethernet,1000Mb/s
  • OS: RHEL Server 5.4 (Tikanga), Linux 2.6.18-164.el5 x86_64, 64-bit


The server-side program is quite simple. An NGINX-based Comet module can be written to accept the user request and put the connection on hold with no actual response. Apart from this, the NGINX status module can be used to monitor the maximum number of concurrent connections in real time.

Let’s tweak some system parameters on the server, within the file /etc/sysctl.conf:

net.core.somaxconn = 2048
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 4096 16777216
net.ipv4.tcp_wmem = 4096 4096 16777216
net.ipv4.tcp_mem = 786432 2097152 3145728
net.ipv4.tcp_max_syn_backlog = 16384
net.core.netdev_max_backlog = 20000
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_orphans = 131072

Then, issue the following command to make the new settings effective:
/sbin/sysctl -p
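Each key in sysctl.conf maps to a file under /proc/sys (dots become slashes), so any value can be read back to confirm it took effect:

```shell
# Read back two of the tuned values (Linux-specific /proc paths).
cat /proc/sys/net/core/somaxconn
cat /proc/sys/net/ipv4/tcp_fin_timeout
```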


Here are a couple of things to notice:

net.ipv4.tcp_rmem: this allocates the size of the READ buffer. The first value is the minimum, the second the default and the third the maximum. To keep per-socket memory consumption to a minimum, the minimum value is set to 4096.

net.ipv4.tcp_wmem: this allocates the size of the WRITE buffer. Both the READ and WRITE buffers affect the memory consumption of a socket within the kernel.

net.ipv4.tcp_mem allocates the memory available to TCP, in pages, not bytes. When memory usage exceeds the second value, TCP enters pressure mode, in which it tries to stabilize its memory usage; it leaves pressure mode when usage drops below the first value. If usage exceeds the third value, TCP refuses to allocate sockets for further connections, and log entries like "TCP: too many orphaned sockets" will show up in the dmesg output on the server.

Also, net.ipv4.tcp_max_orphans should be set to allow a maximum number of sockets held by no process. This has to be considered carefully, especially when we need to create a huge number of connections.

Apart from this, the server needs to be able to open a huge number of file descriptors, i.e., 2 million. There is a problem when setting such a large file descriptor limit, but no worries; let's talk about it later.

2. Preparation of client side
In this scenario we need to initiate a large number of client connections. However, the number of local ports on a computer is limited. The port range is 0 to 65535, with 0 to 1023 reserved; in other words, only the 64511 ports from 1024 to 65534 can be allocated. To reach 2 million concurrent connections, 34 client computers will be required.
Of course, we could use virtual IPs to reach this number of clients: with around 64500 usable ports per virtual IP, 34 virtual IPs would get things done. Fortunately, we have the company resources to carry out this experiment using physical machines.

The default range for automatic port allocation is limited, i.e., from 32768 to 61000, so we need to modify the parameter on the client computers, in the file /etc/sysctl.conf:

net.ipv4.ip_local_port_range = 1024 65535

Followed by this command to make it effective:
/sbin/sysctl -p

A program at the client side was written based on the libevent library; it continuously initiates new connection requests.

3. Adjustment of File Descriptors
Due to the large number of sockets required on both the client and server sides, we need to raise the maximum number of file descriptors.

At the client side, where we need to create more than 60,000 sockets, it's fine to set the limit to 100,000.

Within the file /etc/security/limits.conf, please add these two lines:
admin soft nofile 100000
admin hard nofile 100000

At the server side, we need more than 2,000,000 sockets. It would be a problem if we simply set something like "nofile 2000000": the server would no longer be accessible. After several attempts, it was discovered that the maximum value can only be set to 1 million. Checking the source code of kernel v2.6.25, there is a globally defined limit of 1024*1024, i.e., roughly 1 million. However, on Linux kernel 2.6.25 or later, this can be tweaked through /proc/sys/fs/nr_open. So, I couldn't wait to upgrade the kernel to 2.6.32.

For the blog post about “ulimit”, please visit:

After the kernel upgrade, we can optimize it by the following command:
sudo bash -c 'echo 2000000 > /proc/sys/fs/nr_open'

Now we can modify the nofile parameters again in the file /etc/security/limits.conf:

admin soft nofile 2000000
admin hard nofile 2000000
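Once the kernel and limits.conf changes are in place, the effective ceilings can be double-checked from a shell (Linux-specific paths):

```shell
# System-wide and per-process file-descriptor ceilings.
cat /proc/sys/fs/file-max   # system-wide maximum number of open files
cat /proc/sys/fs/nr_open    # per-process ceiling (kernel >= 2.6.25)
ulimit -n                   # soft limit for the current shell
```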

4. Final test
Throughout the test, dmesg showed a stream of messages about ports being allocated according to the newly tuned parameters set via sysctl. In the end, we completed a test of 2 million concurrent connections.

To minimize NGINX's memory consumption, the default value of "request_pool_size" was changed from 4k to 1k. Also, the default values of net.ipv4.tcp_wmem and net.ipv4.tcp_rmem were changed to 4k.

At the time of 2 million connections, the following data was collected via NGINX:

[screenshot not available]

At the same time, the condition of the system cache:

[screenshot not available]

5. Conclusion
Normally, parameters like "request_pool_size" need to be adjusted according to the real workload. The default sizes of net.ipv4.tcp_rmem and net.ipv4.tcp_wmem should be changed as well.





Wednesday, November 7, 2012

XRDP Login with Black Screen issue

After trying a couple of things on the Linux server and finding nothing wrong with the configuration of the XRDP server, I wondered what exactly caused this, as I used to log in properly without any issue using the RDP client on Mac OS X.

Is it something wrong with an update on Microsoft RDP client, or something deep inside my connection profile?

Then I found a reply on a forum that pointed me in the right direction.

According to the replies in the forum
http://sourceforge.net/projects/xrdp/forums/forum/389417/topic/4880098,
it looks like a problem with the domain setting in my connection profile, which had been remembered since I connected to some other servers where a domain needed to be specified. In Preferences, I simply cleared the Domain field, and now my XRDP session connects again with no further issue.


Instead of the black screen, here's our login session window again!


The same thing may happen with the Windows RDP client as well. So, please check the Domain field when logging in to XRDP next time.