Thursday, November 24, 2011

Google Health: a valuable lesson for eHealth projects

Just got the news that Google is tidying up its product line-up before the year end. Google Health has been chosen for termination, and it is not the only project to be closed down in 2012. Medical records always carry a legal burden that hinders developers in terms of data sharing and peer review; in many cases they are simply not ready to be opened to the public. One way or the other, users need to trust their treating doctors and let them read the records. Once the records go online, the service provider must take on the responsibility of protecting the patient's data, which is bound by laws such as HIPAA in the U.S. Different countries have different laws governing how clinical data can be shared and handled, and any mistake between the service provider and the users immediately turns into a legal issue. Even though Google has a team of lawyers to guard against the lawsuits that could arise in the U.S., that only adds to the cost of protecting clinical data assets which the service provider does not legally own. As the wise would reckon, Google tried to use a small team to conquer a large universe of data, but that universe of data was not there. It may be residing somewhere under the influence of the law; nonetheless, it is simply out of reach for the Internet giant.

Friday, November 18, 2011

Occasional blank page on IE6 during a site visit

Even though the market share of IE6 is shrinking around the world, the truth is that many organizations still deploy this particular browser version for their LAN users. I always have doubts about the statistics provided by the IE6 Countdown website.

Anyway, I noticed that some of the client computers are still running IE6 SP2, which is a bit better in terms of security but still not flawless. IE6 is supposed to handle HTTP/1.1 connections well and normally sends HTTP/1.1 requests. If you are not sure, have a look at the following URL and check the appropriate option to force IE6 to send HTTP/1.1 requests:

http://www.ehow.com/how_6516962_fix-internet-explorer-provided-dell.html
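
If the same option has to be pushed out to a fleet of machines rather than clicked through Internet Options > Advanced, a small script along these lines should do it. The registry value names below are quoted from memory, so please verify them on a test machine first:

rem Enable "Use HTTP 1.1" and "Use HTTP 1.1 through proxy connections" for the current user
rem (value names to be confirmed in your own environment)
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v EnableHttp1_1 /t REG_DWORD /d 1 /f
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyHttp1.1 /t REG_DWORD /d 1 /f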

Okay, but even if the client side is ready to handle HTTP/1.1, the proxy server or the web server at the far end may not send back an HTTP/1.1 response. In the case of an Apache 2.x server, the default SSL settings make it suppress the HTTP/1.1 response and send back an HTTP/1.0 response instead, specifically for Internet Explorer.

You may find the settings under httpd-ssl.conf as follows:


#
BrowserMatch ".*MSIE.*" \
nokeepalive ssl-unclean-shutdown \
downgrade-1.0 force-response-1.0

According to EricLaw, this actually forces the Apache server to give out an HTTP/1.0 response to every version of IE, even if IE sends an HTTP/1.1 request from the client side. This can lead to intermittent blank pages on IE6, which is particularly error-prone when handling this kind of response from the web server.


Some people suggest simply commenting out the settings to avoid this problem. However, IE6 is not well-behaved enough for us to drop the nokeepalive and ssl-unclean-shutdown keywords from the match; other problems may still arise on IE6 or later versions.

To overcome most of the problems across the different versions of IE, we have put in more detailed criteria to tackle each browser version selectively.

The recommended settings would be something like this:


#
#IE versions 2 to 5 should be downgraded to HTTP/1.0 for compatibility
#IE 1.0 does not even support HTTP/1.1, so it is ignored
BrowserMatch ".*MSIE [2-5]\..*" \
nokeepalive ssl-unclean-shutdown \
downgrade-1.0 force-response-1.0
#
#Prudent settings for newer versions of IE
BrowserMatch ".*MSIE [6-9]\..*" \
nokeepalive ssl-unclean-shutdown


Please take note of the backslash and dots ("\..") in the expressions: the "\." escapes the literal dot after the major version number, and the ".*" that follows matches the rest of the User-Agent string.

Apart from some performance degradation on IE, this should keep those browsers running fairly well at the client side. Further down the road, we might have to think about how to handle IE10, IE11 or IE12 if they are ever released. Hopefully these problems will be gone in the newer releases of IE.
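
For what it's worth, the patterns above only match single-digit major versions. If a future IE10 or later still needed the same keep-alive workaround (which I hope it won't), the match could be extended with a sketch like this, assuming the User-Agent string keeps the "MSIE 10.0" style token:


#
#Sketch only: apply the prudent settings to a hypothetical IE 10 to 19
BrowserMatch ".*MSIE 1[0-9]\..*" \
nokeepalive ssl-unclean-shutdown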

After all, Firefox, Safari and Chrome do not show any symptoms like this. So, fingers crossed ;)


Monday, November 7, 2011

In-depth look into Path MTU Discovery

Recently, I have been searching for an ultimate solution to those situations where TCP traffic doesn't come through to the client side. The symptom could be an occasional blank page in the browser.

That pointed me back to studying the root causes: broken routers or misconfigured firewall settings. For security reasons, some network administrators block ICMP traffic from the WAN, yet the ICMP packet (type 3, code 4, "fragmentation needed") is a key element in the traditional Path MTU Discovery (PMTU-D) process initiated by the server.

Normally, the MTU defaults to 1500 bytes in a perfect network environment, especially on the LAN. That is not the case in the real world, where the path MTU may vary. PMTU-D helps solve this problem in most cases: packets are sent with the DF (Don't Fragment) flag set, and any router that cannot forward them without fragmenting sends back an ICMP "fragmentation needed" message, so the sender can shrink its packets until the traffic goes through without fragmentation. However, a firewall administrator may simply block ICMP packets for security purposes, and that feedback from the path never arrives.

Fortunately, an IETF working group has worked out another way to detect the path MTU without the need for ICMP packets. Packetization Layer Path MTU Discovery (PLPMTUD, RFC 4821) works in the layers above IP, i.e., TCP or UDP, which makes it independent of ICMP messages. It starts probing with a small packet size set as the initial MSS, then progressively increases the probe size until packet loss happens. The largest size that still gets through is used as the MTU for that particular connection.

On Linux, there are several network parameters for tweaking:

/proc/sys/net/ipv4/tcp_mtu_probing
/proc/sys/net/ipv4/tcp_base_mss
/proc/sys/net/ipv4/ip_no_pmtu_disc


Possible values of tcp_mtu_probing are:
0: Don't perform PLPMTUD
1: Perform PLPMTUD only after detecting a "blackhole" in old-style PMTUD
2: Always perform PLPMTUD, and use the value of tcp_base_mss as the initial MSS.

Setting tcp_mtu_probing to 1 makes sure that PLPMTUD kicks in only when a black-hole router is detected along the way to the destination IP.

The default value of tcp_base_mss is 512, and it can usually be left as it is.

ip_no_pmtu_disc defaults to 0, which means traditional PMTU-D is used as usual. Setting it to 1 seems to make the kernel skip the old-fashioned, ICMP-based way of detecting the path MTU altogether.
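
For a quick experiment, these knobs can be flipped at runtime with sysctl, or by echoing into the /proc entries listed above. The values below are just one example, enabling the black-hole fallback while keeping the default probe size, not a universal recommendation:


# Enable PLPMTUD only when a PMTUD black hole is suspected
sysctl -w net.ipv4.tcp_mtu_probing=1
# Keep the probe's starting MSS at the default
sysctl -w net.ipv4.tcp_base_mss=512

# Equivalent via /proc; add the same keys to /etc/sysctl.conf to survive a reboot
echo 1 > /proc/sys/net/ipv4/tcp_mtu_probing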

Ref:
http://kb.pert.geant.net/PERTKB/PathMTU
http://www.znep.com/~marcs/mtu/
http://kerneltrap.org/mailarchive/linux-net/2008/5/24/1928074/thread



Wednesday, November 2, 2011

Back to the old school days of TCP/IP

Just imagine that you are hosting your favorite web site, and it loads smoothly wherever you are: at home, on the bus or in the coffee shop. Then one day you receive a complaint from an old school friend, who challenges that you might have really poor web authoring skills. Sometimes what he sees while visiting your web site is a blank page. Blank, as in white, with no icon, text or even an error message, purely blank. Pressing the [Refresh] button makes things appear again. It sounds like you have been hosting a broken web server. Yet from your friend's computer, every other web site is fine except this one.

"Is the web site crippling enough for me to visit?!!", you might ask. No worry, there is nothing wrong with the web server. Bad client settings of web browser? Maybe.

However, you might not be able to work out why the web browser ends up displaying a blank page like this. In my opinion, it indicates that the web traffic doesn't actually come through. The Internet is a huge network coordinated by all kinds of routing equipment, linked up by fiber optics, copper wires or even wireless signals in the air. Over the years, the speed of the Internet has grown faster than ever before. People have changed their gear from a wired 56K modem to today's USB or built-in Wi-Fi adapters running at up to 300 Mbps. They might have forgotten the days when scientists were excited about data packets being sent successfully through a coaxial cable.

There are some articles reviewing one of the common TCP/IP features, TCP Extensions for High Performance, a.k.a. RFC 1323. You may be interested in having a look at this request for comments in plain-text format, dating back to 1992.

http://www.ietf.org/rfc/rfc1323.txt

Among those, the TCP Window Scaling option contributes to faster transfers by letting each end advertise a receive window larger than the 64 KB limit imposed by the plain 16-bit window field, and the window can then be adjusted dynamically to match the link. This boosts the rate of data transfer to the next level, and a single server can significantly increase its network throughput by turning this feature on.

This has been regarded as an industry standard for years. Still, people argue that it has not been implemented well on some routers, and that it causes all sorts of problems in basic TCP/IP communication.

A blogger pointed out that some routers mangle this vital parameter during the TCP handshake, which leaves the computers at both ends with a different understanding of the window size.

http://inodes.org/2006/09/06/tcp-window-scaling-and-kernel-2617/

In that case, people would normally not be aware of the problem until they realized that they couldn't access some web sites, and they would simply assume the sites were down for service.

There might be broken routers or switches along the path to the target computer; they might have been poorly managed until doomsday finally arrived. It might also have been a long time since the network administrator last updated the firmware on those routers. So, what about your side? Can you do something to ease the problem, even though you know their router won't last long? In the best interests of the end users, we still need to find a way to sort this out.

Okay. In order for TCP window scaling to work, the Window Scaling option must be turned ON at both ends; if either one turns it OFF, window scaling simply does not happen. Nowadays this feature is most likely already turned on at the client side. So, by sacrificing a bit of your server performance, you can turn the TCP Window Scaling option OFF on your server. What I call "a bit" could mean a huge difference in the user experience at the far side: depending on the actual server and the connection, it may take longer to transmit the same set of data packets with the option OFF, but at least the traffic gets through.
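
On a Linux server, for example, window scaling can be switched off with a single sysctl. This is just a sketch assuming a Linux box; other server operating systems have their own knobs for the same option:


# Disable TCP window scaling on the server side
sysctl -w net.ipv4.tcp_window_scaling=0

# To keep the setting across reboots, add this line to /etc/sysctl.conf:
# net.ipv4.tcp_window_scaling = 0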

If all of this doesn't work for you, then check the browser settings or the hardware, such as the network adapter, at the client side. You may find a reason there.

Reference materials can be found in the following links:

http://prowiki.isc.upenn.edu/wiki/TCP_tuning_for_broken_firewalls

http://wiki.squid-cache.org/KnowledgeBase/BrokenWindowSize

http://packetlife.net/blog/2010/aug/4/tcp-windows-and-window-scaling/