Tuesday, December 20, 2016

Solution to Homebrew upgrade issue after macOS Sierra 10.12.x Upgrade

For anyone who has problems updating the Homebrew repository after upgrading to macOS Sierra 10.12:

$
$ cd "$(brew --repo)" && git fetch && git reset --hard origin/master && brew update
$ brew update
$ brew upgrade
$ brew cleanup
$ brew cask install --force $(brew cask list)
$ brew cask cleanup
$

If you changed the permissions on the folder /usr/local while installing Homebrew previously on El Capitan 10.11, you might want to run the following command to revert that change, since the new Homebrew doesn't need special permissions over this folder:

$
$ sudo chown root:wheel /usr/local
$


Saturday, December 17, 2016

Quickfix: Boot-up screen stuck with a frozen spinner after macOS Sierra upgrade



After a major upgrade to macOS Sierra, my MacBook didn't boot up anymore. It froze on the grey screen with a dead spinning wheel.

After checking for non-Apple kext modules in Safe Mode and fixing disk errors with Disk Utility in Recovery Mode, I still had almost no clue what was happening to the hard drive.

Here are 7 steps which can be useful for anyone who cannot resolve the boot-up problem after upgrading to Sierra.


  1. Shut down your Mac.
  2. Press the power button to start up your Mac.
  3. Immediately hold down Command-S to enter single-user mode.
  4. At the prompt, type /sbin/fsck -fy and press Return.
  5. Type /sbin/mount -uw / and press Return.
  6. Type touch /private/var/db/.AppleSetupDone and press Return.
  7. Type exit and press Return.

Through Steps 1 to 3, you should have launched your Mac in single-user mode.

In Step 3, the screen shows the raw boot messages, which reveal what is running behind the scenes.

Steps 4 to 7 will help check for file system consistency and remount the boot volume.
Regarding Step 6, what's the point of creating the empty file .AppleSetupDone?
Every time OS X boots, it checks for the existence of a file known as .AppleSetupDone. This empty file is created after the completion of Setup Assistant. It doesn't exist on a brand-new, out-of-the-box Mac, nor on one that has had a clean installation of OS X.
If this file is missing, OS X will assume that Setup Assistant has never been run and will launch it as soon as OS X boots.
Setup Assistant is also run with root privileges, which is why it can create a new user account with administrator privileges without the need for any authorisation. Touching .AppleSetupDone therefore tells the system that setup is already complete, so Setup Assistant is skipped.
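As an illustration of the mechanism (a hypothetical sketch, not Apple's actual boot code), the check behaves roughly like this:

```shell
# Hypothetical sketch of the boot-time check: if the marker file exists,
# Setup Assistant is skipped; if not, it is launched with root privileges.
setup_check() {
    if [ -e "$1" ]; then
        echo "setup done - skipping Setup Assistant"
    else
        echo "no marker - launching Setup Assistant"
    fi
}
setup_check /private/var/db/.AppleSetupDone
```

Running `touch` on the marker forces the first branch on the next boot, which is exactly what Step 6 relies on.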

Since I had already gone through the Setup Assistant wizard before the boot-up problem was encountered, I did not see Setup Assistant again.

After Step 7, my MacBook booted into the login screen I usually see. I quickly logged in, then restarted it to see if things were working again. It booted up quickly and successfully to the login screen and let me log in as usual.

As of writing, I can boot back into my Mac OS Sierra v10.12.2.


Monday, December 12, 2016

SDLC approach: SCRUM & Agile

You may have heard something about the SDLC through terms like "Waterfall" and "Agile".

People talking about Agile or Agile-like development might also talk about Scrum. Here are a few digests from the material I came across:

For the term "Agile" as people normally say:

The Agile Movement (www.agilemethodology.org) notes that Agile is not in itself a methodology, but rather an alternative to traditional project management to try and help teams respond to unpredictability through incremental, iterative work cadences and empirical feedback. Agile methods are therefore alternatives to waterfall, or traditional sequential development. 

So, how about Scrum?

Scrum is described as the most popular way of using Agile and Agile-like methods and has based its assumptions around that approach. The use of the term Agile and/or Agile-like is intended to imply that Customer may be more interested in the principles of Agile as tailored to the project, rather than in strict adherence to any particular form of Agile.
Scrum is an iterative and incremental agile software development framework for managing product development.


In simple words, Agile is a mindset and Scrum is a method.

To get Scrum fully implemented and make software development truly agile, companies might need to adopt the necessary changes in management style, organisational culture, running processes and the way projects are executed.



Thursday, December 1, 2016

Shrink the size of Docker.qcow2 to free valuable diskspace

Since my development work moved to the Docker platform for Mac, machine setup has become quicker and easier. Now there is a stable version of Docker for Mac available, which is great! However, the easiness comes with a price of file-size inflation on the development machine. On a Mac, it's not uncommon to find that we are running out of disk space. After Docker for Mac had been in place for a couple of months, there was a surprise in the size of its qcow2 file:


$ ls -l ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/
-rw-r--r--  1 user  staff   46GB Nov 1 14:47 Docker.qcow2
-rw-r--r--  1 user  staff    64K Nov 1 14:44 console-ring
-rw-r--r--  1 user  staff     5B Nov 1 14:44 hypervisor.pid
-rw-r--r--  1 user  staff     0B Nov 1 12:34 lock
drwxr-xr-x  4 user  staff   136B Nov 1 12:34 log/
-rw-r--r--  1 user  staff    17B Nov 1 14:44 mac.0
-rw-r--r--  1 user  staff    36B Nov 1 12:34 nic1.uuid
-rw-r--r--  1 user  staff     5B Nov 1 14:44 pid
-rw-r--r--  1 user  staff   141B Nov 1 14:44 syslog
lrwxr-xr-x  1 user  staff    12B Nov 1 14:44 tty@ -> /dev/ttys001


As you can see above, Docker.qcow2 has grown to 46GB, which eats up almost half of the free space on the SSD drive. I had been regularly removing unused images and containers, but even so, the size of Docker.qcow2 didn't actually stop growing.

In theory, the Docker.qcow2 file keeps the layers and containers in use by the Docker Engine. The fact is that Docker doesn't come with a cleanup mechanism for all of this: even when we pull new images for testing and then delete them, the data remains inside Docker.qcow2 and is not erased. This is why we see a huge file sitting on the hard drive as time goes by.

You may try deleting the Docker.qcow2 file, but that destroys everything you've built inside the containers. After a restart of the Docker engine, the file may also grow back towards its previous size as images and containers are pulled and rebuilt.

Using the qemu utilities, we can shrink the .qcow2 file effectively. Stop Docker for Mac first so the file is not in use, then:
$ brew update && brew install qemu
$
$ cd ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/
$ mv Docker.qcow2 Docker.qcow2_backup
$ qemu-img convert -O qcow2 Docker.qcow2_backup Docker.qcow2
$
$

Once we have confirmed Docker engine is up and running again, we can remove the backup file:
$ rm ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2_backup


Another way to reclaim used space within the .qcow2 file is using docker-gc:
https://github.com/spotify/docker-gc

You can follow the instructions there to build a custom Docker image matching your current Docker version number, deploy it, and run the cleanup command like this:

$ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -v /etc:/etc spotify/docker-gc


Reminder: The docker-gc container requires access to the docker socket in order to function, so we need to map it when running this command. The /etc directory is also mapped so that it can read any exclude files that we have created.

Once we git clone the source of docker-gc, we can start modifying it to our needs.

To check out the source:
$ git clone https://github.com/spotify/docker-gc.git


To build the source and load the image into the local Docker engine:
$ docker build -t spotify/docker-gc .


Combining docker-gc with the qemu-img command, we can effectively and safely reduce the size of the .qcow2 file.

Here's the modified version of my Dockerfile:

FROM gliderlabs/alpine:3.2

ENV DOCKER_VERSION 1.12.3

# We get curl so that we can avoid a separate ADD to fetch the Docker binary, and then we'll remove it
RUN apk --update add bash curl \
 && cd /tmp/ \
 && curl -sSL -O https://get.docker.com/builds/Linux/x86_64/docker-${DOCKER_VERSION}.tgz \
 && tar zxf docker-${DOCKER_VERSION}.tgz \
 && mkdir -p /usr/local/bin/ \
 && mv ./docker /usr/local/bin/ \
 && chmod +x /usr/local/bin/docker \
 && apk del curl \
 && rm -rf /tmp/* /var/cache/apk/*

COPY ./docker-gc /docker-gc

VOLUME /var/lib/docker-gc

CMD ["/docker-gc"]



Here's the modified version of the docker-gc script:
#!/bin/bash

# Copyright (c) 2014 Spotify AB.
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

# This script attempts to garbage collect docker containers and images.
# Containers that exited more than an hour ago are removed.
# Images that have existed more than an hour and are not in use by any
# containers are removed.

# Note: Although docker normally prevents removal of images that are in use by
#       containers, we take extra care to not remove any image tags (e.g.
#       ubuntu:14.04, busybox, etc) that are used by containers. A naive
#       "docker rmi `docker images -q`" will leave images stripped of all tags,
#       forcing users to re-pull the repositories even though the images
#       themselves are still on disk.

# Note: State is stored in $STATE_DIR, defaulting to /var/lib/docker-gc

# The script can send log messages to syslog regarding which images and
# containers were removed. To enable logging to syslog, set LOG_TO_SYSLOG=1.
# When disabled, this script will instead log to standard out. When syslog is
# enabled, the syslog facility and logger can be configured with
# $SYSLOG_FACILITY and $SYSLOG_LEVEL respectively.

set -o nounset
set -o errexit

GRACE_PERIOD_SECONDS=${GRACE_PERIOD_SECONDS:=3600}
STATE_DIR=${STATE_DIR:=/var/lib/docker-gc}
FORCE_CONTAINER_REMOVAL=${FORCE_CONTAINER_REMOVAL:=0}
FORCE_IMAGE_REMOVAL=${FORCE_IMAGE_REMOVAL:=0}
#DOCKER=${DOCKER:=docker}
DOCKER='/usr/local/bin/docker/docker'
PID_DIR=${PID_DIR:=/var/run}
LOG_TO_SYSLOG=${LOG_TO_SYSLOG:=0}
SYSLOG_FACILITY=${SYSLOG_FACILITY:=user}
SYSLOG_LEVEL=${SYSLOG_LEVEL:=info}
SYSLOG_TAG=${SYSLOG_TAG:=docker-gc}
DRY_RUN=${DRY_RUN:=0}
EXCLUDE_DEAD=${EXCLUDE_DEAD:=0}

for pid in $(pidof -s docker-gc); do
    if [[ $pid != $$ ]]; then
        echo "[$(date)] : docker-gc : Process is already running with PID $pid"
        exit 1
    fi
done

trap "rm -f -- '$PID_DIR/dockergc'" EXIT

echo $$ > $PID_DIR/dockergc


EXCLUDE_FROM_GC=${EXCLUDE_FROM_GC:=/etc/docker-gc-exclude}
if [ ! -f "$EXCLUDE_FROM_GC" ]
then
  EXCLUDE_FROM_GC=/dev/null
fi

EXCLUDE_CONTAINERS_FROM_GC=${EXCLUDE_CONTAINERS_FROM_GC:=/etc/docker-gc-exclude-containers}
if [ ! -f "$EXCLUDE_CONTAINERS_FROM_GC" ]
then
  EXCLUDE_CONTAINERS_FROM_GC=/dev/null
fi

EXCLUDE_IDS_FILE="exclude_ids"
EXCLUDE_CONTAINER_IDS_FILE="exclude_container_ids"

function date_parse {
  if date --utc >/dev/null 2>&1; then
    # GNU/date
    echo $(date -u --date "${1}" "+%s")
  else
    # BSD/date
    echo $(date -j -u -f "%F %T" "${1}" "+%s")
  fi
}

# Elapsed time since a docker timestamp, in seconds
function elapsed_time() {
    # Docker 1.5.0 datetime format is 2015-07-03T02:39:00.390284991
    # Docker 1.7.0 datetime format is 2015-07-03 02:39:00.390284991 +0000 UTC
    utcnow=$(date -u "+%s")
    replace_q="${1#\"}"
    without_ms="${replace_q:0:19}"
    replace_t="${without_ms/T/ }"
    epoch=$(date_parse "${replace_t}")
    echo $(($utcnow - $epoch))
}

function compute_exclude_ids() {
    # Find images that match patterns in the EXCLUDE_FROM_GC file and put their
    # id prefixes into $EXCLUDE_IDS_FILE, prefixed with ^

    PROCESSED_EXCLUDES="processed_excludes.tmp"
    # Take each line and put a space at the beginning and end, so when we
    # grep for them below, it will effectively be: "match either repo:tag
    # or imageid".  Also delete blank lines or lines that only contain
    # whitespace
    sed 's/^\(.*\)$/ \1 /' $EXCLUDE_FROM_GC | sed '/^ *$/d' > $PROCESSED_EXCLUDES
    # The following looks a bit of a mess, but here's what it does:
    # 1. Get images
    # 2. Skip header line
    # 3. Turn columnar display of 'REPO TAG IMAGEID ....' to 'REPO:TAG IMAGEID'
    # 4. find lines that contain things mentioned in PROCESSED_EXCLUDES
    # 5. Grab the image id from the line
    # 6. Prepend ^ to the beginning of each line

    # What this does is make grep patterns to match image ids mentioned by
    # either repo:tag or image id for later greppage
    $DOCKER images \
        | tail -n+2 \
        | sed 's/^\([^ ]*\) *\([^ ]*\) *\([^ ]*\).*/ \1:\2 \3 /' \
        | grep -f $PROCESSED_EXCLUDES 2>/dev/null \
        | cut -d' ' -f3 \
        | sed 's/^/^(sha256:)?/' > $EXCLUDE_IDS_FILE
}

function compute_exclude_container_ids() {
    # Find containers matching to patterns listed in EXCLUDE_CONTAINERS_FROM_GC file
    # Implode their values with a \| separator on a single line
    PROCESSED_EXCLUDES=`cat $EXCLUDE_CONTAINERS_FROM_GC \
        | xargs \
        | sed -e 's/ /\|/g'`
    # The empty string would match everything
    if [ "$PROCESSED_EXCLUDES" = "" ]; then
        touch $EXCLUDE_CONTAINER_IDS_FILE
        return
    fi
    # Find all docker images
    # Filter out with matching names
    # and put them to $EXCLUDE_CONTAINER_IDS_FILE
    $DOCKER ps -a \
        | grep -E "$PROCESSED_EXCLUDES" \
        | awk '{ print $1 }' \
        | tr -s " " "\012" \
        | sort -u > $EXCLUDE_CONTAINER_IDS_FILE
}

function log() {
    msg=$1
    if [[ $LOG_TO_SYSLOG -gt 0 ]]; then
        logger -i -t "$SYSLOG_TAG" -p "$SYSLOG_FACILITY.$SYSLOG_LEVEL" "$msg"
    else
        echo "[$(date +'%Y-%m-%dT%H:%M:%S')] [INFO] : $msg"
    fi
}

function container_log() {
    prefix=$1
    filename=$2

    while IFS='' read -r containerid
    do
        log "$prefix $containerid $(${DOCKER} inspect -f {{.Name}} $containerid)"
    done < "$filename"
}

function image_log() {
    prefix=$1
    filename=$2

    while IFS='' read -r imageid
    do
        log "$prefix $imageid $(${DOCKER} inspect -f {{.RepoTags}} $imageid)"
    done < "$filename"
}

# Change into the state directory (and create it if it doesn't exist)
if [ ! -d "$STATE_DIR" ]
then
  mkdir -p $STATE_DIR
fi
cd "$STATE_DIR"

# Verify that docker is reachable
$DOCKER version 1>/dev/null

# List all currently existing containers
$DOCKER ps -a -q --no-trunc | sort | uniq > containers.all

# List running containers
$DOCKER ps -q --no-trunc | sort | uniq > containers.running
container_log "Container running" containers.running

# compute ids of container images to exclude from GC
compute_exclude_ids

# compute ids of containers to exclude from GC
compute_exclude_container_ids

# List containers that are not running
comm -23 containers.all containers.running > containers.exited

if [[ $EXCLUDE_DEAD -gt 0 ]]; then
    echo "Excluding dead containers"
    # List dead containers
    $DOCKER ps -q -a -f status=dead | sort | uniq > containers.dead    
    comm -23 containers.exited containers.dead > containers.exited.tmp
    cat containers.exited.tmp > containers.exited
fi

container_log "Container not running" containers.exited

# Find exited containers that finished at least GRACE_PERIOD_SECONDS ago
> containers.reap.tmp
cat containers.exited | while read line
do
    EXITED=$(${DOCKER} inspect -f "{{json .State.FinishedAt}}" ${line})
    ELAPSED=$(elapsed_time $EXITED)
    if [[ $ELAPSED -gt $GRACE_PERIOD_SECONDS ]]; then
        echo $line >> containers.reap.tmp
    fi
done

# List containers that we will remove and exclude ids.
cat containers.reap.tmp | sort | uniq | grep -v -f $EXCLUDE_CONTAINER_IDS_FILE > containers.reap || true

# List containers that we will keep.
comm -23 containers.all containers.reap > containers.keep

# List images used by containers that we keep.
cat containers.keep |
xargs -n 1 $DOCKER inspect -f '{{.Image}}' 2>/dev/null |
sort | uniq > images.used

# List images to reap; images that existed last run and are not in use.
$DOCKER images -q --no-trunc | sort | uniq > images.all

# Find images that are created at least GRACE_PERIOD_SECONDS ago
> images.reap.tmp
cat images.all | while read line
do
    CREATED=$(${DOCKER} inspect -f "{{.Created}}" ${line})
    ELAPSED=$(elapsed_time $CREATED)
    if [[ $ELAPSED -gt $GRACE_PERIOD_SECONDS ]]; then
        echo $line >> images.reap.tmp
    fi
done
comm -23 images.reap.tmp images.used | grep -E -v -f $EXCLUDE_IDS_FILE > images.reap || true

# Use -f flag on docker rm command; forces removal of images that are in Dead
# status or give errors when removing.
FORCE_CONTAINER_FLAG=""
if [[ $FORCE_CONTAINER_REMOVAL -gt 0 ]]; then
    FORCE_CONTAINER_FLAG="-f"
fi
# Reap containers.
if [[ $DRY_RUN -gt 0 ]]; then
    container_log "The following container would have been removed" containers.reap
else
    container_log "Removing containers" containers.reap
    xargs -n 1 $DOCKER rm $FORCE_CONTAINER_FLAG --volumes=true < containers.reap &>/dev/null || true
fi

# Use -f flag on docker rmi command; forces removal of images that have multiple tags
FORCE_IMAGE_FLAG=""
if [[ $FORCE_IMAGE_REMOVAL -gt 0 ]]; then
    FORCE_IMAGE_FLAG="-f"
fi

# Reap images.
if [[ $DRY_RUN -gt 0 ]]; then
    image_log "The following image would have been removed" images.reap
else
    image_log "Removing image" images.reap
    xargs -n 1 $DOCKER rmi $FORCE_IMAGE_FLAG < images.reap &>/dev/null || true
fi
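The portability-sensitive part of the script above is parsing Docker's two timestamp formats across GNU and BSD date. A quick sanity check of that logic (GNU date assumed here, as on Linux; the timestamp and its epoch value come from the format example in the script's own comments):

```shell
# GNU-date-only re-implementation of the script's date_parse helper,
# checked against the Docker 1.7.0-style timestamp shown in the comments.
parse_utc() { date -u --date "$1" "+%s"; }

epoch=$(parse_utc "2015-07-03 02:39:00")
echo "$epoch"        # 1435891140, i.e. 2015-07-03T02:39:00Z

# elapsed_time is then just the difference from "now"
now=$(date -u "+%s")
echo $(( now - epoch ))
```

On BSD/macOS, the script's `date -j -u -f "%F %T"` branch produces the same epoch for the same input, which is why it can run inside the Alpine container and on a Mac alike.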



Thursday, November 17, 2016

A fix to broken SSH key authenticated login after Mac Sierra Upgrade

For Mac OS, I feel that every major upgrade comes with some aftermath which may cause short-term migraines. The first thing to do could be looking up possible solutions from web searches, in the hope that someone has figured out how to fix those post-upgrade problems. It reminds me that it would be safer to wait a few months after every major OS upgrade has been released. The same holds true for any other OS release.

Problem encountered:
After the upgrade to macOS Sierra, I was unable to log in to my Linux box from my MacBook via SSH, which was supposed to use key-based authentication without typing a password.

Instead, I was asked for the passphrase for my key file, e.g. ~/.ssh/id_rsa. First of all, I found I had forgotten my passphrase; I had not typed it for a while, since I set up SSH key authentication on my MacBook for convenience.

The solution:
Someone suggests regenerating a new key on the local machine to resolve this; first thing first, you would need to re-enable password authentication on the SSH server.

Another Mac user pointed out that the problem could originate from the ssh-agent on macOS Sierra, which ships OpenSSH 7.2 as of writing. A possible situation is that ssh-agent no longer automatically loads the passphrases stored in the keychain during startup.

To verify this, try the command:
$ ssh-add -l
The agent has no identities.

Clearly, there is no identity information stored in ssh-agent.

Let's store the passphrase in your keychain again:
$ ssh-add -K <keyfile>

where <keyfile> could be a path like ~/.ssh/id_rsa, or whatever suits you.

It will prompt for the passphrase and then save it to the keychain. You might need to remind yourself of the passphrase for that particular key file; if you have saved it in Keychain Access before, you can retrieve it under Keychains: login -> Category: Passwords in the Keychain Access app.


You should be able to log in again in the good old way of SSH key authentication, but it may not survive the next reboot. An Apple engineer states that this is expected: Sierra simply re-aligned its behaviour with mainstream OpenSSH in this area. In other words, since macOS Sierra, the stored passphrase for an SSH key WILL NOT survive the next reboot.

You need to run the following command in Terminal each time you log back in to macOS Sierra (-A loads all identities whose passphrases are stored in the keychain, so no key file is given):
$ ssh-add -A


It is an immediate solution, but it doesn't last long enough.

Taking one step further, you can add a bash script to run the SSH command with that particular identity file on your laptop:
#!/bin/bash
echo "Adding identities to SSH agent..."
ssh-add -A 2>/dev/null
echo "Logging in remote SSH server with specific identity file and port number..."
ssh -i <keyfile> -p <port> username@<remote_ssh_server_name_or_ip>
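As an aside, macOS 10.12.2 added ssh_config support for the options below, which make the keychain behaviour persistent without any wrapper script. Assuming your OpenSSH build supports them, appending this stanza to ~/.ssh/config should be enough; the sketch writes to a scratch file rather than your real config:

```shell
# Sketch: the keychain-related ssh_config stanza, written to a scratch
# file here instead of the real ~/.ssh/config.
CONF=$(mktemp)
cat >> "$CONF" <<'EOF'
Host *
    AddKeysToAgent yes
    UseKeychain yes
    IdentityFile ~/.ssh/id_rsa
EOF
cat "$CONF"
```

Note that UseKeychain is an Apple-specific option; a stock OpenSSH will reject it unless you guard it with IgnoreUnknown.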


Ultimately, you can first log in to your SSH box, re-enable password authentication on the SSH server, regenerate a new RSA key on your laptop and then upload it to the SSH box as a permanent change. Key authentication will then work with the newly generated identity file on macOS Sierra.

For details, check these out:
http://askubuntu.com/questions/46930/how-can-i-set-up-password-less-ssh-login
http://manpages.ubuntu.com/manpages/trusty/man1/ssh-copy-id.1.html
https://openradar.appspot.com/27348363








Wednesday, October 26, 2016

pyenv + pyenv-virtualenv + OpenCV 3 on Mac OS X El Capitan

Python is a good tool for quick project startup while it's still a little early to dig into the details of C++ or Objective C code.

The trouble is that the built-in Python version may not meet our needs. As you know, it starts to get messy when a couple of different versions of Python interpreters have been installed on the same OS. A few weeks later, you may find that things are not compiling properly due to all sorts of PYTHONPATH issues.

So, here comes the management tool - pyenv

Simply speaking, pyenv lets you easily switch between multiple versions of Python. On OS X, pyenv installs multiple versions of Python into its own repository location while the built-in version of Python stays intact with the OS. This minimises conflicts between your project and other existing applications which use the Mac's built-in Python for compiling work.

pyenv also comes with a plugin to deal with Python's virtual environment - pyenv-virtualenv 

pyenv-virtualenv provides features to manage virtualenvs and conda environments for Python.

With both of these, you can pin a particular version of Python per application: the virtual environment named in a project folder's .python-version file is activated automatically for that folder.

To set them up, please read through the README page of each project on GitHub.

To keep things simple, it is suggested that we use Homebrew to install the above packages.

In the file ~/.bash_profile, typical setup is recommended as follows:

#
#
# Multi-Python switcher Pyenv
# To use Homebrew's directories rather than ~/.pyenv add to your profile:
export PYENV_ROOT=/usr/local/var/pyenv
export PATH="$PYENV_ROOT/bin:$PATH"
# Enable shims and autocompletion
if which pyenv > /dev/null; then eval "$(pyenv init -)"; fi

# A pyenv plugin to manage virtualenv (a.k.a. python-virtualenv) ref: https://github.com/yyuu/pyenv-virtualenv
# Automatically activate/deactivate virtualenvs on entering/leaving directories 
# which contain a .python-version file that lists a valid virtual environment
eval "$(pyenv virtualenv-init -)"
export PYENV_VIRTUALENV_DISABLE_PROMPT=1

# pip should only run if there is a virtualenv currently activated
export PIP_REQUIRE_VIRTUALENV=true
#
#
#
#


OpenCV is an excellent library for image/video processing. To avoid errors during the installation, it's recommended to install it via Homebrew:


$
# First time installation
$ brew install opencv3 --HEAD --with-python3 --with-ffmpeg --with-tbb --with-contrib --with-opengl --with-qt5
# Upgrade alternative
$ brew reinstall opencv3 --HEAD --with-python3 --with-ffmpeg --with-tbb --with-contrib --with-opengl --with-qt5
$

All these will be done at the global system level.

Now, use the 'pyenv virtualenv' command to set up a virtual environment for your project:
# Get into empty target project folder
$ mkdir ~/target_project
$ cd ~/target_project
$
# List available Python versions
$ pyenv versions
  system
  2.7.10
  2.7.12
  3.5.2
# Create new virtual environment with specific version
# Also name it 'my-env-3.5.2' or any other name
$ pyenv virtualenv 3.5.2 my-env-3.5.2
# Now activate new virtual environment within the target project folder
$ pyenv activate my-env-3.5.2
$

Assuming OpenCV 3 and Python 3.5.2 will be used within the new virtual environment, it's time to link the OpenCV library folder into the virtual environment's site-packages folder via a .pth file, so the module can be imported:
# Assuming in ~/target_project folder
$ echo `brew --prefix opencv3`/lib/python3.5/site-packages >> $PYENV_ROOT/versions/my-env-3.5.2/lib/python3.5/site-packages/opencv3.pth
$
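To see why the .pth trick works: at startup, Python appends any directory listed in a .pth file under site-packages to sys.path. A self-contained sketch, with scratch directories standing in for the real opencv3 and virtualenv paths:

```shell
# Scratch stand-ins for the opencv3 lib dir and the env's site-packages.
EXTRA=$(mktemp -d)
SITEDIR=$(mktemp -d)
echo "$EXTRA" > "$SITEDIR/opencv3.pth"

# site.addsitedir processes .pth files the same way interpreter startup
# does for real site-packages directories.
python3 - "$SITEDIR" "$EXTRA" <<'EOF'
import site, sys
site.addsitedir(sys.argv[1])
print(sys.argv[2] in sys.path)   # True - the .pth entry joined sys.path
EOF
```

In the real setup, the entry written by the echo command above is the Homebrew opencv3 path, so `import cv2` resolves inside the virtualenv.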



For the Python virtual environment, we can set up the necessary packages with pip inside the target project folder:

# Assuming in ~/target_project folder
$ 
$ pip install numpy
$

To test whether OpenCV 3 is supported within the virtual environment, let's open up Python console and import cv2 library:

# Assuming in ~/target_project folder
$ python
Python 3.5.2 (default, Jul 19 2016, 15:25:16) 
[GCC 4.2.1 Compatible Apple LLVM 7.3.0 (clang-703.0.31)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>import cv2
>>>

If no error is generated, OpenCV 3 library support is in place for the current Python environment.




Friday, August 5, 2016

Add Google Calendar to Lightning in Thunderbird

One day, I opened up the Thunderbird app after an auto-upgrade and was surprised to find that Google Calendar was no longer linked in the Lightning interface. This happens from time to time; I supposed Mozilla would have fixed it, but that doesn't seem to be the case this time.

Searching around the forums, I came across several solutions, but none of them worked. Whether the problem lies in the add-on itself or Google has changed the API again, I find it hard to remember exactly how to resolve this each time. So this time I'm taking a note of what needs to be done, in case the same thing happens again after an upcoming upgrade.

Just a reminder as for what has happened back in September 16th 2013:

Google is changing the Location URL of their CalDAV Calendars

Google has decided to change the authentication mechanism for their CalDAV calendars to OAuth 2.0, which required some changes in Lightning to accommodate.

Due to these changes, the URL to access the calendar has also changed. The old endpoint will stop working after September 16th (today!). This affects only Google calendars using CalDAV protocol.

According to the Mozilla's blog, iCal users with read-only access are not affected.

With the iCal approach in Lightning, you may receive an error message like MODIFICATION_FAILED when adding a new Google Calendar event within the Lightning calendar interface, i.e., it becomes write-protected.

What if we really want to view and EDIT our Google Calendar in Lightning? Let's try another protocol: CalDAV, which provides read and write access to a calendar instance.

You may want to choose this way especially if you cannot make the Google Calendar option work under the Provider for Google Calendar add-on. I found that option not as reliable as CalDAV; it might stop working from time to time.

Steps as follows:


  • Download Thunderbird and install Lightning add-on
  • Open the new calendar dialog (File → New → Calendar)
  • Add a new remote calendar (On the Network → CalDAV)
  • As a location enter the following:


https://apidata.googleusercontent.com/caldav/v2/*calendar-id/events


*calendar-id is supposed to be your email address or any other id you have set to your Google Calendar

To enable this, you need to enter your Google account login details (supposedly once), along with a 6-digit two-factor authentication code if necessary (depending on whether you have enabled two-factor authentication on your Google account).

This will enable two way (read, write) communication with your Google Calendar instantly.

For Apple iCal users, they might need to use the following URL:

https://apidata.googleusercontent.com/caldav/v2/*calendar-id/user









Tuesday, August 2, 2016

Completely UNINSTALL corrupted instance of Visual Studio 2010

Visual Studio 2010 is a comprehensive programming suite for app development on the Windows platform. It can be good or evil when it comes to such an elephantine build, with additional packages and tools installed all at one time. It takes a lot of space and requires attention during uninstallation: any error generated during the uninstallation process can break the whole thing, and you may have to repair it via the installation wizard before you can actually uninstall it.



If you have landed on this article, you might actually have left the installation in a broken state and desperately searched the Internet for a resolution. The thing is, you just cannot uninstall VS2010 the normal way. Microsoft provides a better way of uninstallation in VS2012 or newer, while we're still stuck with the 6-year-old VS build.

Microsoft has released a tool to fix all kinds of registry errors or blocking issues during the uninstallation of any program, i.e., any program which appears in the uninstall list.

URL:
https://support.microsoft.com/en-au/help/17588/fix-problems-that-block-programs-from-being-installed-or-removed

It's a troubleshooting tool which guides you to the right option, fixes those strange errors for you and uninstalls the specified program. I tried this tool and ultimately removed my broken VS build on a Windows 2008 server. It claims to support Windows versions from 7 to 10.

You may need a further cleanup by uninstalling the dependent components of Visual Studio via Control Panel | Programs and Features.

Finally, it frees up a huge amount of disk space on the server.





Tuesday, July 26, 2016

REST API for Pokemon

A new global trend has just started with the mobile app Pokemon GO: searching for your favourite monsters with augmented-reality features, and encouraging app users around the world to walk more for better health.

By looking at the maps of activities, I was stunned but also excited. This seems to be the next wave in the Big Data world. How can we miss it?

Pokeapi - the RESTful Pokémon API v2 Beta - has been released. For whatever reason, requests are limited to 300 per resource per IP address.

Check and see what's the next killer app you can create from this.

URL:
https://pokeapi.co





Friday, July 22, 2016

Eclipse IDE autocomplete for JavaScript and PHP

To enable autocomplete feature for language like JavaScript and PHP in your Eclipse project:

Locate and open the .project file under the parent folder of your project within the Eclipse IDE.

Add the two nature entries below inside the <natures> element:

  <nature>org.eclipse.wst.jsdt.core.jsNature</nature>
  <nature>org.eclipse.php.core.PHPNature</nature>
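For context, here's roughly what a minimal .project file looks like with both natures in place (the project name is a placeholder; keep your existing buildSpec entries as they are):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<projectDescription>
    <name>my-project</name>
    <comment></comment>
    <projects></projects>
    <buildSpec></buildSpec>
    <natures>
        <nature>org.eclipse.wst.jsdt.core.jsNature</nature>
        <nature>org.eclipse.php.core.PHPNature</nature>
    </natures>
</projectDescription>
```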

Restart the Eclipse IDE and you should see autocomplete suggestions while typing code in *.js or *.php files.

Wednesday, July 6, 2016

Upgrade ESXi from 5.5 Update 3 to 6.0 Update 2

The good thing in ESXi 5.5 is that the upgrade process can be done via an SSH terminal. This eliminates the CD-ROM burning process and carries out the entire upgrade remotely from my laptop. As mentioned before, version upgrades are not always a straightforward process in an ESXi infrastructure.

I was looking for the HP custom version of the offline bundle (namely "HPE Custom Image for VMware ESXi 6.0 U2 Offline Bundle"), which supposedly gathers all the drivers needed for installing ESXi onto HP machines. For other supported hardware, you may need to download a different custom version or the official offline bundle.

URL:
https://my.vmware.com/group/vmware/details?downloadGroup=OEM-ESXI60U2-HPE&productId=491

After uploading the zip file onto the ESXi box's datastore via the vSphere client, I ran the following command and got this error message on the ESXi console:

~ #
~ # esxcli software vib update -d /vmfs/volumes/datastore1/VMware-ESXi-6.0.0-2494585-depot.zip
[DependencyError]
 VIB VMware_bootbank_esx-base_6.0.0-2.34.3620759 requires vsan >= 6.0.0-2.34, but the requirement cannot be satisfied within the ImageProfile.
 VIB VMware_bootbank_esx-base_6.0.0-2.34.3620759 requires vsan << 6.0.0-2.35, but the requirement cannot be satisfied within the ImageProfile.
 Please refer to the log file for more details.
~ #
~ #

ESXi 5.5 doesn't quite like the new VIB module named vsan; in other words, vsan doesn't exist in the current image profile. That's why the vib update path failed. So, let's go through the upgrade with the profile update option instead.

To list the current profile name, use the following command:

~ # esxcli software sources profile list -d /vmfs/volumes/datastore1/VMware-ESXi-6.0.0-2494585-depot.zip
Name                                 Vendor                      Acceptance Level
-----------------------------------  --------------------------  ----------------
HPE-ESXi-6.0.0-Update2-600.9.5.0.48  Hewlett Packard Enterprise  PartnerSupported

The result differs from box to box, so take note of the actual image profile name on your own machine. Here, the profile name is "HPE-ESXi-6.0.0-Update2-600.9.5.0.48".

Now, it's time to proceed with the upgrade:

~ #
~ # esxcli software profile update -p HPE-ESXi-6.0.0-Update2-600.9.5.0.48 -d /vmfs/volumes/datastore1/VMware-ESXi-6.0.0-2494585-depot.zip
.
.

And then I got the following:

Update Result
   Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
   Reboot Required: true
   VIBs Installed: BRCM_bootbank_net-tg3_3.137l.v60.1-1OEM.600.0.0.2494585, EMU_bootbank_elxnet_10.7.110.13-1OEM.600.0.0.2768847, EMU_bootbank_ima-be2iscsi_10.7.110.10-1OEM.600.0.0.2159203, EMU_bootbank_lpfc_10.7.110.4-1OEM.600.0.0.2768847, EMU_bootbank_scsi-be2iscsi_10.7.110.10-1OEM.600.0.0.2159203, HPE_bootbank_amsHelper_600.10.4.0-22.2494585, HPE_bootbank_conrep_6.0.0.01-01.00.7.2494585, HPE_bootbank_hpbootcfg_6.0.0.02-02.00.6.2494585, HPE_bootbank_hpe-build_600.9.5.0.48-2494585, HPE_bootbank_hpe-esxi-fc-enablement_600.2.5.20-2494585, HPE_bootbank_hpe-ilo_600.10.0.0.26-1OEM.600.0.0.2494585, HPE_bootbank_hpe-smx-provider_600.03.10.00.13-2768847, HPE_bootbank_hponcfg_6.0.0.04-00.14.4.2494585, HPE_bootbank_hpssacli_2.40.13.0-6.0.0.1854445, HPE_bootbank_hptestevent_6.0.0.01-01.00.5.2494585, Hewlett-Packard_bootbank_char-hpcru_6.0.6.14-1OEM.600.0.0.2159203, Hewlett-Packard_bootbank_hpnmi_600.2.3.14-2159203, Hewlett-Packard_bootbank_scsi-hpdsa_5.5.0.48-1OEM.550.0.0.1331820, Hewlett-Packard_bootbank_scsi-hpsa_6.0.0.116-1OEM.600.0.0.2494585, Intel_bootbank_intelcim-provider_0.5-1.6, Intel_bootbank_net-i40e_1.3.45-1OEM.550.0.0.1331820, Intel_bootbank_net-igb_5.3.1-1OEM.550.0.0.1331820, Intel_bootbank_net-ixgbe_4.1.1.1-1OEM.550.0.0.1331820, MEL_bootbank_nmlx4-core_3.1.0.0-1OEM.600.0.0.2348722, MEL_bootbank_nmlx4-en_3.1.0.0-1OEM.600.0.0.2348722, MEL_bootbank_nmst_4.0.2.1-1OEM.600.0.0.2295424, QLogic_bootbank_misc-cnic-register_1.712.70.v60.1-1OEM.600.0.0.2494585, QLogic_bootbank_net-bnx2_2.2.5k.v60.1-1OEM.600.0.0.2494585, QLogic_bootbank_net-bnx2x_2.712.70.v60.3-1OEM.600.0.0.2494585, QLogic_bootbank_net-cnic_2.712.70.v60.3-1OEM.600.0.0.2494585, QLogic_bootbank_net-nx-nic_6.0.643-1OEM.600.0.0.2494585, QLogic_bootbank_net-qlcnic_6.1.191-1OEM.600.0.0.2494585, QLogic_bootbank_qlnativefc_2.1.30.0-1OEM.600.0.0.2768847, QLogic_bootbank_scsi-bnx2fc_1.712.70.v60.5-1OEM.600.0.0.2494585, QLogic_bootbank_scsi-bnx2i_2.712.70.v60.2-1OEM.600.0.0.2494585, 
VMWARE_bootbank_mtip32xx-native_3.8.5-1vmw.600.0.0.2494585, VMware_bootbank_ata-pata-amd_0.3.10-3vmw.600.0.0.2494585, VMware_bootbank_ata-pata-atiixp_0.4.6-4vmw.600.0.0.2494585, VMware_bootbank_ata-pata-cmd64x_0.2.5-3vmw.600.0.0.2494585, VMware_bootbank_ata-pata-hpt3x2n_0.3.4-3vmw.600.0.0.2494585, VMware_bootbank_ata-pata-pdc2027x_1.0-3vmw.600.0.0.2494585, VMware_bootbank_ata-pata-serverworks_0.4.3-3vmw.600.0.0.2494585, VMware_bootbank_ata-pata-sil680_0.4.8-3vmw.600.0.0.2494585, VMware_bootbank_ata-pata-via_0.3.3-2vmw.600.0.0.2494585, VMware_bootbank_block-cciss_3.6.14-10vmw.600.0.0.2494585, VMware_bootbank_cpu-microcode_6.0.0-0.0.2494585, VMware_bootbank_ehci-ehci-hcd_1.0-3vmw.600.2.34.3620759, VMware_bootbank_emulex-esx-elxnetcli_10.2.309.6v-0.0.2494585, VMware_bootbank_esx-base_6.0.0-2.34.3620759, VMware_bootbank_esx-dvfilter-generic-fastpath_6.0.0-0.0.2494585, VMware_bootbank_esx-tboot_6.0.0-2.34.3620759, VMware_bootbank_esx-ui_1.0.0-3617585, VMware_bootbank_esx-xserver_6.0.0-0.0.2494585, VMware_bootbank_ipmi-ipmi-devintf_39.1-4vmw.600.0.0.2494585, VMware_bootbank_ipmi-ipmi-msghandler_39.1-4vmw.600.0.0.2494585, VMware_bootbank_ipmi-ipmi-si-drv_39.1-4vmw.600.0.0.2494585, VMware_bootbank_lsi-mr3_6.605.08.00-7vmw.600.1.17.3029758, VMware_bootbank_lsi-msgpt3_06.255.12.00-8vmw.600.1.17.3029758, VMware_bootbank_lsu-hp-hpsa-plugin_1.0.0-1vmw.600.0.0.2494585, VMware_bootbank_lsu-lsi-lsi-mr3-plugin_1.0.0-2vmw.600.0.11.2809209, VMware_bootbank_lsu-lsi-lsi-msgpt3-plugin_1.0.0-1vmw.600.0.0.2494585, VMware_bootbank_lsu-lsi-megaraid-sas-plugin_1.0.0-2vmw.600.0.11.2809209, VMware_bootbank_lsu-lsi-mpt2sas-plugin_1.0.0-4vmw.600.1.17.3029758, VMware_bootbank_lsu-lsi-mptsas-plugin_1.0.0-1vmw.600.0.0.2494585, VMware_bootbank_misc-drivers_6.0.0-2.34.3620759, VMware_bootbank_net-e1000_8.0.3.1-5vmw.600.0.0.2494585, VMware_bootbank_net-enic_2.1.2.38-2vmw.600.0.0.2494585, VMware_bootbank_net-forcedeth_0.61-2vmw.600.0.0.2494585, VMware_bootbank_net-vmxnet3_1.1.3.0-3vmw.600.2.34.3620759, 
VMware_bootbank_nmlx4-rdma_3.0.0.0-1vmw.600.0.0.2494585, VMware_bootbank_nvme_1.2.0.27-4vmw.550.0.0.1331820, VMware_bootbank_ohci-usb-ohci_1.0-3vmw.600.0.0.2494585, VMware_bootbank_rste_2.0.2.0088-4vmw.600.0.0.2494585, VMware_bootbank_sata-ahci_3.0-22vmw.600.2.34.3620759, VMware_bootbank_sata-ata-piix_2.12-10vmw.600.0.0.2494585, VMware_bootbank_sata-sata-nv_3.5-4vmw.600.0.0.2494585, VMware_bootbank_sata-sata-promise_2.12-3vmw.600.0.0.2494585, VMware_bootbank_sata-sata-sil24_1.1-1vmw.600.0.0.2494585, VMware_bootbank_sata-sata-sil_2.3-4vmw.600.0.0.2494585, VMware_bootbank_sata-sata-svw_2.3-3vmw.600.0.0.2494585, VMware_bootbank_scsi-aacraid_1.1.5.1-9vmw.600.0.0.2494585, VMware_bootbank_scsi-adp94xx_1.0.8.12-6vmw.600.0.0.2494585, VMware_bootbank_scsi-aic79xx_3.1-5vmw.600.0.0.2494585, VMware_bootbank_scsi-fnic_1.5.0.45-3vmw.600.0.0.2494585, VMware_bootbank_scsi-ips_7.12.05-4vmw.600.0.0.2494585, VMware_bootbank_scsi-megaraid-mbox_2.20.5.1-6vmw.600.0.0.2494585, VMware_bootbank_scsi-megaraid-sas_6.603.55.00-2vmw.600.0.0.2494585, VMware_bootbank_scsi-megaraid2_2.00.4-9vmw.600.0.0.2494585, VMware_bootbank_scsi-mptsas_4.23.01.00-9vmw.600.0.0.2494585, VMware_bootbank_scsi-mptspi_4.23.01.00-9vmw.600.0.0.2494585, VMware_bootbank_uhci-usb-uhci_1.0-3vmw.600.0.0.2494585, VMware_bootbank_vsan_6.0.0-2.34.3563498, VMware_bootbank_vsanhealth_6.0.0-3000000.3.0.2.34.3544323, VMware_bootbank_xhci-xhci_1.0-3vmw.600.2.34.3620759, VMware_locker_tools-light_6.0.0-2.34.3620759
   VIBs Removed: Broadcom_bootbank_net-tg3_3.137l.v55.1-1OEM.550.0.0.1331820, Emulex_bootbank_elxnet_10.5.121.7-1OEM.550.0.0.1331820, Emulex_bootbank_ima-be2iscsi_10.5.65.7-1OEM.550.0.0.1331820, Emulex_bootbank_lpfc_10.5.39.0-1OEM.550.0.0.1331820, Emulex_bootbank_scsi-be2iscsi_10.5.65.7-1OEM.550.0.0.1331820, Hewlett-Packard_bootbank_char-hpcru_5.5.6.6-1OEM.550.0.0.1198610, Hewlett-Packard_bootbank_char-hpilo_550.9.0.2.3-1OEM.550.0.0.1198610, Hewlett-Packard_bootbank_hp-ams_550.10.3.0-15.1198610, Hewlett-Packard_bootbank_hp-build_550.9.4.26-1198610, Hewlett-Packard_bootbank_hp-conrep_5.5.0.1-0.0.8.1198610, Hewlett-Packard_bootbank_hp-esxi-fc-enablement_550.2.4.6-1198610, Hewlett-Packard_bootbank_hp-smx-provider_550.03.09.00.15-1198610, Hewlett-Packard_bootbank_hpbootcfg_5.5.0.02-01.00.5.1198610, Hewlett-Packard_bootbank_hpnmi_550.2.3.5-1198610, Hewlett-Packard_bootbank_hponcfg_5.5.0.4.4-0.3.1198610, Hewlett-Packard_bootbank_hpssacli_2.30.6.0-5.5.0.1198611, Hewlett-Packard_bootbank_hptestevent_5.5.0.01-00.01.4.1198610, Hewlett-Packard_bootbank_scsi-hpdsa_5.5.0.46-1OEM.550.0.0.1331820, Hewlett-Packard_bootbank_scsi-hpsa_5.5.0.114-1OEM.550.0.0.1331820, Intel_bootbank_intelcim-provider_0.5-1.4, Intel_bootbank_net-i40e_1.2.48-1OEM.550.0.0.1331820, Intel_bootbank_net-igb_5.2.10-1OEM.550.0.0.1331820, Intel_bootbank_net-ixgbe_3.21.4.3-1OEM.550.0.0.1331820, QLogic_bootbank_misc-cnic-register_1.712.50.v55.1-1OEM.550.0.0.1331820, QLogic_bootbank_net-bnx2_2.2.5j.v55.3-1OEM.550.0.0.1331820, QLogic_bootbank_net-bnx2x_2.712.50.v55.6-1OEM.550.0.0.1331820, QLogic_bootbank_net-cnic_2.712.50.v55.6-1OEM.550.0.0.1331820, QLogic_bootbank_net-nx-nic_5.5.643-1OEM.550.0.0.1331820, QLogic_bootbank_net-qlcnic_5.5.190-1OEM.550.0.0.1331820, QLogic_bootbank_qlnativefc_1.1.55.0-1OEM.550.0.0.1331820, QLogic_bootbank_scsi-bnx2fc_1.712.50.v55.7-1OEM.550.0.0.1331820, QLogic_bootbank_scsi-bnx2i_2.712.50.v55.4-1OEM.550.0.0.1331820, VMware_bootbank_ata-pata-amd_0.3.10-3vmw.550.0.0.1331820, 
VMware_bootbank_ata-pata-atiixp_0.4.6-4vmw.550.0.0.1331820, VMware_bootbank_ata-pata-cmd64x_0.2.5-3vmw.550.0.0.1331820, VMware_bootbank_ata-pata-hpt3x2n_0.3.4-3vmw.550.0.0.1331820, VMware_bootbank_ata-pata-pdc2027x_1.0-3vmw.550.0.0.1331820, VMware_bootbank_ata-pata-serverworks_0.4.3-3vmw.550.0.0.1331820, VMware_bootbank_ata-pata-sil680_0.4.8-3vmw.550.0.0.1331820, VMware_bootbank_ata-pata-via_0.3.3-2vmw.550.0.0.1331820, VMware_bootbank_block-cciss_3.6.14-10vmw.550.0.0.1331820, VMware_bootbank_ehci-ehci-hcd_1.0-3vmw.550.0.0.1331820, VMware_bootbank_esx-base_5.5.0-3.71.3116895, VMware_bootbank_esx-dvfilter-generic-fastpath_5.5.0-0.0.1331820, VMware_bootbank_esx-tboot_5.5.0-2.33.2068190, VMware_bootbank_esx-ui_0.0.2-0.1.3357452, VMware_bootbank_esx-xlibs_5.5.0-0.0.1331820, VMware_bootbank_esx-xserver_5.5.0-0.0.1331820, VMware_bootbank_ipmi-ipmi-devintf_39.1-4vmw.550.0.0.1331820, VMware_bootbank_ipmi-ipmi-msghandler_39.1-4vmw.550.0.0.1331820, VMware_bootbank_ipmi-ipmi-si-drv_39.1-4vmw.550.0.0.1331820, VMware_bootbank_lsi-mr3_0.255.03.01-2vmw.550.3.68.3029944, VMware_bootbank_lsi-msgpt3_00.255.03.03-1vmw.550.1.15.1623387, VMware_bootbank_misc-drivers_5.5.0-3.68.3029944, VMware_bootbank_mtip32xx-native_3.3.4-1vmw.550.1.15.1623387, VMware_bootbank_net-be2net_4.6.100.0v-1vmw.550.0.0.1331820, VMware_bootbank_net-e1000_8.0.3.1-3vmw.550.0.0.1331820, VMware_bootbank_net-enic_1.4.2.15a-1vmw.550.0.0.1331820, VMware_bootbank_net-forcedeth_0.61-2vmw.550.0.0.1331820, VMware_bootbank_net-vmxnet3_1.1.3.0-3vmw.550.2.39.2143827, VMware_bootbank_ohci-usb-ohci_1.0-3vmw.550.0.0.1331820, VMware_bootbank_rste_2.0.2.0088-4vmw.550.1.15.1623387, VMware_bootbank_sata-ahci_3.0-22vmw.550.3.68.3029944, VMware_bootbank_sata-ata-piix_2.12-10vmw.550.2.33.2068190, VMware_bootbank_sata-sata-nv_3.5-4vmw.550.0.0.1331820, VMware_bootbank_sata-sata-promise_2.12-3vmw.550.0.0.1331820, VMware_bootbank_sata-sata-sil24_1.1-1vmw.550.0.0.1331820, VMware_bootbank_sata-sata-sil_2.3-4vmw.550.0.0.1331820, 
VMware_bootbank_sata-sata-svw_2.3-3vmw.550.0.0.1331820, VMware_bootbank_scsi-aacraid_1.1.5.1-9vmw.550.0.0.1331820, VMware_bootbank_scsi-adp94xx_1.0.8.12-6vmw.550.0.0.1331820, VMware_bootbank_scsi-aic79xx_3.1-5vmw.550.0.0.1331820, VMware_bootbank_scsi-fnic_1.5.0.4-1vmw.550.0.0.1331820, VMware_bootbank_scsi-ips_7.12.05-4vmw.550.0.0.1331820, VMware_bootbank_scsi-megaraid-mbox_2.20.5.1-6vmw.550.0.0.1331820, VMware_bootbank_scsi-megaraid-sas_5.34-9vmw.550.3.68.3029944, VMware_bootbank_scsi-megaraid2_2.00.4-9vmw.550.0.0.1331820, VMware_bootbank_scsi-mptsas_4.23.01.00-9vmw.550.3.68.3029944, VMware_bootbank_scsi-mptspi_4.23.01.00-9vmw.550.3.68.3029944, VMware_bootbank_uhci-usb-uhci_1.0-3vmw.550.0.0.1331820, VMware_bootbank_xhci-xhci_1.0-2vmw.550.3.68.3029944, VMware_locker_tools-light_5.5.0-3.68.3029944
   VIBs Skipped: Avago_bootbank_scsi-mpt2sas_15.10.06.00-1OEM.550.0.0.1331820, Hewlett-Packard_bootbank_scsi-hpvsa_5.5.0.100-1OEM.550.0.0.1331820, QLogic_bootbank_scsi-bfa_3.2.5.0-1OEM.550.0.0.1331820, QLogic_bootbank_scsi-qla4xxx_644.6.05.0-1OEM.600.0.0.2494585, VMware_bootbank_ima-qla4xxx_2.02.18-1vmw.600.0.0.2494585, VMware_bootbank_net-e1000e_3.2.2.1-1vmw.600.1.26.3380124, VMware_bootbank_net-mlx4-core_1.9.7.0-1vmw.600.0.0.2494585, VMware_bootbank_net-mlx4-en_1.9.7.0-1vmw.600.0.0.2494585

That's a long report on how things were done, but the update was successful. You need to reboot the ESXi box to make the changes effective.

NB: You may need to use boot option like "noIOMMU" during the reboot process.

To enable noIOMMU option via SSH, try this command:
> esxcli system settings kernel set --setting=noIOMMU -v TRUE

Tuesday, July 5, 2016

Hanging on "Initialising IOV" message after ESXi 5.5 Upgrade

In my experience with ESXi infrastructure, upgrades rarely go smoothly in one pass.

Simply add a boot option at the boot-up screen: press Shift + O, enter the additional parameter "noIOMMU" at the end of the boot string, then press ENTER to proceed with the boot:

.
[Boot options...] noIOMMU
.


Once the ESXi box has started successfully, you can make the setting persistent by SSHing into the box and issuing the following command:

~ #
~ #
~ # esxcli system settings kernel set --setting=noIOMMU -v TRUE
~ #
~ #

This makes the noIOMMU option survive subsequent reboots ;-)



Access ATAPI DVD writer on VMware ESXi 4.x Windows Guest VM

To access the physical DVD writer from a Windows guest VM, there is no alternative but to set it up as a SCSI device.

IDE passthrough on the CD/DVD-ROM device doesn't seem promising; I have tried it on several different ESXi 4.x boxes.

Finally, I found a way through. Open the vSphere interface on a remote machine for monitoring and control, and keep a remote desktop session connected to the target Windows guest VM to check read/write access on the physical DVD drive.

Using the remote vSphere interface, it is easy to add a new SCSI device for the physical DVD drive. Shut down the Windows guest VM before adding the new SCSI device.

Once the SCSI DVD device is added to the guest VM's profile, start the VM again and check whether the device is detected.

Once you can log in to the Windows guest VM, try clicking the CD/DVD-ROM device icon in the remote vSphere interface to connect it to the host's physical device. You should see the raw device ID in the selection list. Once connected, the physical SCSI DVD drive should be ready for read and write operations on your Windows guest VM.


Wednesday, May 11, 2016

Practical way to tackle WiFi dropout and slow performance issue in OS X El Capitan

Every so often I need to upgrade Mac OS X. Chances are I completely forget the tweaking tips and just hope things will be resolved in the next version, WiFi tweaks being something I have to rediscover from forums each time. Writing the explanation down helps me remember what to do on every single OS X upgrade.

You can find all kinds of forum topics about the WiFi dropout issue on OS X, along with various suggested workarounds. In its network settings, Apple sticks to the standard Ethernet MTU of 1500 bytes. But why?

With WiFi connection, we can do a ping test like this:

$
$
$ ping -D -s 1500 google.com
PING google.com (203.5.76.246): 1500 data bytes
ping: sendto: Message too long
ping: sendto: Message too long

Note:
Option -D sets the 'Don't Fragment' bit, forcing the whole packet to be transmitted at once
Option -s 1500 specifies the ICMP payload size, i.e., 1500 bytes

The problem is that header overhead is added on top of the payload size for every transmission, pushing the total past the MTU. That is why we get 'Message too long' responses.

To rectify this problem, let's do the math:

Optimum MTU size = Packet size + Overhead

What about the overhead? The overhead would be 28 bytes because 20 bytes are reserved for the IP header and 8 bytes must be allocated for the ICMP Echo Request header.

So the maximum payload permitted in each network packet is actually:

Maximum payload size = MTU size - 28 bytes

With the theoretical MTU of 1500 bytes, the allowed payload would be

(1500 - 28) = 1472 bytes
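The arithmetic can be sketched in shell; this is just a minimal illustration of the same subtraction:

```shell
# Payload that fits in one unfragmented packet: MTU minus the
# 20-byte IP header and the 8-byte ICMP Echo Request header.
MTU=1500
IP_HEADER=20
ICMP_HEADER=8
PAYLOAD=$((MTU - IP_HEADER - ICMP_HEADER))
echo "$PAYLOAD"   # prints 1472
```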

But, is it practical? Let's test this.

To test this magic number, issue the following command in Terminal:

$
$
$ ping -D -s 1472 google.com
PING google.com (203.5.76.246): 1472 data bytes
1480 bytes from 203.5.76.246: icmp_seq=0 ttl=56 time=7.787 ms
1480 bytes from 203.5.76.246: icmp_seq=1 ttl=56 time=4.838 ms
1480 bytes from 203.5.76.246: icmp_seq=2 ttl=56 time=4.360 ms
1480 bytes from 203.5.76.246: icmp_seq=3 ttl=56 time=6.184 ms

Packets are now transmitted successfully with this 1472-byte payload. Therefore, we can be confident the optimum MTU is indeed the theoretical 1500 bytes, at least on this particular WiFi network.

With no fragmentation, each packet is transmitted as a whole rather than broken into parts, which can effectively improve network speed under good signal conditions.

Just remember that the optimum payload size can differ across locations and wireless routers. You may need to find the smallest optimum payload that transmits successfully on every network you connect to, whether at home or at the office. Try stepping the value down until transmission works on all of them.
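The step-down search can be scripted. This is only a sketch: the probe command is injected as a parameter, and the ping invocation shown in the comments (host and options) is an assumed example, not something from the original post:

```shell
# find_max_payload: print the largest payload for which the probe
# succeeds, stepping down from the theoretical maximum of 1472 bytes.
# $1 names a command or function that is called with the candidate size
# and must exit 0 when an unfragmented transmission succeeds.
find_max_payload() {
  probe="$1"
  size=1472
  while [ "$size" -gt 0 ]; do
    if "$probe" "$size"; then
      echo "$size"
      return 0
    fi
    size=$((size - 8))   # step down a little and retry
  done
  return 1
}

# Assumed real-world usage (one ping per candidate size):
#   wifi_probe() { ping -D -s "$1" -c 1 google.com >/dev/null 2>&1; }
#   find_max_payload wifi_probe
```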

Apply the corresponding MTU value in the Network settings panel, and hopefully it will reduce WiFi dropouts and improve WiFi network speed.

Thursday, April 21, 2016

Secure network daemons using TCP wrapper on Ubuntu

As many blog posts point out, a network daemon like sshd can be protected via configuration in two files: /etc/hosts.deny and /etc/hosts.allow.

What about httpd or nginx? They provide web services to network clients, yet TCP wrappers don't seem to restrict access to these daemons.

TCP wrappers are only effective when the network daemon depends on the libwrap.so library. To check whether a daemon relies on libwrap.so, the following command will do the job:

$ ldd /usr/sbin/sshd | grep libwrap
libwrap.so.0 => /lib/.../libwrap.so.0 (0xb55a5000) 

A daemon like sshd does rely on this TCP wrapper library, so it can be managed by changing the configuration in both /etc/hosts.deny and /etc/hosts.allow.

However, the ldd test shows no TCP wrapper dependency for daemons like httpd and nginx. This explains why TCP wrappers have no effect on these two daemons even with similar configurations.

Just keep in mind to check a daemon's library dependencies before trying to secure it with TCP wrappers.
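The check above can be looped over several daemons at once. A sketch, assuming typical Debian/Ubuntu binary paths (adjust them to your system):

```shell
# For each daemon binary, report whether it links against libwrap.so,
# i.e. whether TCP wrappers can restrict it at all.
for daemon in /usr/sbin/sshd /usr/sbin/nginx /usr/sbin/apache2; do
  [ -x "$daemon" ] || continue   # skip daemons not installed here
  if ldd "$daemon" 2>/dev/null | grep -q libwrap; then
    echo "$daemon: honours /etc/hosts.allow and /etc/hosts.deny"
  else
    echo "$daemon: NOT covered by TCP wrappers"
  fi
done
```

Daemons in the second category need a firewall or their own access-control directives instead.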




Friday, April 15, 2016

Nginx php-fpm security.limit_extension issue

Just found something weird while tweaking the Nginx PHP-FPM configuration. URLs served via HTTPS suddenly went offline, and the server log showed the following:

[error] 18292#0: *1 FastCGI sent in stderr: "Access to the script '/usr/share/nginx/html' has been denied (see security.limit_extensions)", client: x.x.x.x, server: localhost, request: "GET /index.php HTTP/1.1", host: "xxx.net"

Although people suggest turning off security.limit_extensions by setting it to an empty value, that raises a bit of a security concern for me.

It turns out one line in the config file /etc/nginx/sites-enabled/default causes the error:
#
#
fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;

Commenting it out is fine, and the .php page also loads properly if the line is changed to something else:

# Fix for missing params and blank php page display problems
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO          $fastcgi_path_info;
#fastcgi_param PATH_TRANSLATED    $document_root$fastcgi_path_info;
fastcgi_param PATH_TRANSLATED    $document_root$fastcgi_script_name;

Reload the Nginx server and things load up properly again!









Thursday, April 14, 2016

Adding Fail2Ban UFW Portscan Filter on Ubuntu

To further deter port scans by bad bots around the world, you can make use of a Fail2Ban filter.

Assuming Fail2Ban is already in place, edit the config file as below:
$ sudo nano /etc/fail2ban/jail.local

Add new section in jail.local:

[ufw-port-scan]

enabled   = true
ignoreip  = 127.0.0.1/8
port      = all
filter    = ufw-port-scan
banaction = ufw
logpath   = /var/log/ufw.log
maxretry  = 20

Create new filter as follows:
$ sudo nano /etc/fail2ban/filter.d/ufw-port-scan.conf

Add new lines in ufw-port-scan.conf:
[Definition]
failregex = .*\[UFW BLOCK\] IN=.* SRC=
ignoreregex =

Create the ban action config file as follows:


$ sudo nano /etc/fail2ban/action.d/ufw.conf

Add new lines in ufw.conf:


[Definition]
actionstart =
actionstop =
actioncheck =
actionban = ufw insert 1 deny from <ip> to any
actionunban = ufw delete deny from <ip> to any

Restart the service and it's good to go.
$ sudo service fail2ban restart

It's possible to run a test for the regex rule as well:
$ fail2ban-regex /var/log/ufw.log '.*\[UFW BLOCK\] IN=.* SRC='

Then you might get some results back like these:

Running tests
=============

Use   failregex line : .*\[UFW BLOCK\] IN=.* SRC=
Use         log file : /var/log/ufw.log.1


Results
=======

Failregex: 163 total
|-  #) [# of hits] regular expression
|   1) [163] .*\[UFW BLOCK\] IN=.* SRC=
`-

Ignoreregex: 0 total

Date template hits:
|- [# of hits] date format
|  [163] MONTH Day Hour:Minute:Second
`-

Lines: 163 lines, 0 ignored, 163 matched, 0 missed
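You can also sanity-check the pattern against a single line with grep. The log entry below is fabricated for illustration, but it follows the shape UFW writes to /var/log/ufw.log:

```shell
# A made-up UFW block entry (real ones carry many more fields):
sample='Apr 14 10:00:01 host kernel: [UFW BLOCK] IN=eth0 OUT= SRC=198.51.100.7 DST=203.0.113.5'

# The same pattern as the failregex above, used as an extended regex:
printf '%s\n' "$sample" | grep -Eq '\[UFW BLOCK\] IN=.* SRC=' && echo match   # prints match
```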


Monday, April 11, 2016

Nginx PHP-FPM display blank page for .PHP file on Ubuntu

Setting up a new Nginx instance isn't as much fun as most blog posts suggest when it displays a blank page on startup. This is annoying when you're setting things up from scratch.

First things first:

Check whether PHP-FPM is running:
$
$ ps -aux | grep php-fpm --color
root      1898  0.0  1.8 209928 18992 ?        Ss   11:48   0:00 php-fpm: master process (/etc/php/7.0/fpm/php-fpm.conf)                      
www-data  1900  0.0  0.6 210060  6660 ?        S    11:48   0:00 php-fpm: pool www                                                            
www-data  1901  0.0  0.5 210060  6088 ?        S    11:48   0:00 php-fpm: pool www   


You may also notice the running processes are owned by a different user, such as nginx or apache. Make sure the user/group settings in the php-fpm config file refer to the same user/group as set in the Nginx config file, e.g. www-data/www-data.

Default location of php-fpm 7.0 config file:

/etc/php/7.0/fpm/pool.d/www.conf

Default location of Nginx config file:
/etc/nginx/sites-available/default
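The cross-check can be scripted. This is a sketch under the assumption that the pool file uses `user = ...` lines and the main nginx.conf carries a `user ...;` directive; the helper name is made up:

```shell
# check_same_user: compare the php-fpm pool user with the Nginx worker user.
# $1 = php-fpm pool config (e.g. /etc/php/7.0/fpm/pool.d/www.conf)
# $2 = nginx config containing the 'user' directive (e.g. /etc/nginx/nginx.conf)
check_same_user() {
  fpm_user=$(grep -E '^user[[:space:]]*=' "$1" | head -1 | cut -d= -f2 | tr -d ' ')
  nginx_user=$(grep -E '^[[:space:]]*user[[:space:]]' "$2" | head -1 | awk '{print $2}' | tr -d ';')
  [ "$fpm_user" = "$nginx_user" ]
}
```

If the function returns false, align the two settings (usually www-data on Ubuntu) and restart both services.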

Back to the question of why Nginx displays a blank page. Let's take a look at the Nginx config file:

...
...
server {
        ...
        location ~ \.php$ {
                try_files $uri =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_cache  microcache;
                fastcgi_cache_key $scheme$host$request_uri$request_method;
                fastcgi_cache_valid 200 301 302 30s;
                fastcgi_cache_use_stale updating error timeout invalid_header http_500;
                fastcgi_pass_header Set-Cookie;
                fastcgi_pass_header Cookie;
                fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
                fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
                # Fix for missing params and blank php page display problems
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_TRANSLATED $document_root$fastcgi_script_name;

        }
}


The line "include fastcgi_params" refers to the file /etc/nginx/fastcgi_params, which initializes various fastcgi_param variables. Unfortunately, two important variables are missing there, leading to the mysterious blank page on startup.

They are SCRIPT_FILENAME and PATH_TRANSLATED. Add them back to the Nginx config file and PHP executes again!

Ref: https://www.narga.net/avoid-nginx-wordpress-blank-page/







Thursday, January 7, 2016

Windows 10 Wi-Fi get disconnected intermittently


I witnessed the rollout of Windows version 3 and was never a big fan; now it's up to version 10. For various reasons I had to start digging through Google searches for something useful. My friend's parents got into trouble using their Windows laptop at home, and they are not tech-savvy enough to adapt to the new order set by Microsoft in the new year. After an upgrade to Windows 10, voila! Nothing works, which is about what you'd expect. Well, fixing the problem is easier than teaching them Linux or OS X from the ground up.

After updating the Wi-Fi adapter driver provided by the manufacturer, the wireless connection seemed to work for a while. Then it happened: Internet Explorer became unresponsive and showed an error about the network connection. The same thing happened in Microsoft Edge. The wireless connection ends up broken and doesn't recover in time; in fact, it stays broken until you do something. Most of the time the Wi-Fi hotspot has to be connected manually again, which is a little beyond what the elderly can manage. They might just blame someone for not setting things up right.

To stay out of trouble, we need an all-year-round solution that automatically cures this bad symptom.

An active recovery process sounds good in this case. By setting up a scheduled task, it is possible to detect the disconnection and then launch a series of commands to recover the network connection behind the scenes. Grannies don't even want to see that something is fixing their trouble.

To create an event that's triggered when the network is disconnected, create a scheduled task using 10001 as the Event ID.

Launch Windows Task Scheduler from All Programs -> Accessories -> System Tools.

Click Action -> Create Task...
Give your task a name in the General tab, then switch to the Triggers tab and click New.

Log: Microsoft-Windows-NetworkProfile/Operational
Source: NetworkProfile
Event ID: 10001

You’ll also want to make sure that there aren’t any network connection conditions (since you won’t be connected to the Internet when this happens).

Add some actions in the Actions tab and then click OK to finish creating your task. Of course, it won't just pop up a message saying "It worked!" and resolve itself; we still need a batch script to fix things up.
Speaking of the Actions tab, it is quite possible to run a PowerShell script or batch file there to restore the broken connection.

Someone came up with a solution for recovering from network disconnection back when Windows 8 was released, so it's not a new problem after all. I don't use Windows 8 myself, so I never noticed it.

Possible batch script with DOS commands would be like this: 

C:\>
C:\>netsh interface set interface name="Wi-Fi" admin=disabled
C:\>netsh interface set interface name="Wi-Fi" admin=enabled
C:\>ipconfig /release
C:\>ipconfig /renew
C:\>arp -d *
C:\>nbtstat -R
C:\>nbtstat -RR
C:\>ipconfig /flushdns
C:\>ipconfig /registerdns

Change the actual interface name to suit your needs. It might be "Wi-Fi", "Wired" or something else; take a look at your network adapter settings to see what needs to be changed.

Name it "fixmywifi.bat" or "givemebackmynetwork.bat". Just add the script file in the Actions tab, and hopefully it will run in the background and try to recover the network connection whenever a disconnection event is fired.



Embedded image in email via PHP Mail_mime

Ever tried to find the best way to include your favourite logo image in an email message, in the hope that your recipients will actually see it?

It was quite confusing why an image would show up in one email client but not the others. One particular challenge is displaying an image correctly for the MS Outlook client.

After a reading on this blog, there are basically at least three ways to do so. Of course, we will need to find a balance between the compatibility among various types of email clients and the overall size of email message to be sent.

CID embedded images (a.k.a. inline images) are the old-school way to include images or graphics in an HTML-formatted message. They increase the size of each email you send out, but this is by far the most compatible way to have the image displayed across desktop email clients and webmail services. Unfortunately, the trade-off is inconsistent rendering behaviour between email clients, and sometimes it turns out ugly.

Example as below: