Thursday, December 1, 2016

Shrink the size of Docker.qcow2 to free valuable diskspace

Since moving my development work to Docker for Mac, machine setup has become quicker and easier. There is now a stable release of Docker for Mac, which is great! However, that convenience comes at the price of file-size inflation on the development machine. On a Mac, it's not uncommon to find ourselves running out of disk space, and after Docker for Mac has been in place for a couple of months, the size of its qcow2 file can come as a surprise:


$ ls -lh ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/
-rw-r--r--  1 user  staff   46GB Nov 1 14:47 Docker.qcow2
-rw-r--r--  1 user  staff    64K Nov 1 14:44 console-ring
-rw-r--r--  1 user  staff     5B Nov 1 14:44 hypervisor.pid
-rw-r--r--  1 user  staff     0B Nov 1 12:34 lock
drwxr-xr-x  4 user  staff   136B Nov 1 12:34 log/
-rw-r--r--  1 user  staff    17B Nov 1 14:44 mac.0
-rw-r--r--  1 user  staff    36B Nov 1 12:34 nic1.uuid
-rw-r--r--  1 user  staff     5B Nov 1 14:44 pid
-rw-r--r--  1 user  staff   141B Nov 1 14:44 syslog
lrwxr-xr-x  1 user  staff    12B Nov 1 14:44 tty@ -> /dev/ttys001


As you can see above, Docker.qcow2 has grown to 46GB, eating up almost half of the free space on the SSD. I have been regularly removing unused images and containers, but even so, Docker.qcow2 never actually stopped growing.

In theory, Docker.qcow2 holds the layers and containers in use by the Docker Engine. In practice, though, Docker does not ship with a cleanup mechanism for this file. Even when we pull new images for testing and later delete them, the data remains inside Docker.qcow2 and is never erased. That is why a huge file ends up sitting on the hard drive as time goes by.

You may be tempted to delete the Docker.qcow2 file outright, but that destroys everything you have built inside your containers, and after the Docker engine restarts the file tends to grow back towards its previous size as all of those used and unused layers and containers are pulled and rebuilt.

Using the qemu utilities, we can shrink the .qcow2 file effectively. Quit Docker for Mac first, so that nothing is writing to the image while it is being converted, then:
$ brew update && brew install qemu
$
$ cd ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/
$ mv Docker.qcow2 Docker.qcow2_backup
$ qemu-img convert -O qcow2 Docker.qcow2_backup Docker.qcow2
$
$
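
To confirm the conversion actually reclaimed space, a quick check against the backup (a minimal sketch; same directory as above):
$ qemu-img info Docker.qcow2
$ ls -lh Docker.qcow2 Docker.qcow2_backup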

Once we have confirmed that the Docker engine is up and running again, we can remove the backup file:
$ rm ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2_backup


Another way to reclaim used space within the .qcow2 file is to use docker-gc:
https://github.com/spotify/docker-gc

You can follow the instructions there to build a custom Docker image based on your current Docker version number, deploy it to your local Docker engine, and run the cleanup command like this:

$ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -v /etc:/etc spotify/docker-gc


Reminder: the docker-gc container requires access to the Docker socket in order to function, so we need to map it in when running this command. The /etc directory is also mapped so that the container can read any exclude files we have created.
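
For reference, the exclude files are plain-text lists of patterns, one per line: images matching /etc/docker-gc-exclude and containers matching /etc/docker-gc-exclude-containers are never reaped. A minimal sketch, with purely illustrative names:
$ cat /etc/docker-gc-exclude
spotify/cassandra:latest
redis

$ cat /etc/docker-gc-exclude-containers
mariadb-data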

Once we git clone the source of docker-gc, we can start modifying it to our needs.

To check out the source:
$ git clone https://github.com/spotify/docker-gc.git


To build the image and load it into the local Docker engine:
$ docker build -t spotify/docker-gc .


Combining docker-gc with the qemu-img command, we can effectively and safely reduce the size of the .qcow2 file.
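
As a rough sketch, both steps can be chained in one small script (the paths and image name are the ones used above; quit Docker for Mac before the qemu-img step runs):
#!/bin/bash
# 1. Reap old containers and images inside the Docker VM
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -v /etc:/etc spotify/docker-gc

# 2. With Docker for Mac stopped, compact the qcow2 file
DIR=~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux
mv "$DIR/Docker.qcow2" "$DIR/Docker.qcow2_backup"
qemu-img convert -O qcow2 "$DIR/Docker.qcow2_backup" "$DIR/Docker.qcow2"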

Here's the modified version of my Dockerfile:

FROM gliderlabs/alpine:3.2

ENV DOCKER_VERSION 1.12.3

# We get curl so that we can avoid a separate ADD to fetch the Docker binary, and then we'll remove it
RUN apk --update add bash curl \
    && cd /tmp/ \
    && curl -sSL -O https://get.docker.com/builds/Linux/x86_64/docker-${DOCKER_VERSION}.tgz \
    && tar zxf docker-${DOCKER_VERSION}.tgz \
    && mkdir -p /usr/local/bin/ \
    && mv ./docker /usr/local/bin/ \
    && chmod +x /usr/local/bin/docker/docker \
    && apk del curl \
    && rm -rf /tmp/* \
    && rm -rf /var/cache/apk/*

COPY ./docker-gc /docker-gc

VOLUME /var/lib/docker-gc

CMD ["/docker-gc"]



Here's the modified version of the docker-gc script:
#!/bin/bash

# Copyright (c) 2014 Spotify AB.
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

# This script attempts to garbage collect docker containers and images.
# Containers that exited more than an hour ago are removed.
# Images that have existed more than an hour and are not in use by any
# containers are removed.

# Note: Although docker normally prevents removal of images that are in use by
#       containers, we take extra care to not remove any image tags (e.g.
#       ubuntu:14.04, busybox, etc) that are used by containers. A naive
#       "docker rmi `docker images -q`" will leave images stripped of all tags,
#       forcing users to re-pull the repositories even though the images
#       themselves are still on disk.

# Note: State is stored in $STATE_DIR, defaulting to /var/lib/docker-gc

# The script can send log messages to syslog regarding which images and
# containers were removed. To enable logging to syslog, set LOG_TO_SYSLOG=1.
# When disabled, this script will instead log to standard out. When syslog is
# enabled, the syslog facility and logger can be configured with
# $SYSLOG_FACILITY and $SYSLOG_LEVEL respectively.

set -o nounset
set -o errexit

GRACE_PERIOD_SECONDS=${GRACE_PERIOD_SECONDS:=3600}
STATE_DIR=${STATE_DIR:=/var/lib/docker-gc}
FORCE_CONTAINER_REMOVAL=${FORCE_CONTAINER_REMOVAL:=0}
FORCE_IMAGE_REMOVAL=${FORCE_IMAGE_REMOVAL:=0}
#DOCKER=${DOCKER:=docker}
DOCKER='/usr/local/bin/docker/docker'
PID_DIR=${PID_DIR:=/var/run}
LOG_TO_SYSLOG=${LOG_TO_SYSLOG:=0}
SYSLOG_FACILITY=${SYSLOG_FACILITY:=user}
SYSLOG_LEVEL=${SYSLOG_LEVEL:=info}
SYSLOG_TAG=${SYSLOG_TAG:=docker-gc}
DRY_RUN=${DRY_RUN:=0}
EXCLUDE_DEAD=${EXCLUDE_DEAD:=0}

for pid in $(pidof -s docker-gc); do
    if [[ $pid != $$ ]]; then
        echo "[$(date)] : docker-gc : Process is already running with PID $pid"
        exit 1
    fi
done

trap "rm -f -- '$PID_DIR/dockergc'" EXIT

echo $$ > $PID_DIR/dockergc


EXCLUDE_FROM_GC=${EXCLUDE_FROM_GC:=/etc/docker-gc-exclude}
if [ ! -f "$EXCLUDE_FROM_GC" ]
then
  EXCLUDE_FROM_GC=/dev/null
fi

EXCLUDE_CONTAINERS_FROM_GC=${EXCLUDE_CONTAINERS_FROM_GC:=/etc/docker-gc-exclude-containers}
if [ ! -f "$EXCLUDE_CONTAINERS_FROM_GC" ]
then
  EXCLUDE_CONTAINERS_FROM_GC=/dev/null
fi

EXCLUDE_IDS_FILE="exclude_ids"
EXCLUDE_CONTAINER_IDS_FILE="exclude_container_ids"

function date_parse {
  if date --utc >/dev/null 2>&1; then
    # GNU/date
    echo $(date -u --date "${1}" "+%s")
  else
    # BSD/date
    echo $(date -j -u -f "%F %T" "${1}" "+%s")
  fi
}

# Elapsed time since a docker timestamp, in seconds
function elapsed_time() {
    # Docker 1.5.0 datetime format is 2015-07-03T02:39:00.390284991
    # Docker 1.7.0 datetime format is 2015-07-03 02:39:00.390284991 +0000 UTC
    utcnow=$(date -u "+%s")
    replace_q="${1#\"}"
    without_ms="${replace_q:0:19}"
    replace_t="${without_ms/T/ }"
    epoch=$(date_parse "${replace_t}")
    echo $(($utcnow - $epoch))
}

function compute_exclude_ids() {
    # Find images that match patterns in the EXCLUDE_FROM_GC file and put their
    # id prefixes into $EXCLUDE_IDS_FILE, prefixed with ^

    PROCESSED_EXCLUDES="processed_excludes.tmp"
    # Take each line and put a space at the beginning and end, so when we
    # grep for them below, it will effectively be: "match either repo:tag
    # or imageid".  Also delete blank lines or lines that only contain
    # whitespace
    sed 's/^\(.*\)$/ \1 /' $EXCLUDE_FROM_GC | sed '/^ *$/d' > $PROCESSED_EXCLUDES
    # The following looks a bit of a mess, but here's what it does:
    # 1. Get images
    # 2. Skip header line
    # 3. Turn columnar display of 'REPO TAG IMAGEID ....' to 'REPO:TAG IMAGEID'
    # 4. find lines that contain things mentioned in PROCESSED_EXCLUDES
    # 5. Grab the image id from the line
    # 6. Prepend ^ to the beginning of each line

    # What this does is make grep patterns to match image ids mentioned by
    # either repo:tag or image id for later greppage
    $DOCKER images \
        | tail -n+2 \
        | sed 's/^\([^ ]*\) *\([^ ]*\) *\([^ ]*\).*/ \1:\2 \3 /' \
        | grep -f $PROCESSED_EXCLUDES 2>/dev/null \
        | cut -d' ' -f3 \
        | sed 's/^/^(sha256:)?/' > $EXCLUDE_IDS_FILE
}

function compute_exclude_container_ids() {
    # Find containers matching to patterns listed in EXCLUDE_CONTAINERS_FROM_GC file
    # Implode their values with a \| separator on a single line
    PROCESSED_EXCLUDES=`cat $EXCLUDE_CONTAINERS_FROM_GC \
        | xargs \
        | sed -e 's/ /\|/g'`
    # The empty string would match everything
    if [ "$PROCESSED_EXCLUDES" = "" ]; then
        touch $EXCLUDE_CONTAINER_IDS_FILE
        return
    fi
    # Find all docker images
    # Filter out with matching names
    # and put them to $EXCLUDE_CONTAINER_IDS_FILE
    $DOCKER ps -a \
        | grep -E "$PROCESSED_EXCLUDES" \
        | awk '{ print $1 }' \
        | tr -s " " "\012" \
        | sort -u > $EXCLUDE_CONTAINER_IDS_FILE
}

function log() {
    msg=$1
    if [[ $LOG_TO_SYSLOG -gt 0 ]]; then
        logger -i -t "$SYSLOG_TAG" -p "$SYSLOG_FACILITY.$SYSLOG_LEVEL" "$msg"
    else
        echo "[$(date +'%Y-%m-%dT%H:%M:%S')] [INFO] : $msg"
    fi
}

function container_log() {
    prefix=$1
    filename=$2

    while IFS='' read -r containerid
    do
        log "$prefix $containerid $(${DOCKER} inspect -f {{.Name}} $containerid)"
    done < "$filename"
}

function image_log() {
    prefix=$1
    filename=$2

    while IFS='' read -r imageid
    do
        log "$prefix $imageid $(${DOCKER} inspect -f {{.RepoTags}} $imageid)"
    done < "$filename"
}

# Change into the state directory (and create it if it doesn't exist)
if [ ! -d "$STATE_DIR" ]
then
  mkdir -p $STATE_DIR
fi
cd "$STATE_DIR"

# Verify that docker is reachable
$DOCKER version 1>/dev/null

# List all currently existing containers
$DOCKER ps -a -q --no-trunc | sort | uniq > containers.all

# List running containers
$DOCKER ps -q --no-trunc | sort | uniq > containers.running
container_log "Container running" containers.running

# compute ids of container images to exclude from GC
compute_exclude_ids

# compute ids of containers to exclude from GC
compute_exclude_container_ids

# List containers that are not running
comm -23 containers.all containers.running > containers.exited

if [[ $EXCLUDE_DEAD -gt 0 ]]; then
    echo "Excluding dead containers"
    # List dead containers
    $DOCKER ps -q -a -f status=dead | sort | uniq > containers.dead    
    comm -23 containers.exited containers.dead > containers.exited.tmp
    cat containers.exited.tmp > containers.exited
fi

container_log "Container not running" containers.exited

# Find exited containers that finished at least GRACE_PERIOD_SECONDS ago
> containers.reap.tmp
cat containers.exited | while read line
do
    EXITED=$(${DOCKER} inspect -f "{{json .State.FinishedAt}}" ${line})
    ELAPSED=$(elapsed_time $EXITED)
    if [[ $ELAPSED -gt $GRACE_PERIOD_SECONDS ]]; then
        echo $line >> containers.reap.tmp
    fi
done

# List containers that we will remove and exclude ids.
cat containers.reap.tmp | sort | uniq | grep -v -f $EXCLUDE_CONTAINER_IDS_FILE > containers.reap || true

# List containers that we will keep.
comm -23 containers.all containers.reap > containers.keep

# List images used by containers that we keep.
cat containers.keep |
xargs -n 1 $DOCKER inspect -f '{{.Image}}' 2>/dev/null |
sort | uniq > images.used

# List images to reap; images that existed last run and are not in use.
$DOCKER images -q --no-trunc | sort | uniq > images.all

# Find images that are created at least GRACE_PERIOD_SECONDS ago
> images.reap.tmp
cat images.all | while read line
do
    CREATED=$(${DOCKER} inspect -f "{{.Created}}" ${line})
    ELAPSED=$(elapsed_time $CREATED)
    if [[ $ELAPSED -gt $GRACE_PERIOD_SECONDS ]]; then
        echo $line >> images.reap.tmp
    fi
done
comm -23 images.reap.tmp images.used | grep -E -v -f $EXCLUDE_IDS_FILE > images.reap || true

# Use -f flag on docker rm command; forces removal of images that are in Dead
# status or give errors when removing.
FORCE_CONTAINER_FLAG=""
if [[ $FORCE_CONTAINER_REMOVAL -gt 0 ]]; then
    FORCE_CONTAINER_FLAG="-f"
fi
# Reap containers.
if [[ $DRY_RUN -gt 0 ]]; then
    container_log "The following container would have been removed" containers.reap
else
    container_log "Removing containers" containers.reap
    xargs -n 1 $DOCKER rm $FORCE_CONTAINER_FLAG --volumes=true < containers.reap &>/dev/null || true
fi

# Use -f flag on docker rmi command; forces removal of images that have multiple tags
FORCE_IMAGE_FLAG=""
if [[ $FORCE_IMAGE_REMOVAL -gt 0 ]]; then
    FORCE_IMAGE_FLAG="-f"
fi

# Reap images.
if [[ $DRY_RUN -gt 0 ]]; then
    image_log "The following image would have been removed" images.reap
else
    image_log "Removing image" images.reap
    xargs -n 1 $DOCKER rmi $FORCE_IMAGE_FLAG < images.reap &>/dev/null || true
fi
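
The variables declared at the top of the script (GRACE_PERIOD_SECONDS, DRY_RUN, FORCE_IMAGE_REMOVAL and so on) can be overridden at run time with docker's -e flag. A hedged example of a dry run with a 24-hour grace period:
$ docker run --rm -e DRY_RUN=1 -e GRACE_PERIOD_SECONDS=86400 \
    -v /var/run/docker.sock:/var/run/docker.sock -v /etc:/etc spotify/docker-gc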








Thursday, November 17, 2016

A fix to broken SSH key authenticated login after Mac Sierra Upgrade

On macOS, every major upgrade seems to come with aftershocks that cause short-term migraines. The first thing to do is usually a web search, in the hope that someone has already figured out how to fix the post-upgrade problems. It reminds me that it is safer to wait a few months after every major OS upgrade is released. The same holds true for any other OS.

Problem encountered:
After upgrading to macOS Sierra, I was unable to log in to my Linux box from my MacBook via SSH, which was supposed to use key authentication without typing a password.

Instead, I was asked for the passphrase of my key file, e.g. ~/.ssh/id_rsa. To make things worse, I found I had forgotten my passphrase; I had not typed it for a while, since I set up SSH key authentication on my MacBook precisely for convenience.

The solution:
Some people suggest regenerating a new key on the local machine to resolve this. First things first: for that you would need to re-enable password authentication on the SSH server.

Another Mac user pointed out that the problem likely originates from the ssh-agent on macOS Sierra, which ships OpenSSH 7.2 as of writing. A possible explanation is that the ssh-agent no longer automatically loads passphrases from the keychain during startup.

To verify this, try the command:
$ ssh-add -l
The agent has no identities.

Clearly, there is no identity information stored in ssh-agent.

Let's store passphrase in your keychain again:
$ ssh-add -K <keyfile>

where <keyfile> is a path such as ~/.ssh/id_rsa, or whatever suits your setup

It will prompt for the passphrase and then save it to the keychain. You might, however, need to remind yourself of the passphrase for that particular key file. If you have saved it in Keychain Access before, you can retrieve it under Keychains: login -> Category: Passwords in the Keychain Access app.


You should be able to log in again in the good old way of SSH key authentication, but it may not survive the next reboot on macOS Sierra. An Apple engineer states that this is expected; they have simply re-aligned their behaviour with mainstream OpenSSH in this area. In other words, passphrases stored in the keychain for SSH WILL NOT survive the next reboot since macOS Sierra.

You need to run the following command in Terminal every time you log back in to macOS Sierra:
$ ssh-add -A


It is an immediate fix, but it doesn't last long.

Taking it one step further, you can add a bash script on your laptop that runs SSH with that particular identity file:
#!/bin/bash
echo "Adding identities to SSH agent..."
ssh-add -A 2>/dev/null
echo "Logging in remote SSH server with specific identity file and port number..."
ssh -i <keyfile> -p <port> username@<remote_ssh_server_name_or_ip>
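
Alternatively, as a minimal sketch, the same ssh-add call can simply be appended to ~/.bash_profile so the stored identities are reloaded whenever a new login shell starts:
# in ~/.bash_profile
ssh-add -A 2>/dev/null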


Ultimately, you can log in to your SSH box, re-enable password authentication on the SSH server, regenerate a new RSA key on your laptop and then upload it to the SSH box as a permanent change. Key authentication will then work with the newly generated identity file on macOS Sierra.

For details, check these out:
http://askubuntu.com/questions/46930/how-can-i-set-up-password-less-ssh-login
http://manpages.ubuntu.com/manpages/trusty/man1/ssh-copy-id.1.html
https://openradar.appspot.com/27348363








Wednesday, October 26, 2016

pyenv + pyenv-virtualenv + OpenCV 3 on Mac OS X El Capitan

Python is a good tool for quick project startup while it's still a little too early to dig into the details of C++ or Objective-C code.

The trouble is that the built-in Python version may not meet our needs. As you know, things start to get messy when several different versions of the Python interpreter are installed on the same OS. A few weeks later, you may find that things no longer compile properly due to all sorts of PYTHONPATH issues.

So, here comes the management tool - pyenv

Simply speaking, pyenv lets you easily switch between multiple versions of Python. On OS X, pyenv installs those versions into its own repository location while the OS's built-in Python stays intact. This minimises conflicts between your project and other existing applications that rely on the Mac's built-in Python.

pyenv also comes with a plugin to deal with Python's virtual environment - pyenv-virtualenv 

pyenv-virtualenv provides features to manage virtualenvs and conda environments for Python.

With both of these, you can pin a particular Python version per application: the virtual environment triggers those changes for each individual project folder.

To set things up, please read through the README pages for pyenv and pyenv-virtualenv.

To keep things simple, it is suggested that we use Homebrew to install both packages.
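
For example, a minimal Homebrew installation (formula names as published by Homebrew):
$ brew update
$ brew install pyenv pyenv-virtualenv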

In the file ~/.bash_profile, a typical setup is recommended as follows:

#
#
# Multi-Python switcher Pyenv
# To use Homebrew's directories rather than ~/.pyenv add to your profile:
export PYENV_ROOT=/usr/local/var/pyenv
export PATH="$PYENV_ROOT/bin:$PATH"
# Enable shims and autocompletion
if which pyenv > /dev/null; then eval "$(pyenv init -)"; fi

# A pyenv plugin to manage virtualenv (a.k.a. python-virtualenv) ref: https://github.com/yyuu/pyenv-virtualenv
# Automatically activate/deactivate virtualenvs on entering/leaving directories 
# which contain a .python-version file that lists a valid virtual environment
eval "$(pyenv virtualenv-init -)"
export PYENV_VIRTUALENV_DISABLE_PROMPT=1

# pip should only run if there is a virtualenv currently activated
export PIP_REQUIRE_VIRTUALENV=true
#
#
#
#


OpenCV is an excellent library for image/video processing. To avoid errors during the installation, it's recommended to install it via Homebrew:


$
# First time installation
$ brew install opencv3 --HEAD --with-python3 --with-ffmpeg --with-tbb --with-contrib --with-opengl --with-qt5
# Upgrade alternative
$ brew reinstall opencv3 --HEAD --with-python3 --with-ffmpeg --with-tbb --with-contrib --with-opengl --with-qt5
$

All of this happens at the global system level.

Now, use the 'pyenv virtualenv' command to set up a virtual environment for your project:
# Get into empty target project folder
$ mkdir ~/target_project
$ cd ~/target_project
$
# List available Python versions
$ pyenv versions
  system
  2.7.10
  2.7.12
  3.5.2
# Create new virtual environment with specific version
# Also name it 'my-env-3.5.2' or any other name
$ pyenv virtualenv 3.5.2 my-env-3.5.2
# Now activate new virtual environment within the target project folder
$ pyenv activate my-env-3.5.2
$
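
Optionally, 'pyenv local' writes a .python-version file into the project folder so the environment is activated automatically whenever you cd into it (a small sketch, relying on the pyenv-virtualenv auto-activation set up in ~/.bash_profile above):
# Assuming in ~/target_project folder
$ pyenv local my-env-3.5.2
$ cat .python-version
my-env-3.5.2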

Assuming OpenCV 3 and Python 3.5.2 will be used within the new virtual environment, it's time to link the OpenCV library folder into the target Python environment's site-packages folder so that it can be found on import.
# Assuming in ~/target_project folder
$ echo `brew --prefix opencv3`/lib/python3.5/site-packages >> $PYENV_ROOT/versions/my-env-3.5.2/lib/python3.5/site-packages/opencv3.pth
$



For the Python virtual environment, we can install the necessary packages with the following command within the target project folder:

# Assuming in ~/target_project folder
$ 
$ pip install numpy
$

To test whether OpenCV 3 is supported within the virtual environment, let's open up a Python console and import the cv2 library:

# Assuming in ~/target_project folder
$ python
Python 3.5.2 (default, Jul 19 2016, 15:25:16) 
[GCC 4.2.1 Compatible Apple LLVM 7.3.0 (clang-703.0.31)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>import cv2
>>>

If no error is generated, we can be confident that OpenCV 3 library support is in place for the current Python environment.




Friday, August 5, 2016

Add Google Calendar to Lightning in Thunderbird

One day, I opened the Thunderbird app after an automatic upgrade and was surprised to find that Google Calendar was no longer linked in the Lightning interface. This happens from time to time; I assumed Mozilla would have fixed it by now, but that doesn't seem to be the case this time.

Searching around the forums, I came across several solutions, but none of them worked. Whether the problem lies in the add-on itself or Google has changed the API again, I find it hard to remember exactly how I resolved this last time. So this time I'm taking a note of what needs to be done, in case the same thing happens again after a future upgrade.

Just a reminder of what happened back on September 16th, 2013:

Google is changing the Location URL of their CalDAV Calendars

Google has decided to change the authentication mechanism for their CalDAV calendars to OAuth 2.0, which required some changes in Lightning to accommodate.

Due to these changes, the URL to access the calendar has also changed. The old endpoint will stop working after September 16th (today!). This affects only Google calendars using CalDAV protocol.

According to Mozilla's blog, iCal users with read-only access are not affected.

With the iCal method in Lightning, you may receive an error message like MODIFICATION_FAILED when adding a new Google Calendar event within the Lightning calendar interface, i.e., the calendar becomes write-protected.

What if we really want to view and EDIT our Google Calendar in Lightning? Let's try another protocol: CalDAV, which provides read and write access to the calendar.

You may want to choose this route especially if you cannot get the Google Calendar-specific option working under the Provider for Google Calendar add-on. I found that option less reliable than CalDAV, and it might stop working from time to time.

Steps as follows:


  • Download Thunderbird and install Lightning add-on
  • Open the new calendar dialog (File → New → Calendar)
  • Add a new remote calendar (On the Network → CalDAV)
  • As a location enter the following:


https://apidata.googleusercontent.com/caldav/v2/*calendar-id/events


*calendar-id is your email address, or whatever other ID you have set for your Google Calendar

To enable this, you need to enter your Google account login details (supposedly once), along with the 6-digit two-factor authentication code if necessary (depending on whether you have enabled two-factor authentication on your Google account).

This will instantly enable two-way (read and write) communication with your Google Calendar.

For Apple iCal users, they might need to use the following URL:

https://apidata.googleusercontent.com/caldav/v2/*calendar-id/user









Tuesday, August 2, 2016

Completely UNINSTALL corrupted instance of Visual Studio 2010

Visual Studio 2010 is a comprehensive programming suite for app development on the Windows platform. It can be good or evil when it comes to such an elephantine build, with additional packages and tools all installed at once. It takes a lot of space and demands attention during uninstallation: any error generated along the way can actually break the whole thing, and you may have to repair it via the installation wizard before you can actually uninstall it.



If you have landed on this article, you have probably left the installation in a broken state and desperately searched the Internet for a resolution. The thing is, you just cannot uninstall VS2010 the normal way. Microsoft does provide a better uninstallation experience in VS2012 and newer, but we're still stuck on the six-year-old VS build.

Microsoft has released a tool to fix all kinds of registry errors or blocking issues during uninstallation of any program, i.e., any program that appears in the uninstall list.

URL:
https://support.microsoft.com/en-au/help/17588/fix-problems-that-block-programs-from-being-installed-or-removed

It's a troubleshooting tool that guides you to the right option, fixes those strange errors for you, and uninstalls the program you specify. I tried this tool and ultimately removed my broken VS build on a Windows 2008 server. It claims to support Windows versions 7 through 10.

You may need to do a further cleanup by uninstalling the remaining Visual Studio components via Control Panel | Programs and Features.

Finally, it frees up a huge amount of disk space on the server.





Tuesday, July 26, 2016

REST API for Pokemon

A new global trend has just started with the mobile app Pokémon GO, which lets people hunt their favourite monsters with augmented-reality features and encourages app users around the world to walk more for better health.

Looking at the activity maps, I was stunned but also excited. This seems to be the next wave in the Big Data world. How can we miss it?

Pokeapi - the RESTful Pokémon API - has released its v2 beta. For whatever reason, it is limited to 300 requests per resource per IP address.

Check it out and see what killer app you can build on top of it.

URL:
https://pokeapi.co
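
As a quick, hedged example (assuming the standard /api/v2/pokemon/<name> resource documented on the site), fetching a single Pokémon with curl:
$ curl -s https://pokeapi.co/api/v2/pokemon/pikachu/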





Friday, July 22, 2016

Eclipse IDE autocomplete for JavaScript and PHP

To enable the autocomplete feature for languages like JavaScript and PHP in your Eclipse project:

Locate and open the .project file in the root folder of your project within the Eclipse IDE

Add the following two nature IDs (see the snippet further down for where they go):

org.eclipse.wst.jsdt.core.jsNature
org.eclipse.php.core.PHPNature
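
For reference, a minimal sketch of where those entries sit inside the .project file's XML (your file will already contain other builders and natures):
<projectDescription>
  ...
  <natures>
    <nature>org.eclipse.wst.jsdt.core.jsNature</nature>
    <nature>org.eclipse.php.core.PHPNature</nature>
  </natures>
</projectDescription>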

Restart the Eclipse IDE and you should see autocomplete suggestions while typing code in *.js or *.php files.

Wednesday, July 6, 2016

Upgrade ESXi from 5.5 Update 3 to 6.0 Update 2

The good thing about ESXi 5.5 is that the upgrade can be done via an SSH terminal. This eliminates CD-ROM burning and lets the entire upgrade be carried out remotely from my laptop. As mentioned before, a version upgrade in the ESXi infrastructure is not always a straightforward process.

I was looking for the HP custom version of the offline bundle (namely "HPE Custom Image for VMware ESXi 6.0 U2 Offline Bundle"), which supposedly gathers all the drivers needed for installing ESXi onto HP machines. You may need to download a different custom or official offline bundle for other supported hardware.

URL:
https://my.vmware.com/group/vmware/details?downloadGroup=OEM-ESXI60U2-HPE&productId=491

After uploading the zip file onto the ESXi box's datastore via the vSphere client, I ran the following command and just got this error message on the ESXi console:

~ #
~ # esxcli software vib update -d /vmfs/volumes/datastore1/VMware-ESXi-6.0.0-2494585-depot.zip
[DependencyError]
 VIB VMware_bootbank_esx-base_6.0.0-2.34.3620759 requires vsan >= 6.0.0-2.34, but the requirement cannot be satisfied within the ImageProfile.
 VIB VMware_bootbank_esx-base_6.0.0-2.34.3620759 requires vsan << 6.0.0-2.35, but the requirement cannot be satisfied within the ImageProfile.
 Please refer to the log file for more details.
~ #
~ #

Honestly, ESXi 5.5 doesn't quite like the new VIB module named vsan; the current image profile simply doesn't contain it, which is why the plain vib update path failed. So let's go through the upgrade with the profile update option instead.

To list the image profile name available in the offline bundle, use the following command:

~ # esxcli software sources profile list -d /vmfs/volumes/datastore1/VMware-ESXi-6.0.0-2494585-depot.zip
Name                                 Vendor                      Acceptance Level
-----------------------------------  --------------------------  ----------------
HPE-ESXi-6.0.0-Update2-600.9.5.0.48  Hewlett Packard Enterprise  PartnerSupported

The result differs for each bundle and ESXi box, so take note of the actual image profile name in your own output. Here, we have a profile named "HPE-ESXi-6.0.0-Update2-600.9.5.0.48".

Now, it's time to proceed with the upgrade:

~ #
~ # esxcli software profile update -p HPE-ESXi-6.0.0-Update2-600.9.5.0.48 -d /vmfs/volumes/datastore1/VMware-ESXi-6.0.0-2494585-depot.zip
.
.

And then I got the following:

Update Result
   Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
   Reboot Required: true
   VIBs Installed: BRCM_bootbank_net-tg3_3.137l.v60.1-1OEM.600.0.0.2494585, EMU_bootbank_elxnet_10.7.110.13-1OEM.600.0.0.2768847, EMU_bootbank_ima-be2iscsi_10.7.110.10-1OEM.600.0.0.2159203, EMU_bootbank_lpfc_10.7.110.4-1OEM.600.0.0.2768847, EMU_bootbank_scsi-be2iscsi_10.7.110.10-1OEM.600.0.0.2159203, HPE_bootbank_amsHelper_600.10.4.0-22.2494585, HPE_bootbank_conrep_6.0.0.01-01.00.7.2494585, HPE_bootbank_hpbootcfg_6.0.0.02-02.00.6.2494585, HPE_bootbank_hpe-build_600.9.5.0.48-2494585, HPE_bootbank_hpe-esxi-fc-enablement_600.2.5.20-2494585, HPE_bootbank_hpe-ilo_600.10.0.0.26-1OEM.600.0.0.2494585, HPE_bootbank_hpe-smx-provider_600.03.10.00.13-2768847, HPE_bootbank_hponcfg_6.0.0.04-00.14.4.2494585, HPE_bootbank_hpssacli_2.40.13.0-6.0.0.1854445, HPE_bootbank_hptestevent_6.0.0.01-01.00.5.2494585, Hewlett-Packard_bootbank_char-hpcru_6.0.6.14-1OEM.600.0.0.2159203, Hewlett-Packard_bootbank_hpnmi_600.2.3.14-2159203, Hewlett-Packard_bootbank_scsi-hpdsa_5.5.0.48-1OEM.550.0.0.1331820, Hewlett-Packard_bootbank_scsi-hpsa_6.0.0.116-1OEM.600.0.0.2494585, Intel_bootbank_intelcim-provider_0.5-1.6, Intel_bootbank_net-i40e_1.3.45-1OEM.550.0.0.1331820, Intel_bootbank_net-igb_5.3.1-1OEM.550.0.0.1331820, Intel_bootbank_net-ixgbe_4.1.1.1-1OEM.550.0.0.1331820, MEL_bootbank_nmlx4-core_3.1.0.0-1OEM.600.0.0.2348722, MEL_bootbank_nmlx4-en_3.1.0.0-1OEM.600.0.0.2348722, MEL_bootbank_nmst_4.0.2.1-1OEM.600.0.0.2295424, QLogic_bootbank_misc-cnic-register_1.712.70.v60.1-1OEM.600.0.0.2494585, QLogic_bootbank_net-bnx2_2.2.5k.v60.1-1OEM.600.0.0.2494585, QLogic_bootbank_net-bnx2x_2.712.70.v60.3-1OEM.600.0.0.2494585, QLogic_bootbank_net-cnic_2.712.70.v60.3-1OEM.600.0.0.2494585, QLogic_bootbank_net-nx-nic_6.0.643-1OEM.600.0.0.2494585, QLogic_bootbank_net-qlcnic_6.1.191-1OEM.600.0.0.2494585, QLogic_bootbank_qlnativefc_2.1.30.0-1OEM.600.0.0.2768847, QLogic_bootbank_scsi-bnx2fc_1.712.70.v60.5-1OEM.600.0.0.2494585, QLogic_bootbank_scsi-bnx2i_2.712.70.v60.2-1OEM.600.0.0.2494585, VMWARE_bootbank_mtip32xx-native_3.8.5-1vmw.600.0.0.2494585, VMware_bootbank_ata-pata-amd_0.3.10-3vmw.600.0.0.2494585, VMware_bootbank_ata-pata-atiixp_0.4.6-4vmw.600.0.0.2494585, VMware_bootbank_ata-pata-cmd64x_0.2.5-3vmw.600.0.0.2494585, VMware_bootbank_ata-pata-hpt3x2n_0.3.4-3vmw.600.0.0.2494585, VMware_bootbank_ata-pata-pdc2027x_1.0-3vmw.600.0.0.2494585, VMware_bootbank_ata-pata-serverworks_0.4.3-3vmw.600.0.0.2494585, VMware_bootbank_ata-pata-sil680_0.4.8-3vmw.600.0.0.2494585, VMware_bootbank_ata-pata-via_0.3.3-2vmw.600.0.0.2494585, VMware_bootbank_block-cciss_3.6.14-10vmw.600.0.0.2494585, VMware_bootbank_cpu-microcode_6.0.0-0.0.2494585, VMware_bootbank_ehci-ehci-hcd_1.0-3vmw.600.2.34.3620759, VMware_bootbank_emulex-esx-elxnetcli_10.2.309.6v-0.0.2494585, VMware_bootbank_esx-base_6.0.0-2.34.3620759, VMware_bootbank_esx-dvfilter-generic-fastpath_6.0.0-0.0.2494585, VMware_bootbank_esx-tboot_6.0.0-2.34.3620759, VMware_bootbank_esx-ui_1.0.0-3617585, VMware_bootbank_esx-xserver_6.0.0-0.0.2494585, VMware_bootbank_ipmi-ipmi-devintf_39.1-4vmw.600.0.0.2494585, VMware_bootbank_ipmi-ipmi-msghandler_39.1-4vmw.600.0.0.2494585, VMware_bootbank_ipmi-ipmi-si-drv_39.1-4vmw.600.0.0.2494585, VMware_bootbank_lsi-mr3_6.605.08.00-7vmw.600.1.17.3029758, VMware_bootbank_lsi-msgpt3_06.255.12.00-8vmw.600.1.17.3029758, VMware_bootbank_lsu-hp-hpsa-plugin_1.0.0-1vmw.600.0.0.2494585, VMware_bootbank_lsu-lsi-lsi-mr3-plugin_1.0.0-2vmw.600.0.11.2809209, VMware_bootbank_lsu-lsi-lsi-msgpt3-plugin_1.0.0-1vmw.600.0.0.2494585, VMware_bootbank_lsu-lsi-megaraid-sas-plugin_1.0.0-2vmw.600.0.11.2809209, 
VMware_bootbank_lsu-lsi-mpt2sas-plugin_1.0.0-4vmw.600.1.17.3029758, VMware_bootbank_lsu-lsi-mptsas-plugin_1.0.0-1vmw.600.0.0.2494585, VMware_bootbank_misc-drivers_6.0.0-2.34.3620759, VMware_bootbank_net-e1000_8.0.3.1-5vmw.600.0.0.2494585, VMware_bootbank_net-enic_2.1.2.38-2vmw.600.0.0.2494585, VMware_bootbank_net-forcedeth_0.61-2vmw.600.0.0.2494585, VMware_bootbank_net-vmxnet3_1.1.3.0-3vmw.600.2.34.3620759, VMware_bootbank_nmlx4-rdma_3.0.0.0-1vmw.600.0.0.2494585, VMware_bootbank_nvme_1.2.0.27-4vmw.550.0.0.1331820, VMware_bootbank_ohci-usb-ohci_1.0-3vmw.600.0.0.2494585, VMware_bootbank_rste_2.0.2.0088-4vmw.600.0.0.2494585, VMware_bootbank_sata-ahci_3.0-22vmw.600.2.34.3620759, VMware_bootbank_sata-ata-piix_2.12-10vmw.600.0.0.2494585, VMware_bootbank_sata-sata-nv_3.5-4vmw.600.0.0.2494585, VMware_bootbank_sata-sata-promise_2.12-3vmw.600.0.0.2494585, VMware_bootbank_sata-sata-sil24_1.1-1vmw.600.0.0.2494585, VMware_bootbank_sata-sata-sil_2.3-4vmw.600.0.0.2494585, VMware_bootbank_sata-sata-svw_2.3-3vmw.600.0.0.2494585, VMware_bootbank_scsi-aacraid_1.1.5.1-9vmw.600.0.0.2494585, VMware_bootbank_scsi-adp94xx_1.0.8.12-6vmw.600.0.0.2494585, VMware_bootbank_scsi-aic79xx_3.1-5vmw.600.0.0.2494585, VMware_bootbank_scsi-fnic_1.5.0.45-3vmw.600.0.0.2494585, VMware_bootbank_scsi-ips_7.12.05-4vmw.600.0.0.2494585, VMware_bootbank_scsi-megaraid-mbox_2.20.5.1-6vmw.600.0.0.2494585, VMware_bootbank_scsi-megaraid-sas_6.603.55.00-2vmw.600.0.0.2494585, VMware_bootbank_scsi-megaraid2_2.00.4-9vmw.600.0.0.2494585, VMware_bootbank_scsi-mptsas_4.23.01.00-9vmw.600.0.0.2494585, VMware_bootbank_scsi-mptspi_4.23.01.00-9vmw.600.0.0.2494585, VMware_bootbank_uhci-usb-uhci_1.0-3vmw.600.0.0.2494585, VMware_bootbank_vsan_6.0.0-2.34.3563498, VMware_bootbank_vsanhealth_6.0.0-3000000.3.0.2.34.3544323, VMware_bootbank_xhci-xhci_1.0-3vmw.600.2.34.3620759, VMware_locker_tools-light_6.0.0-2.34.3620759
   VIBs Removed: Broadcom_bootbank_net-tg3_3.137l.v55.1-1OEM.550.0.0.1331820, Emulex_bootbank_elxnet_10.5.121.7-1OEM.550.0.0.1331820, Emulex_bootbank_ima-be2iscsi_10.5.65.7-1OEM.550.0.0.1331820, Emulex_bootbank_lpfc_10.5.39.0-1OEM.550.0.0.1331820, Emulex_bootbank_scsi-be2iscsi_10.5.65.7-1OEM.550.0.0.1331820, Hewlett-Packard_bootbank_char-hpcru_5.5.6.6-1OEM.550.0.0.1198610, Hewlett-Packard_bootbank_char-hpilo_550.9.0.2.3-1OEM.550.0.0.1198610, Hewlett-Packard_bootbank_hp-ams_550.10.3.0-15.1198610, Hewlett-Packard_bootbank_hp-build_550.9.4.26-1198610, Hewlett-Packard_bootbank_hp-conrep_5.5.0.1-0.0.8.1198610, Hewlett-Packard_bootbank_hp-esxi-fc-enablement_550.2.4.6-1198610, Hewlett-Packard_bootbank_hp-smx-provider_550.03.09.00.15-1198610, Hewlett-Packard_bootbank_hpbootcfg_5.5.0.02-01.00.5.1198610, Hewlett-Packard_bootbank_hpnmi_550.2.3.5-1198610, Hewlett-Packard_bootbank_hponcfg_5.5.0.4.4-0.3.1198610, Hewlett-Packard_bootbank_hpssacli_2.30.6.0-5.5.0.1198611, Hewlett-Packard_bootbank_hptestevent_5.5.0.01-00.01.4.1198610, Hewlett-Packard_bootbank_scsi-hpdsa_5.5.0.46-1OEM.550.0.0.1331820, Hewlett-Packard_bootbank_scsi-hpsa_5.5.0.114-1OEM.550.0.0.1331820, Intel_bootbank_intelcim-provider_0.5-1.4, Intel_bootbank_net-i40e_1.2.48-1OEM.550.0.0.1331820, Intel_bootbank_net-igb_5.2.10-1OEM.550.0.0.1331820, Intel_bootbank_net-ixgbe_3.21.4.3-1OEM.550.0.0.1331820, QLogic_bootbank_misc-cnic-register_1.712.50.v55.1-1OEM.550.0.0.1331820, QLogic_bootbank_net-bnx2_2.2.5j.v55.3-1OEM.550.0.0.1331820, QLogic_bootbank_net-bnx2x_2.712.50.v55.6-1OEM.550.0.0.1331820, QLogic_bootbank_net-cnic_2.712.50.v55.6-1OEM.550.0.0.1331820, QLogic_bootbank_net-nx-nic_5.5.643-1OEM.550.0.0.1331820, QLogic_bootbank_net-qlcnic_5.5.190-1OEM.550.0.0.1331820, QLogic_bootbank_qlnativefc_1.1.55.0-1OEM.550.0.0.1331820, QLogic_bootbank_scsi-bnx2fc_1.712.50.v55.7-1OEM.550.0.0.1331820, QLogic_bootbank_scsi-bnx2i_2.712.50.v55.4-1OEM.550.0.0.1331820, VMware_bootbank_ata-pata-amd_0.3.10-3vmw.550.0.0.1331820, VMware_bootbank_ata-pata-atiixp_0.4.6-4vmw.550.0.0.1331820, VMware_bootbank_ata-pata-cmd64x_0.2.5-3vmw.550.0.0.1331820, VMware_bootbank_ata-pata-hpt3x2n_0.3.4-3vmw.550.0.0.1331820, VMware_bootbank_ata-pata-pdc2027x_1.0-3vmw.550.0.0.1331820, VMware_bootbank_ata-pata-serverworks_0.4.3-3vmw.550.0.0.1331820, VMware_bootbank_ata-pata-sil680_0.4.8-3vmw.550.0.0.1331820, VMware_bootbank_ata-pata-via_0.3.3-2vmw.550.0.0.1331820, VMware_bootbank_block-cciss_3.6.14-10vmw.550.0.0.1331820, VMware_bootbank_ehci-ehci-hcd_1.0-3vmw.550.0.0.1331820, VMware_bootbank_esx-base_5.5.0-3.71.3116895, VMware_bootbank_esx-dvfilter-generic-fastpath_5.5.0-0.0.1331820, VMware_bootbank_esx-tboot_5.5.0-2.33.2068190, VMware_bootbank_esx-ui_0.0.2-0.1.3357452, VMware_bootbank_esx-xlibs_5.5.0-0.0.1331820, VMware_bootbank_esx-xserver_5.5.0-0.0.1331820, VMware_bootbank_ipmi-ipmi-devintf_39.1-4vmw.550.0.0.1331820, VMware_bootbank_ipmi-ipmi-msghandler_39.1-4vmw.550.0.0.1331820, VMware_bootbank_ipmi-ipmi-si-drv_39.1-4vmw.550.0.0.1331820, VMware_bootbank_lsi-mr3_0.255.03.01-2vmw.550.3.68.3029944, VMware_bootbank_lsi-msgpt3_00.255.03.03-1vmw.550.1.15.1623387, VMware_bootbank_misc-drivers_5.5.0-3.68.3029944, VMware_bootbank_mtip32xx-native_3.3.4-1vmw.550.1.15.1623387, VMware_bootbank_net-be2net_4.6.100.0v-1vmw.550.0.0.1331820, VMware_bootbank_net-e1000_8.0.3.1-3vmw.550.0.0.1331820, VMware_bootbank_net-enic_1.4.2.15a-1vmw.550.0.0.1331820, VMware_bootbank_net-forcedeth_0.61-2vmw.550.0.0.1331820, VMware_bootbank_net-vmxnet3_1.1.3.0-3vmw.550.2.39.2143827, 
VMware_bootbank_ohci-usb-ohci_1.0-3vmw.550.0.0.1331820, VMware_bootbank_rste_2.0.2.0088-4vmw.550.1.15.1623387, VMware_bootbank_sata-ahci_3.0-22vmw.550.3.68.3029944, VMware_bootbank_sata-ata-piix_2.12-10vmw.550.2.33.2068190, VMware_bootbank_sata-sata-nv_3.5-4vmw.550.0.0.1331820, VMware_bootbank_sata-sata-promise_2.12-3vmw.550.0.0.1331820, VMware_bootbank_sata-sata-sil24_1.1-1vmw.550.0.0.1331820, VMware_bootbank_sata-sata-sil_2.3-4vmw.550.0.0.1331820, VMware_bootbank_sata-sata-svw_2.3-3vmw.550.0.0.1331820, VMware_bootbank_scsi-aacraid_1.1.5.1-9vmw.550.0.0.1331820, VMware_bootbank_scsi-adp94xx_1.0.8.12-6vmw.550.0.0.1331820, VMware_bootbank_scsi-aic79xx_3.1-5vmw.550.0.0.1331820, VMware_bootbank_scsi-fnic_1.5.0.4-1vmw.550.0.0.1331820, VMware_bootbank_scsi-ips_7.12.05-4vmw.550.0.0.1331820, VMware_bootbank_scsi-megaraid-mbox_2.20.5.1-6vmw.550.0.0.1331820, VMware_bootbank_scsi-megaraid-sas_5.34-9vmw.550.3.68.3029944, VMware_bootbank_scsi-megaraid2_2.00.4-9vmw.550.0.0.1331820, VMware_bootbank_scsi-mptsas_4.23.01.00-9vmw.550.3.68.3029944, VMware_bootbank_scsi-mptspi_4.23.01.00-9vmw.550.3.68.3029944, VMware_bootbank_uhci-usb-uhci_1.0-3vmw.550.0.0.1331820, VMware_bootbank_xhci-xhci_1.0-2vmw.550.3.68.3029944, VMware_locker_tools-light_5.5.0-3.68.3029944
   VIBs Skipped: Avago_bootbank_scsi-mpt2sas_15.10.06.00-1OEM.550.0.0.1331820, Hewlett-Packard_bootbank_scsi-hpvsa_5.5.0.100-1OEM.550.0.0.1331820, QLogic_bootbank_scsi-bfa_3.2.5.0-1OEM.550.0.0.1331820, QLogic_bootbank_scsi-qla4xxx_644.6.05.0-1OEM.600.0.0.2494585, VMware_bootbank_ima-qla4xxx_2.02.18-1vmw.600.0.0.2494585, VMware_bootbank_net-e1000e_3.2.2.1-1vmw.600.1.26.3380124, VMware_bootbank_net-mlx4-core_1.9.7.0-1vmw.600.0.0.2494585, VMware_bootbank_net-mlx4-en_1.9.7.0-1vmw.600.0.0.2494585

That is a long report of what was done, but the update succeeded. You need to reboot the ESXi box to make the changes effective.

NB: You may need to use a boot option like "noIOMMU" during the reboot process.

To enable the noIOMMU option via SSH, try this command:
> esxcli system settings kernel set --setting=noIOMMU -v TRUE
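
To verify the value afterwards (a hedged sketch using the same esxcli namespace):
> esxcli system settings kernel list -o noIOMMU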










Tuesday, July 5, 2016

Hanging on "Initialising IOV" message after ESXi 5.5 Upgrade

In my experience with the ESXi infrastructure, upgrades are never easy to do in one go.

Simply add boot options at the bootup screen by pressing Shift + O, then enter the additional parameter "noIOMMU" at the end of the boot string. Press ENTER to proceed with the bootup procedure:

.
[Boot options...] noIOMMU
.


Once the ESXi box has started successfully, you may want to make the setting persistent by SSHing into the box and issuing the following command:

~ #
~ #
~ # esxcli system settings kernel set --setting=noIOMMU -v TRUE
~ #
~ #

This makes the noIOMMU option survive subsequent reboots ;-)



Access ATAPI DVD writer on VMware ESXi 4.x Windows Guest VM

To access the physical DVD writer from a Windows guest VM, there is no alternative but to set it up as a SCSI device.

IDE passthrough on the CD/DVD-ROM device does not seem promising; I have tried it on several different ESXi 4.x boxes without success.

Finally, I found a way to get through this. You need the vSphere interface open on a remote machine for monitoring and control, plus a remote desktop session connected to the target Windows guest VM for checking read/write access on the physical DVD drive.

Using the remote VMware vSphere interface, it is easy to add a new SCSI device for the physical DVD drive. Before adding the new SCSI device, you may want to shut down the Windows guest VM first.

Once SCSI DVD device is added to Guest VM's profile, you may start the VM again to see if the device is detected.

Once you can log in to the Windows guest VM, try clicking the CD/DVD-ROM device icon in the remote vSphere interface to connect it to the host's physical device. You should see the device's raw ID in the selection list on the vSphere interface. Once connected, the physical SCSI DVD drive should be ready for read and write operations on your Windows guest VM.