Wednesday, August 24, 2022

Create sparsebundle image manually to speed up Time Machine Backup process

I just noticed that Time Machine backups were getting slower, so I revisited what can be done with the sparse image format. Here is a manual method to create a sparsebundle image that performs fast enough for Time Machine backups on a network shared drive.

Here's the syntax:

$ hdiutil create -size 1024GB -type SPARSEBUNDLE -encryption AES-128 -nospotlight -fs "Journaled HFS+" -imagekey sparse-band-size=262144 -verbose -volname "Backup_X" Backup_X.sparsebundle

Note:
-imagekey sparse-band-size=size specifies the number of 512-byte sectors that are added each time the image grows. Valid values for SPARSEBUNDLE range from 2048 to 16777216 sectors (1 MB to 8 GB).
sparse-band-size sets the size of the 'band' files that make up the sparse bundle (it isn't stored as one single file). By default the bands are 8 MB each, which performs quite poorly over the network. The value used above gives 128 MB bands (262144 x 512-byte sectors = 128 MB), which is a good size for a hard disk backup.
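To double-check the band size hdiutil actually used, you can read it back from the bundle's Info.plist; the band-size key (stored in bytes) is what I see on my own bundles, so treat the key name as an assumption:

$ /usr/libexec/PlistBuddy -c "Print :band-size" Backup_X.sparsebundle/Info.plist
# for the command above this should print 134217728 (262144 x 512 bytes)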

Ref:
https://github.com/0xdevalias/devalias.net/issues/89
https://mike.peay.us/blog/archives/248
https://eclecticlight.co/2020/06/18/selecting-sizes-for-sparse-bundles/
https://ss64.com/osx/hdiutil.html

Friday, August 19, 2022

Extend the storage of WD MyCloud with external USB hard drive for Mac Users

The WD MyCloud home backup solution provides a simple way to extend its storage by attaching an additional external hard drive through its USB port.

First, take a look at which USB hard drive partition formats are supported by WD MyCloud:

My Cloud: External USB Drive Supported File Systems

The WD support table lists which USB drive file systems (NTFS, HFS+, FAT32, exFAT) each My Cloud model supports. The models covered are: My Cloud, My Cloud Mirror, My Cloud Mirror Gen2, My Cloud EX2, My Cloud EX4, My Cloud EX2 Ultra, My Cloud EX2100, My Cloud EX4100, My Cloud DL2100, My Cloud DL4100, My Cloud PR2100 and My Cloud PR4100. exFAT (marked ✔ * in the table) is only listed for the EX2 Ultra, EX2100, EX4100, DL2100, DL4100, PR2100 and PR4100.

Note:
  • exFAT support requires firmware version 2.21.111 and higher
  • exFAT with EFI System Partitions is not supported and will result in an "Unable to mount USB device" message

The best Partition Format of all?

Choosing a partition format for an external USB hard drive can be confusing for home users. Whether you use Windows or macOS, the filesystem format affects daily backups and maintenance in case of drive failure. WD external USB hard drives mostly ship with an NTFS partition by default. Windows users may not be aware of this difference until they switch to a Mac at home.

macOS doesn't come with stable (free, that is) READ/WRITE support for NTFS partitions. So the best bet is to reformat the external drive to HFS+ for better support. If you mainly work on macOS, HFS+ is your best choice, considering the easier maintenance and recovery using an Apple computer.

People may ask whether FAT32 or exFAT would be more appropriate for cross-platform use between Windows and Mac computers. FAT32 limits each individual file to less than 4 GB; exFAT has no such limit. However, exFAT is not a journaled filesystem, so the chance of data loss after a power outage is higher than with journaled filesystems like NTFS and HFS+. Remember we are talking about a hard drive running 24/7.

For reliability, that leaves NTFS and HFS+. Luckily, nearly all WD MyCloud products support both of these journaled filesystems. For Mac users, HFS+ is definitely a good choice for extending NAS storage: if the MyCloud server itself fails, the chance of recovering data from an HFS+ partition on a Mac is higher than from an NTFS partition.

HFS+ is so far the most reliable filesystem with Time Machine support across macOS versions. The newer APFS has only been an officially supported Time Machine destination since Big Sur, and older MyCloud products don't support APFS at all. For backward compatibility, HFS+ is still the best choice for the storage media.

What about Multiple Time Machine backups on WD MyCloud?

WD MyCloud has built-in support for a Time Machine shared folder that can accommodate more than one backup at a time. So if you have more than one computer to back up at home, you can opt for the built-in Time Machine feature on the WD MyCloud server. The only drawback is that you have no control over the size limit of each backup: WD MyCloud uses one single folder for all Time Machine backups, and when space runs out, whichever backup runs next may start deleting its older snapshots.

If you want more control over Time Machine backups, you can manually create a backup image for each machine. This way you can set a size limit on each image. The image type should be *.sparsebundle, which is what Time Machine commonly uses.
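As a side note, if you later need to change the cap on an existing image, hdiutil can resize a sparse bundle in place while it is unmounted; the size and path below are only examples:

$ hdiutil resize -size 750g /Volumes/Backup_Share/Backup_X.sparsebundle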

Once the HFS+-formatted external drive is connected to the WD MyCloud, it will be picked up as a USB share and appear in Finder as a new drive. You can also protect it with password-secured access in the WD MyCloud web interface. This HFS+ format is only the storage level; if you are considering APFS or another filesystem for the backup itself, keep reading. We are going to create the actual backup image in its preferred filesystem next.

Manually create *.sparsebundle on network shared drive


For better control over the backup size limit, create the *.sparsebundle image with the Disk Utility tool. Click [File] > [New Image] > [Blank Image...] and then select a location on the network shared drive to store your backup image.

For the backup format you will have many more options, e.g., Mac OS Extended (Journaled), APFS or even exFAT. This is the filesystem for the actual Time Machine backup.

For Image Format, you can simply pick [sparse bundle disk image]. 

For the Partitions option, select [Single partition - GUID Partition Map] for compatibility with Time Machine requirements.

Finally, set the Size to your desired value. The backup image size must be smaller than the total space available on the network shared drive.

For extra security, you can enable 128-bit or 256-bit AES encryption and set a password.

Once the *.sparsebundle image is created successfully, a new drive linked to this image will appear in Finder.
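If the image does not mount on its own later on, you can attach it manually; the share path below is only an example, so substitute your own:

$ hdiutil attach /Volumes/Backup_Share/Backup_X.sparsebundle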

Tell Time Machine where to find the new backup location

You can set the destination with the Time Machine command-line tool like this:


$ sudo tmutil setdestination /Volumes/YOUR_TIMEMACHINE_BACKUP_DRIVE
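To confirm Time Machine has picked up the new destination, list the configured destinations:

$ tmutil destinationinfo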

Make network shared volume automatically mounted

To make the mounted volume survive the next reboot:

  1. Go to System Preferences
  2. Click Users and Groups
  3. Select your user and then click Login Items
  4. Click the + button and then choose the blank image (.sparsebundle) created earlier
  5. Repeat step 4 and choose the network shared volume so it is mounted automatically as well.



Ref.:

Managing multiple Time Machine backups this way is different from the built-in TimeMachineBackup shared folder setup on WD My Cloud.

Creating the sparsebundle image manually on the NAS share gives flexibility and control over the image size of each backup.


Configure time machine backup on Samba drive:

https://manjaro.site/how-to-configure-time-machine-to-backup-to-samba-shared-folder/


Repair sparsebundle on NAS backup:

https://www.garth.org/archives/2011,08,27,169,fix-time-machine-sparsebundle-nas-based-backup-errors.html

https://expobrain.net/2016/12/10/fix-corrupted-time-machine-spase-bundles/

https://macmanus.nl/2014/01/31/fixed-use-terminal-to-repair-corrupt-sparsebundle-file/


Limit the size of time machine sparse bundles:

https://community.wd.com/t/how-can-i-limit-the-size-of-the-time-machine-sparse-bundles-on-the-mbl/55021/6


Time machine backups on external USB drive attached to WD My Cloud:

https://support-en.wd.com/app/answers/detailweb/a_id/19225

https://community.wd.com/t/how-to-expand-disk-size-for-time-machine-backups/230284/2

https://community.wd.com/t/usb-port-for-time-machine-backup/266335





Friday, June 16, 2017

Brew afsctool on your own for Mac OS Sierra

Recently I came across an error message while compressing documents with HFS+ compression using the open-source tool afsctool. The Homebrew formula is still at version 1.6.4.

The same message came up every time: "Unable to compress file."

Someone on GitHub has figured out what's happening and has a solution: the filesystem on Sierra returns a different file type value than its predecessor, which renders afsctool useless and makes it return the error message every time. But the suggested change has never been merged into the original afsctool source repository.

After running out of patience without this tool, I made up my mind: it's time to brew the tool ourselves until the author patches the source code.

Here's the recipe:

Before your own compiling work, you might want to uninstall the outdated brew formula for afsctool:

$
$ brew uninstall afsctool

Make sure gcc is installed properly on your Mac.
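A quick way to confirm you have a working compiler (on recent macOS the gcc command is actually Clang, which works fine here):

$ gcc --version
$ xcode-select --install    # only needed if the command-line tools are missing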

Take a look at the Homebrew formula for afsctool via http://brewformulas.org/Afsctool

Take a look at the source file from there: https://github.com/Homebrew/homebrew-core/tree/master/Formula/afsctool.rb

We found a URL from there: https://docs.google.com/uc?export=download&id=0BwQlnXqL939ZQjBQNEhRQUo0aUk

Also, take a look at the arguments passed in the formula's install method. The ones handed to the compiler will be useful for compiling by hand:
def install
  cd "afsctool_34" do
    system ENV.cc, ENV.cflags, "-lz",
           "-framework", "CoreServices", "-o", "afsctool", "afsctool.c"

    bin.install "afsctool"
  end
end

Grab the source code ZIP of afsctool from the above URL.

Extract the ZIP file to a temporary location.

Find afsctool.c and amend the file according to this Github post.

Try compiling afsctool.c with GCC using the aforementioned parameters from the brew formula:

$
$ gcc -lz -framework CoreServices -o afsctool afsctool.c

Copy the compiled afsctool binary to the common bin folder for easy access:

$
$ cp ./afsctool /usr/local/bin/afsctool

Well, it's time to try our newly compiled tool:

$ afsctool -cv ./afsctool.c
/afsctool_34/afsctool.c:
File content type: public.c-source
File size (uncompressed data fork; reported size by Mac OS 10.6+ Finder): 79339 bytes / 79 KB (kilobytes) / 77 KiB (kibibytes)
File size (compressed data fork - decmpfs xattr; reported size by Mac OS 10.0-10.5 Finder): 13396 bytes / 16 KB (kilobytes) / 16 KiB (kibibytes)
File size (compressed data fork): 13412 bytes / 16 KB (kilobytes) / 16 KiB (kibibytes)
Compression savings: 83.1%
Number of extended attributes: 4
Total size of extended attribute data: 50 bytes
Approximate overhead of extended attributes: 1608 bytes
Approximate total file size (compressed data fork + EA + EA overhead + file overhead): 18306 bytes / 18 KB (kilobytes) / 18 KiB (kibibytes)

Finally, the compression works again.

By the way, keep an eye out for an updated brew formula that addresses this problem on Sierra.



Monday, June 5, 2017

Windows 2012 ports exhausted under heavy load

In a test-case scenario I encountered port depletion on a Windows 2012 server: its available ports were used up quickly by frequent database queries and HTTP requests to the web server. It happened within a short period of time and nothing eased the situation except a system reboot. Too many ports were opened but not closed properly, which appeared to be mainly due to incoming packets arriving out of order.

Here's the extract of resolution:

On Windows platforms, the default timeout is 120 seconds, and the maximum number of ports is approximately 4,000, resulting in a maximum rate of 33 connections per second. If your index has four partitions, each search requires four ports, which provides a maximum query rate of 8.3 queries per second.

(maximum ports/timeout period)/number of partitions = maximum query rate

If this rate is exceeded, you may see failures as the supply of TCP/IP ports is exhausted. Symptoms include drops in throughput and errors indicating failed network connections. You can diagnose this problem by observing the system while it is under load, using the netstat utility provided on most operating systems.

To avoid port exhaustion and support high connection rates, reduce the TIME_WAIT value and increase the port range.

This problem does not usually appear on UNIX systems due to the higher default connection rate in those operating systems.
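As the extract suggests, netstat is enough to confirm the symptom. On Windows, for example, you can count the connections stuck in TIME_WAIT (just a quick diagnostic, not part of the original resolution):

> netstat -ano -p tcp | find /c "TIME_WAIT"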

To set TcpTimedWaitDelay (TIME_WAIT):

    Use the regedit command to access the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters registry subkey.


  1.     Create a new REG_DWORD value named TcpTimedWaitDelay.
  2.     Set the value to 60.
  3.     Stop and restart the system.


To set MaxUserPort (ephemeral port range):

    Use the regedit command to access the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters registry subkey.


  1.     Create a new REG_DWORD value named MaxUserPort.
  2.     Set this value to 32768.
  3.     Stop and restart the system.


Furthermore, you may have to set another parameter, StrictTimeWaitSeqCheck, for TcpTimedWaitDelay to take effect:

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"StrictTimeWaitSeqCheck"=dword:00000001

Setting or changing these will require a reboot for the changes to be in effect.
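For reference, here is a sketch of all three changes as reg.exe commands from an elevated command prompt (a reboot is still needed afterwards); verify the values suit your environment before applying them:

> reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpTimedWaitDelay /t REG_DWORD /d 60 /f
> reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v MaxUserPort /t REG_DWORD /d 32768 /f
> reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v StrictTimeWaitSeqCheck /t REG_DWORD /d 1 /f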



Monday, April 24, 2017

MySQL CLI mode localhost ignoring --PORT option

I have run into issues like this in the past, either in Java or PHP, but never dug into the details of how the MySQL client programs treat their command-line options for localhost connections.

Basically, localhost points to the local machine and implicitly means a local IP like 127.0.0.1.

For a standard MySQL instance the port defaults to 3306, and connecting to MySQL on a different port is normally just a matter of changing the port number. However, the behaviour is a little different for localhost connections.

MariaDB, as a MySQL variant, shares the same behaviour.

When connecting to a MySQL instance on localhost, all of the following will connect successfully:
> mysql -uuser -p -hlocalhost -P3306
> mysql -uuser -p -hlocalhost -P3307
> mysql -uuser -p -hlocalhost -P80
> mysql -uuser -p -hlocalhost -P443

They will all land you on the same local instance through the Unix socket, effectively ignoring the -P option. I thought of this as a bug, yet an explanation has been on the official site for years:

On Unix, MySQL programs treat the host name localhost specially, in a way that is likely different from what you expect compared to other network-based programs. For connections to localhost, MySQL programs attempt to connect to the local server by using a Unix socket file. This occurs even if a --port or -P option is given to specify a port number. To ensure that the client makes a TCP/IP connection to the local server, use --host or -h to specify a host name value of 127.0.0.1, or the IP address or name of the local server. You can also specify the connection protocol explicitly, even for localhost, by using the --protocol=TCP option.
MySQL comes from Unix development roots, so this behaviour carries over even when it is compiled for a Windows release.

To reach a particular port on the local machine, we must specify the actual IP address (or force the protocol). This ensures the MySQL connection goes over TCP rather than through the socket file.

In other words,
> mysql -uuser -p -hlocalhost -P3306 --protocol=TCP
> mysql -uuser -p -hlocalhost -P3307 --protocol=TCP
> mysql -uuser -p -hlocalhost -P80 --protocol=TCP
> mysql -uuser -p -hlocalhost -P443 --protocol=TCP
> mysql -uuser -p -h127.0.0.1 -P3306
> mysql -uuser -p -h127.0.0.1 -P3307
> mysql -uuser -p -h127.0.0.1 -P80
> mysql -uuser -p -h127.0.0.1 -P443

This will make sure we connect to Localhost on the port number as specified.
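A quick way to see which transport you actually got is the client's status command; its Connection line reports either the socket file or TCP/IP, so you should see something like:

> mysql -uuser -p -h127.0.0.1 -P3306 -e "status"
Connection:             127.0.0.1 via TCP/IP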

Ref: https://dev.mysql.com/doc/refman/5.7/en/connecting.html





Wednesday, March 15, 2017

Fix Acer Aspire ONE Ubuntu "Freezing" Issue (Ubuntu 14.04, 16.04)

Previously I looked at the official Ubuntu help page for the Acer Aspire series, but its advice didn't really work. It tells you to set the following:

intel_idle.max_cstate=3

Instead, I added the following option to the boot parameters, which solved the system freezing whenever the power adapter is connected:

intel_idle.max_cstate=1

Just add it to the GRUB_CMDLINE_LINUX_DEFAULT string in /etc/default/grub, then regenerate the GRUB configuration and reboot:

$ sudo nano /etc/default/grub    # or your editor of choice; append intel_idle.max_cstate=1 to GRUB_CMDLINE_LINUX_DEFAULT
$ sudo update-grub
$ sudo reboot

max_cstate=1 keeps the CPU out of the deeper C-states so it never really falls asleep. It's a power-hungry setting as far as battery life goes, but it fixed the random freeze issue.
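After rebooting you can confirm the parameter took effect; both paths below are what I'd expect on an Ubuntu kernel using intel_idle, so treat them as assumptions for your particular build:

$ grep -o 'intel_idle.max_cstate=[0-9]*' /proc/cmdline
$ cat /sys/module/intel_idle/parameters/max_cstate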

For more advanced C-state options that may suit different CPU categories, please check:
http://www.hardwaresecrets.com/everything-you-need-to-know-about-the-cpu-c-states-power-saving-modes/







Thursday, February 2, 2017

Quick and dirty reboot of remote Linux server

Rebooting a remote Linux server, especially in the cloud, can be annoying at times. After issuing a command like:

$
$ reboot now

The terminal session will definitely be disconnected, but sometimes the server hangs in the middle of the shutdown process and never comes back up.

We could hard-reset the machine if we were sitting in front of it: holding down [Alt] and [SysRq] (which is the Print Screen key) while slowly typing the keys R, E, I, S, U, B will get you safely restarted. The keys mean the following:

  •     R: Switch the keyboard from raw mode to XLATE mode
  •     E: Send the SIGTERM signal to all processes except init
  •     I: Send the SIGKILL signal to all processes except init
  •     S: Sync all mounted filesystems
  •     U: Remount all mounted filesystems in read-only mode
  •     B: Immediately reboot the system, without unmounting partitions or syncing


For a cloud server, we need some sort of terminal command to force the server into the reboot process no matter what is currently running or hanging. At least we can diagnose the problem after it boots up again.

$
$ echo 1 > /proc/sys/kernel/sysrq
$ echo b > /proc/sysrq-trigger

The first command enables the kernel's SysRq interface; the second writes 'b' to the trigger file, which is equivalent to pressing [Alt]+[SysRq]+[B].

So what does [b] do? It immediately reboots the system, without unmounting partitions or syncing.

So be prepared: you may suffer data loss afterwards. Still, it's really useful when you need to make sure the server reboots immediately.

* A suggestion: manually stop major services and processes before you issue these commands.
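If the shell is still responsive, a slightly gentler variant is to sync and remount read-only before rebooting, mirroring the S, U, B portion of the REISUB sequence above (run as root):

$ echo 1 > /proc/sys/kernel/sysrq
$ echo s > /proc/sysrq-trigger    # sync mounted filesystems
$ echo u > /proc/sysrq-trigger    # remount read-only
$ echo b > /proc/sysrq-trigger    # reboot immediately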






Tuesday, December 20, 2016

Solution to Homebrew upgrade issue after macOS Sierra 10.12.x upgrade

For anyone who has problems upgrading the Homebrew repository after upgrading to macOS Sierra 10.12:

$
$ cd "$(brew --repo)" && git fetch && git reset --hard origin/master && brew update
$ brew update
$ brew upgrade
$ brew cleanup
$ brew cask install --force $(brew cask list)
$ brew cask cleanup
$

If you changed permissions on the /usr/local folder while installing Homebrew on El Capitan 10.11, you may want to run the following command to revert that change, since newer Homebrew no longer needs special permissions on this folder:

$
$ sudo chown root:wheel /usr/local
$
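Afterwards, a quick sanity check never hurts:

$ brew doctor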


Saturday, December 17, 2016

Quickfix: Boot up screen stuck with spin freezes after Mac OS Sierra upgrade



After a major upgrade to macOS Sierra, my MacBook wouldn't boot anymore. It froze on the grey screen with a stalled spinning wheel.

After checking for non-Apple kext modules in Safe Mode and fixing disk errors with Disk Utility in Recovery Mode, I still had almost no clue what was happening to the hard drive.

Here are 7 steps that can be useful for anyone who cannot resolve the boot-up problem after upgrading to Sierra.


  1. Shut down your Mac.
  2. Press the power button to start up your Mac.
  3. Immediately hold down Command-S for single-user mode.
  4. At the terminal prompt, type fsck -fy and press Return
  5. Type mount -uw / and press Return
  6. Type touch /private/var/db/.AppleSetupDone and press Return
  7. Type exit and press Return

Through steps 1 to 3, you should have launched your Mac into single-user mode.

In step 3, the screen shows the raw boot messages, which make clear what is running behind the scenes.

Steps 4 to 7 check the filesystem for consistency, remount the boot volume read-write and mark setup as complete; the commands are consolidated below.
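For reference, here are steps 4 to 7 as they look at the single-user prompt (it is already a root shell, so no sudo is needed):

$ fsck -fy
$ mount -uw /
$ touch /private/var/db/.AppleSetupDone
$ exit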
Regarding step 6, what's the point of creating the empty file .AppleSetupDone?
Every time OS X boots, it checks for the existence of a file known as .AppleSetupDone. This empty file is created after the completion of Setup Assistant. It doesn't exist on a brand-new, out-of-the-box Mac, nor on one that has had a clean installation of OS X.
By removing this file, OS X will assume that Setup Assistant has never been run and will launch it as soon as OS X boots.
Setup Assistant is also run with root privileges, which is why it can create a new user account with administrator privileges without the need for any authorisation.

Since I had already gone through the Setup Assistant wizard before the boot problem appeared, touching the file simply re-asserts that setup is complete, and I never saw the Setup Assistant again.

After step 7, my MacBook booted into the login screen I usually see. I quickly logged in, then restarted to check whether things were working again. It booted up quickly to the login screen and let me log in as usual.

As of writing, I can boot back into my Mac OS Sierra v10.12.2.


Monday, December 12, 2016

SDLC approach: SCRUM & Agile

You may have come across the SDLC through the terms "Waterfall" and "Agile".

People talking about Agile or Agile-like development might also talk about Scrum. Here is a short digest from the material I came across:

For the term "Agile" as people normally say:

The Agile Movement (www.agilemethodology.org) notes that Agile is not in itself a methodology, but rather an alternative to traditional project management to try and help teams respond to unpredictability through incremental, iterative work cadences and empirical feedback. Agile methods are therefore alternatives to waterfall, or traditional sequential development. 

So, how about Scrum?

Scrum is described as the most popular way of using Agile and Agile-like methods and has based its assumptions around that approach. The use of the term Agile and/or Agile-like is intended to imply that Customer may be more interested in the principles of Agile as tailored to the project, rather than in strict adherence to any particular form of Agile.
Scrum is an iterative and incremental agile software development framework for managing product development.


In simple words, Agile is a mindset and Scrum is a method.

To implement Scrum fully and make software development truly agile, companies might need to adopt changes in management style, organisational culture, running processes and the way projects are executed.