Monday, 7 November 2016

Ansible in a Virtualenv

Tested on Arch Linux. First install virtualenv itself :

$ pip2 install virtualenv

Create the virtualenv to host ansible :

$ virtualenv --system-site-packages ansible_env

Activate the environment

$ source ansible_env/bin/activate

Install a specific version

$ pip2 install ansible==2.1.3.0
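
While the environment is active, the virtualenv's ansible binaries are first on the PATH, so the install can be verified with, for example :

$ which ansible
$ ansible --version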

When complete, deactivate the environment :

$ deactivate

Sunday, 29 May 2016

Midnight Commander Configuration

By default, when you exit Midnight Commander, bash returns to the directory Midnight Commander was started from rather than the last directory you had open. Midnight Commander ships a wrapper script which changes this so the shell ends up in the last open directory. Enable it by adding
source /usr/lib/mc/mc.sh
to your
~/.bashrc
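To append it and reload the shell in one go (assuming the wrapper's default Arch Linux path) :
$ echo 'source /usr/lib/mc/mc.sh' >> ~/.bashrc
$ source ~/.bashrc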
Useful commands are :
Copy (to second pane) : F5
Move (to second pane) : F6
Exit Midnight Commander : F10
Switch to the other pane : Tab
User menu (compress, uncompress) : F2

Midnight Commander Key Bindings

To change the default key bindings for Midnight Commander on Arch Linux, start from the man page :
  Redefine hotkey bindings

Hotkey bindings may be read from an external file (keymap-file). Initially, Midnight Commander creates key bindings using the keymap defined in the source code. Then the two files /usr/share/mc/mc.keymap and /etc/mc/mc.keymap are always loaded, sequentially reassigning key bindings defined earlier. A user-defined keymap-file is searched for using the following algorithm (stopping at the first one found):

              1) command line option -K <keymap> or --keymap=<keymap>
              2) Environment variable MC_KEYMAP
              3) Parameter keymap in section [Midnight-Commander] of config file.
              4) File ~/.config/mc/mc.keymap
Thus we can just copy the default keymap :
$ cp /etc/mc/mc.default.keymap ~/.config/mc/mc.keymap 
Then edit the keymap file as required. It's useful to keep the finished file in a version control repository such as git.
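For example, to track it with git (a minimal sketch; any repository layout works) :
$ cd ~/.config/mc
$ git init
$ git add mc.keymap
$ git commit -m "Track Midnight Commander keymap"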

Monday, 2 May 2016

Ansible Role variables that differ between OSes

When using Ansible roles you may wish to set the package and service name per OS. Within a role we can load them in the tasks from the appropriate vars file.

tasks/main.yml

- name: Obtain OS Specific Variables
  include_vars: "{{ item }}"
  with_first_found:
    - "../vars/{{ ansible_distribution }}-{{ ansible_distribution_version }}.yml"
    - "../vars/{{ ansible_distribution }}.yml"
    - "../vars/{{ ansible_os_family }}.yml"
    - "../vars/defaults.yml"
Then you can set values within vars/ as appropriate, e.g. for Apache

vars/defaults.yml

apache_package_name: apache2
apache_service_name: apache2

vars/RedHat.yml

apache_package_name: httpd
apache_service_name: httpd

vars/Debian.yml

apache_package_name: apache2
apache_service_name: apache2
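
The rest of the role can then use these variables unchanged on every OS. A minimal sketch of tasks that might consume them (using the generic package and service modules, available from Ansible 2.0 onwards) :

- name: Install Apache
  package:
    name: "{{ apache_package_name }}"
    state: present

- name: Ensure Apache is running and enabled
  service:
    name: "{{ apache_service_name }}"
    state: started
    enabled: yes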

Monday, 25 April 2016

Using /proc on Linux

/proc on Linux is a pseudo file system that provides an interface to kernel data structures. It is very useful for troubleshooting and the following highlights some entries that I have found helpful. The man page with full details can be accessed on Arch Linux with :
# man 5 proc

/proc/cpuinfo

  • A collection of CPU and system architecture dependent items
  • Useful to find the CPU type, number of cores and flags
  • lscpu uses this file to obtain its information
    $ cat /proc/cpuinfo
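  • For example, one way to count the logical CPUs is to grep this file
    $ grep -c ^processor /proc/cpuinfo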
    

/proc/cmdline

  • Arguments passed to the Linux kernel at boot time
  • Useful to see what options were used by e.g. grub on boot
    $ cat /proc/cmdline
    

/proc/loadavg

  • The first three fields in this file are load average figures giving the number of jobs in the run queue (state R) or waiting for disk I/O (state D) averaged over 1, 5, and 15 minutes.
  • Same numbers used by uptime
    $ uptime && cat /proc/loadavg
     20:37:08 up  1:36,  8 users,  load average: 0.51, 2.17, 2.56
     0.51 2.17 2.56 6/681 11570
    
  • The fourth field consists of two numbers separated by a slash (/)
  • The first of these is the number of currently runnable kernel scheduling entities (processes, threads)
  • The value after the slash is the number of kernel scheduling entities that currently exist on the system.
  • The fifth field is the PID of the process that was most recently created on the system.

/proc/meminfo

  • Reports statistics about memory usage on the system
  • Used by free to report the amount of free and used memory (both physical and swap) on the system as well as the shared memory and buffers used by the kernel
     $ cat /proc/meminfo
    

/proc/[pid]/cmdline

  • Everything in Linux is a file and each process has a directory under /proc named after its PID
  • As PID 1 is the init process we can
    $ cat /proc/1/cmdline
    /sbin/init
    
  • This read-only file holds the complete command line for the process

/proc/[pid]/fd/

  • A subdirectory containing one entry for each file which the process has open, named by its file descriptor
  • Each entry is a symbolic link to the actual file
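  • For example, to list the open file descriptors of the current shell ($$ expands to the shell's own PID)
    $ ls -l /proc/$$/fd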

Sunday, 24 April 2016

Best Practices for Docker Container deployments

Container technology like Docker leads to faster testing and deployments of software applications. The following is a list of some best practices to consider when using containers. The key objectives are :
  • Repeatability
  • Reliability
  • Resiliency

To achieve this :

1. Have a single code base, tracked in git with many deploys

  • Docker containers should be immutable
  • Use environment variables to change anything inside the container
  • Do not build separate images for staging and production

2. Explicitly declare and isolate any dependencies

  • Do not use the latest tag in images. Instead pin an exact version, e.g. ubuntu:16.04 rather than ubuntu:latest
  • It's useful to build runtimes (e.g. a Java runtime) as their own images based on specific base images
  • Process : Base OS -> Runtime (e.g. Java runtime) -> Add app

3. Store configuration in the environment

  • Do not have a config.yml or properties.xml
  • Always use environment variables (see the sketch after this list)

4. Treat backing services as attached resources

  • Never use the local disk for persistent data
  • Data stored inside the container will always disappear when it is replaced
  • Connect to network services using connection info from the environment e.g. DB_URL

5. Strictly separate build and run stages

  • Build immutable images and then run them
  • Never install anything at deployment time
  • Respect the life cycle : build, run, destroy

6. Execute the application as one or more stateless processes

  • Schedule long running processes by distributing them across a cluster of physical hardware

7. Export services via port binding

  • Define ports from environment variables
  • A process cannot be guaranteed a particular port inside the container, so read it from the environment rather than hard-coding it

8. Scale out via process model

  • Horizontally scale by adding instances

9. Maximize robustness with fast start-up and graceful shutdown

  • Quickly scale up when there is a load spike
  • For a data-intensive app it's tempting to load data into memory as a hot cache. The issue is that this prevents a fast start-up (as the data must be loaded first). Redis or memcached could be used as an alternative

10. Keep development, staging and production as similar as possible

  • Run the same backing services as containers in development, e.g. DB, caching

11. Treat logs as event streams

  • Log to standard output (stdout) and standard error (stderr)
  • Use something like the ELK stack to collect all the logs
  • You should never need to ssh into a container to check logs
  • There should not be random log files in the container

12. Run admin/management tasks as one off processes

  • Do not build custom containers just for one-off tasks
  • Reuse app images with specific entry points for such tasks (see the sketch after this list)
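
As an illustration of points 3 and 12, configuration is injected through the environment and one-off admin tasks reuse the same immutable image. A minimal sketch; the image name, variables and admin command are hypothetical :

$ docker run -d -e DB_URL=postgres://db.internal:5432/app -e PORT=8080 -p 8080:8080 myapp:1.4.2
$ docker run --rm -e DB_URL=postgres://db.internal:5432/app myapp:1.4.2 ./manage.py migrate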

Sunday, 17 April 2016

systemd-analyze - Analyze system boot-up performance

The systemd project has an excellent tool, systemd-analyze, which allows analysis of system boot performance.

Running it without any parameters displays the total startup time, with figures for the kernel and userspace :
$ systemd-analyze
Startup finished in 1.190s (kernel) + 8.312s (userspace) = 9.503s
The option blame prints a list of all running units, ordered by the time they took to initialize. This may be used to optimize boot-up times. The output may be misleading as the initialization of one service might be slow simply because it waits for the initialization of another service to complete.
For example, the 3 units that took longest to initialise :
$ systemd-analyze blame | head -n 3
          7.621s man-db.service
          3.174s docker.service
          1.134s mysqld.service
The option critical-chain prints a tree of the time-critical chain of units (for each of the specified UNITs or for the default target otherwise).

The time after the unit is active or started is printed after the “@” character. The time the unit takes to start is printed after the “+” character. Note that the output might be misleading as the initialization of one service might depend on socket activation and because of the parallel execution of units.
For example, reviewing the docker.service unit :

$ systemd-analyze critical-chain docker.service
The time after the unit is active or started is printed after the "@" character.
The time the unit takes to start is printed after the "+" character.

docker.service +3.174s
`-network.target @1.287s
  `-NetworkManager.service @830ms +172ms
    `-dbus.service @691ms
      `-basic.target @682ms
        `-sockets.target @682ms
          `-docker.socket @681ms +481us
            `-sysinit.target @681ms
              `-systemd-backlight@backlight:acpi_video0.service @1.321s +41ms
                `-system-systemd\x2dbacklight.slice @1.320s
                  `-system.slice @123ms
                    `--.slice @102ms

The plot option prints an SVG graphic detailing which system services have been started at what time, highlighting the time they spent on initialization :
$ systemd-analyze plot > plot.svg