Sunday, December 30, 2018

PowerShell Automation from C#.NET

You could call PowerShell scripts from a .NET (C#) application with ease using System.Management.Automation. A sample is provided here, which utilizes a runspace pool for more configurability: you can manage the threads on which the PowerShell scripts run from C#, i.e. run them on the calling thread, or on a pool of threads for a higher degree of parallelism.
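A minimal sketch of the idea (the script text and pool sizes here are illustrative, not taken from the linked sample):

```csharp
using System;
using System.Management.Automation;
using System.Management.Automation.Runspaces;

class PowerShellRunner
{
    static void Main()
    {
        // A pool of 1..4 runspaces allows several pipelines to execute in parallel
        using (RunspacePool pool = RunspaceFactory.CreateRunspacePool(1, 4))
        {
            pool.Open();

            using (PowerShell ps = PowerShell.Create())
            {
                // Attach to the pool instead of running on the calling thread's runspace
                ps.RunspacePool = pool;
                ps.AddScript("Get-Process | Select-Object -First 3 Name");

                foreach (PSObject result in ps.Invoke())
                    Console.WriteLine(result);
            }
        }
    }
}
```

For true parallelism, several PowerShell instances can share the same pool and be started with BeginInvoke; the pool's maximum size caps how many run concurrently.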


You could find a highly configurable SignalR ChatRoom application here, which makes use of PowerShell automation to achieve dynamic functionality as needed.



Friday, December 28, 2018

PowerShell: High-Performance Registry Search

Find a high-performance registry search implementation in PowerShell here. Sample searches can be found here. You could search based on:

a. Registry Path Names

b. Registry Value Names

c. Registry Values

A like match is the default; you could specify -searchExact for exact matches. The implementation provides a wrapper over the native ‘REG QUERY’ command-line tool, because native registry search through PowerShell performs poorly: it is pretty slow and runs for hours in certain cases.


Sample: (Search-Registry -tokenToSearch "aero.theme" -tokenType Value -pathsToSearch @("HKLM"))

Searches registry values containing the string token “aero.theme”.
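For context, the native tool the wrapper builds on supports these match types directly. A rough equivalent of the sample above with plain REG QUERY (the flags below belong to REG QUERY itself, not to the wrapper; run in a Windows console):

```bat
:: Search all of HKLM recursively (/s) for the token in value data only (/d)
reg query HKLM /f "aero.theme" /s /d

:: /k limits the match to key names, /v to value names, and /e forces an exact match
reg query HKLM /f "aero.theme" /s /v /e
```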



Sunday, December 16, 2018

Generic Log Parser (Filter By Log Dates–TimeZone, Log Expression) in PowerShell

Find a generic Log/Text Parser written in PowerShell here. This parser allows filtering any kind of logs (Windows built-in IIS, CBS or others), including custom logs from your application server environments. The parser also takes into account the time zone in which the log entries were written. The following are the key parameters by which logs are extracted:

a. Filter Start Log Date Time

b. Filter End Log Date Time

c. Time Zone in Which the Filter Log Dates (Above) Have Been Given

d. Server Log Folders (network shares) from which logs will be parsed

e. An expression, which will be matched against log entries and only matching ones will be picked.

f. A print Time Zone, in which the dates of the selected Log Entries will be written to the output file

g. A Log Control File, which contains the structure of the Log File Formats.

e.g. A sample is given here, which defines the structure for the built-in CBS and IIS Logs.

You could also define your own log file formats from your environments and add them to this Control file. The “LogEntryFormat” property of a Log Control entry should contain three tokens, namely DATE, TIME and ENTRY, with the exact casing. These define the positions in the log entry from which to pick the date, the time, and the rest of the log entry to be parsed.

(Eg of Windows CBS Log: "LogEntryFormat": "(?<DATE>\\d{4,4}-\\d{1,2}-\\d{1,2}) (?<TIME>\\d+:\\d+:\\d+), (?<ENTRY>.*)")
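As a quick sanity check of that pattern, the same structure can be exercised with sed capture groups standing in for the .NET named groups DATE, TIME and ENTRY (the log line below is made up for illustration):

```shell
# Hypothetical CBS-style log line, matching the LogEntryFormat above
line='2018-12-16 10:42:07, Info CBS Starting TrustedInstaller initialization.'

# Capture groups 1/2/3 play the roles of DATE, TIME and ENTRY
printf '%s\n' "$line" | \
  sed -E 's/^([0-9]{4}-[0-9]{1,2}-[0-9]{1,2}) ([0-9]+:[0-9]+:[0-9]+), (.*)$/DATE=\1|TIME=\2|ENTRY=\3/'
```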

A sample parsing can be found here.

To define a new Log Control entry for a custom log in your application environment, insert a similar entry into the Log Control Files and give it a unique name. The same name should be specified for the “logTypeControlKey” parameter. (e.g. logTypeControlKey = "CBS_LOG")



Monday, January 29, 2018

Docker Clusters On Linux Containers (LXD), Provisioned by Docker Machine

It is quite feasible to host Docker clusters on top of Linux containers such as LXD. This is an ideal setup on your local desktop for simulating a production cluster that would predominantly be hosted on virtual machines (instead of Linux containers). Using Linux containers locally has some obvious advantages:

a. They are like lightweight virtual machines (aka containers on steroids / system containers)

b. No overhead of virtual machines / guest OS

c. Incredibly fast to boot, with a smaller footprint in memory and CPU

d. Quick to copy, clone and snapshot, taking up less disk space

e. You can run dozens of them on a desktop, unlike the one or two virtual machines it could support.

These are the main reasons to opt for LXD to simulate a production cluster in a local environment.

The figure below shows how this has been organized in our desktop environment:

[Figure: Docker cluster hosted on LXD containers in the desktop environment]

Setting up such a cluster has been detailed below:

1. Setting up the Desktop (Host Machine)

1.1 Setting up LXD on ZFS

Our host system runs Lubuntu 16.04 LTS, with LXD installed on top of the ZFS file system. The AUFS kernel module has been enabled so that it is available to the LXD containers and the Docker engine. AUFS should be made available so that Docker can use the aufs storage driver (instead of vfs) to manage its layered union filesystem, backed by the underlying ZFS.

With regard to networking, we’re using a prebuilt bridge instead of LXD’s default, so that the LXD containers are accessible from anywhere on our LAN. This prebuilt bridge (vbr0) has already been bridged with the host machine’s network adapter (eth0).

All our LXD containers will be based on the Ubuntu 16.04 LTS (xenial) image. Run the following commands from the host machine console.

sudo modprobe aufs

sudo apt-get install zfsutils-linux

sudo zpool create -f ZFS-LXD-POOL vdb vdc -m none

sudo zfs create -p -o mountpoint=/var/lib/lxd ZFS-LXD-POOL/LXD/var/lib

sudo zfs create -p -o mountpoint=/var/log/lxd ZFS-LXD-POOL/LXD/var/log

sudo zfs create -p -o mountpoint=/usr/lib/lxd ZFS-LXD-POOL/LXD/usr/lib

sudo apt-get install lxd

sudo apt-get install bridge-utils

brctl show vbr0

lxc profile device set default eth0 parent vbr0

lxc profile show default

lxc image list images:

lxc image copy images:ubuntu/xenial/amd64 local: --alias=xenial64

lxc image list

#/etc/network/interfaces

auto vbr0

iface vbr0 inet dhcp

bridge-ifaces eth0

bridge-ports eth0

up ifconfig eth0 up

iface eth0 inet manual

1.2 Setting up Docker-Machine and Docker Client

Provisioning the LXD containers with Docker is done through Docker-Machine, which automatically installs the Docker engine and related tools. Hence we should install Docker-Machine on our host machine. Run the following from the host machine console.

#https://github.com/docker/machine/releases

sudo wget https://github.com/docker/machine/releases/download/v0.13.0/docker-machine-`uname -s`-`uname -m`

sudo mv docker-machine-`uname -s`-`uname -m` /usr/local/bin/docker-machine

sudo chmod +x /usr/local/bin/docker-machine

sudo apt-get install docker.io

NB: Once you install docker.io (though it installs both the Docker engine and the client, we only need the client on the host machine), it will add a DROP rule (for any forwarded packets) to the iptables FORWARD chain to isolate the Docker network. As we don’t intend the host to be a Docker host, we should reverse that rule; otherwise Docker containers inside the LXD containers won’t be able to forward packets through the host machine, and pulling images from the internet won’t work inside the LXD containers.

Long story short, run the command below on the host machine to restore the default policy.

sudo iptables -P FORWARD ACCEPT

2. Preparing LXD Containers for Docker Hosting

Before using the LXD containers, they have to be prepared for Docker use: the containers must be made privileged and supplied with the kernel modules Docker requires.

This involves:

2.1 Creating the container (CNT-1 and CNT-2)

2.2 Attaching Docker required profile configurations to containers

2.3 Creating a user (with root privileges) with passwordless sudo access

2.4 Setting up an SSH server with public key authentication

#Run from host machine console

lxc launch xenial64 CNT-1 -p default -p docker

#Docker required settings in LXD – Very Important!!!

lxc config set CNT-1 security.privileged true

lxc config set CNT-1 security.nesting true

lxc config set CNT-1 linux.kernel_modules "aufs, zfs, overlay, nf_nat, br_netfilter"

lxc restart CNT-1

lxc exec CNT-1 /bin/bash

#Run from LXD (CNT-1) container console

sudo touch /.dockerenv

adduser docusr

usermod -aG sudo docusr

sudo vi /etc/sudoers

#append line:

docusr ALL=(ALL) NOPASSWD: ALL

groupadd -r sshusr

usermod -a -G sshusr docusr

sudo apt-get install openssh-server

sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.original

sudo vi /etc/ssh/sshd_config

#append line:

AllowGroups sshusr

exit

#Run from Host Machine (Desktop) console

ssh-keygen -t rsa

ssh-copy-id docusr@<IP Of CNT-1>

Note: We’ve created an empty .dockerenv file at the root of the file system (/). This is required to make Docker Swarm clustering work with overlay networking, as per this link.

#Above sample for one LXD container (CNT-1), repeat for CNT-2

3. Docker Machine to provision LXD Containers

Docker-Machine does not have a native driver for LXD, hence we’re using the generic driver, which leverages SSH to provision the container. Run the below from the host machine console.

docker-machine create \

--driver generic \

--generic-ip-address=<IP Of CNT-1> \

--generic-ssh-user "docusr" \

--generic-ssh-key ~/.ssh/id_rsa \

cnt-1

This will automatically provision LXD Container (CNT-1) with Docker and related tools.
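Provisioning can be sanity-checked with standard Docker-Machine commands (output will vary by environment):

```shell
# cnt-1 should be listed with State "Running" and its LXD container IP
docker-machine ls

# Ask the freshly installed engine inside CNT-1 for its version over SSH
docker-machine ssh cnt-1 "docker version"
```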

4. Docker Client CLI, to manage Docker Containers

All set. Run below from host machine console.

Point your Docker client at the LXD container (CNT-1), run an NGINX web server in a container, and access it from a browser anywhere on your LAN.

docker-machine env cnt-1

eval "$(docker-machine env cnt-1)"

docker run -d -p 8000:80 nginx

curl $(docker-machine ip cnt-1):8000

#or take browser and navigate to http://<IP of CNT-1>:8000

docker-machine env -u



Download Commands and Reference Links