Sunday, April 19, 2015

Full Hard Drive Installation of FatDog64-700

With the release of FatDog64-700, many of us want a full hard drive installation to get the maximum performance out of it. But as of now, FatDog64 still does not officially support hard drive installation.

I've already written an article on how to do a full hard disk installation of FatDog64-630 and FatDog64-631; you can read it here. The steps remain almost the same for FatDog64-700, but a few tweaks specific to the 700 series are needed in certain steps.

Before moving into the steps, there are a few caveats and assumptions.

i. The default boot loader provided with FatDog64 (via the Frugal Installer) has issues booting a full install. In fact, I've never succeeded in creating a full install with the default boot loader provided by the FatDog64-700 frugal installer (syslinux). The major problem is that the built-in Frugal Installer installs the files to the '/boot' subdirectory, not to the root of the partition. This causes trouble with a full installation, where you have to pass a 'root=' switch on the kernel command line, and that switch can only point to the root of a partition, not to a subdirectory under it. That is, 'root=/dev/sda2' is a valid switch, but 'root=/dev/sda2/boot' is not.

So I assume you already have an existing boot loader (Grub2 or Grub4Dos). This also means you probably already have a primary Linux OS, such as Ubuntu. You will be modifying the existing boot loader to boot the FatDog64-700 full install.

ii. If you're working in a virtual environment (like KVM), be careful while choosing the disk image controller type: don't select VirtIO Block; choose either SATA or IDE instead. Choosing VirtIO will cause a 'Kernel panic, unable to mount root file system on unknown block device' error while booting the full install.

OK, here we go! I will only list the changes to the steps (provided here for the FatDog64-631 full install) that need to be made for 700.

1. Create and Prepare a new Partition for FatDog64

This step remains the same, but my environment has changed for this experiment. My primary OS is Lubuntu, installed on 'sda1'. I then created a new ext4 partition, 'sda2', for the FatDog64-700 installation using GParted (included on the FatDog64-700 live CD).

image

2. Download the latest FatDog64 ISO file and burn it to a CD

Download FatDog 700 from here.

3. Boot into FatDog64 Live CD

No change, but use the FatDog64-700 ISO file that you downloaded in step #2.

4. Frugal install FatDog64 to the newly created partition

There is a major change in this step compared to the FatDog64-631 install. The FatDog64-700 Frugal Installer installs the files to the '/boot' subdirectory rather than to the root of the partition (as FatDog64-631 does). So we will do the frugal install manually for 700.

4.1 Format your new partition (the one you've prepared for the FatDog64-700 full install) with ext4. You can do this in step #1; see the figure above.

4.2 Now open the FatDog64-700 live CD partition (the one with a CD icon, listed along with the other partitions at the bottom, near the taskbar).

image

Now select both the 'initrd' and 'vmlinuz' files and drag (copy) them to the new partition (the one you've created for the FatDog64-700 install).

image

My machine, after a successful copy operation (a shell equivalent follows below).

image
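For those who prefer the shell, here is a rough equivalent of the copy above. This is a minimal sketch only; the device names and mount points (/dev/sda2, /mnt/sr0) are illustrative and will differ on your machine.

cd /root
mkdir -p sda2
mount /dev/sda2 sda2                          # the partition prepared for FatDog64-700
cp /mnt/sr0/initrd /mnt/sr0/vmlinuz sda2/     # copy kernel and initrd from the live CD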

5. Perform a Layered Full Install

No change, but in FatDog64-700 the SFS file name has changed to 'fd64.sfs'; make a note of it. Also, if your 'unsquashfs' command fails midway (it shows '#killed' before reaching 100%), the cause is missing swap. Switch the swap on first. I have a swap partition at 'sda3', and I switched it on under FatDog64-700 before the file operations. The screenshot below is self-explanatory, and a command sketch follows it.

image
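A minimal sketch of this step, assuming swap on /dev/sda3, the target partition mounted at /root/sda2, and the live CD at /mnt/sr0 (all paths are assumptions; adjust to your setup):

swapon /dev/sda3                      # enable swap first, or unsquashfs may be killed
cd /root/sda2
unsquashfs -f -d . /mnt/sr0/fd64.sfs  # extract the layered SFS onto the partition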

6. Perform a True Full Install

No change. If you're using a SATA hard disk, you can also try the 'pmedia=satahd' switch on the kernel command line.

i.e. "linux /vmlinuz pmedia=satahd root=/dev/sda2 rw rootwait pkeys=us"
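If your existing boot loader is Grub2 (as on Ubuntu), a hedged sketch of a custom menu entry follows. The device and partition references are assumptions for my example layout (FatDog on sda2), so adjust them to yours:

# Added to /etc/grub.d/40_custom on the primary OS, then run 'sudo update-grub'
menuentry "FatDog64-700 Full Install" {
    set root=(hd0,2)     # second partition of the first disk, i.e. sda2
    linux /vmlinuz pmedia=satahd root=/dev/sda2 rw rootwait pkeys=us
}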

7. Fix True Full Install – Restore broken and removed packages in the True Full Install

No change.

8. Reboot.

Conclusion.

Though I've been able to perform the True Full Install with all packages retained, I still see some warnings during boot-up and shutdown, even in the 700 series. If someone has figured this out, please share it in the comments section.

Wednesday, April 8, 2015

Install Remmina-Next in Ubuntu ARM V7 – RaspberryPi2

RaspberryPi2 now supports Ubuntu 14.04, as I mentioned in this article. It now sports a Broadcom quad-core processor of the ARM v7 architecture. So we can turn the RaspberryPi2 into a perfect thin client by installing 'Remmina' (an RDP client).

But Ubuntu 14.04 ships an old version of Remmina (1.0), which has many serious bugs: no ClearType support, no mouse icon theme support in RDP sessions, etc. This makes it virtually unusable as a professional RDP client.

Luckily, these issues have been fixed in the latest versions of Remmina (Remmina-Next, versions 1.1 and 1.2+), which can be installed from the Remmina-Next PPA, as explained in this article. But the PPA is only available for the x86 and x86-64 architectures; unfortunately, no build is available for ARM v7.

The only option is to build 'Remmina-Next' manually from source for ARM v7 on the RaspberryPi2, as explained below.

Note: These steps are based on this article, which builds Remmina on x86 or x64 systems. We're making some tweaks to build it on ARM v7.

1. Install Build essentials for Ubuntu

sudo apt-get install build-essential


2. Install packages required to build Remmina and FreeRDP

sudo apt-get install git-core cmake libssl-dev libx11-dev libxext-dev libxinerama-dev \
libxcursor-dev libxdamage-dev libxv-dev libxkbfile-dev libasound2-dev libcups2-dev libxml2 libxml2-dev \
libxrandr-dev libgstreamer0.10-dev libgstreamer-plugins-base0.10-dev libxi-dev libavutil-dev \
libavcodec-dev libxtst-dev libgtk-3-dev libgcrypt11-dev libssh-dev libpulse-dev \
libvte-2.90-dev libfreerdp-dev libtelepathy-glib-dev libjpeg-dev \
libgnutls-dev libgnome-keyring-dev libavahi-ui-gtk3-dev libvncserver-dev \
libappindicator3-dev intltool


3. Remove and purge old FreeRDP and Remmina packages, if any

sudo apt-get --purge remove freerdp-x11 remmina

sudo apt-get autoclean

sudo apt-get autoremove

sudo apt-get clean



4. Create a build directory for Remmina, in your HOME folder

mkdir ~/remmina_next
cd ~/remmina_next


5. Get latest ‘FreeRDP’ source from GIT

git clone https://github.com/FreeRDP/FreeRDP.git
cd FreeRDP


6. Configure ‘FreeRDP’ for compilation under ARM V7 (RaspberryPi2 running Ubuntu)



Note the switches specific to ARM v7: -DARM_FP_ABI=hard -DWITH_NEON=OFF -DTARGET_ARCH=ARM

cmake -DARM_FP_ABI=hard -DWITH_NEON=OFF -DTARGET_ARCH=ARM -DCMAKE_BUILD_TYPE=Debug -DWITH_CUPS=on -DWITH_WAYLAND=off -DWITH_PULSE=on -DCMAKE_INSTALL_PREFIX:PATH=/opt/remmina_next/FreeRDP .


The above line will make FreeRDP install into /opt/remmina_next/FreeRDP.



7. Compile and install FreeRDP

make && sudo make install


8. Make the system's dynamic loader aware of the new libraries you installed. For Ubuntu on ARM v7:



TODO: Need to verify the path!

echo /opt/remmina_next/FreeRDP/lib/arm-linux-gnueabihf/ | sudo tee /etc/ld.so.conf.d/freerdp_devel.conf > /dev/null
sudo ldconfig


9. Link FreeRDP in /usr/local/bin

sudo ln -s /opt/remmina_next/FreeRDP/bin/xfreerdp /usr/local/bin/
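As a quick sanity check that the freshly built client works, you can try connecting to any reachable RDP host. The address, user, and size below are placeholders; recent xfreerdp builds use this /flag syntax:

xfreerdp /v:192.168.1.10 /u:youruser /size:1280x720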


10. Get latest ‘Remmina-Next’ source from GIT, to your Remmina build folder

cd ~/remmina_next
git clone https://github.com/FreeRDP/Remmina.git -b next


11. Configure remmina for compilation

cd Remmina
cmake -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX:PATH=/opt/remmina_next/remmina -DCMAKE_PREFIX_PATH=/opt/remmina_next/FreeRDP --build=build .


12. Compile remmina and install

make && sudo make install


13. Link Remmina to /usr/local/bin

sudo ln -s /opt/remmina_next/remmina/bin/remmina /usr/local/bin/


14. Run remmina from command line

remmina


Note: In this approach the icons and launcher files are not installed (pointers welcome in the comments section). So you have to run Remmina from the command line, or from the icon in the taskbar (enable the 'Run on startup' option in the Remmina main window so the icon is available in the taskbar after system startup). A sketch of a manual launcher file is given below.
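If you want a menu entry anyway, a minimal hand-written launcher is sketched below. The Exec path matches the install prefix used above, but the file name and categories are my own choices, not something the build generates:

cat << 'EOF' | sudo tee /usr/share/applications/remmina-next.desktop > /dev/null
[Desktop Entry]
Name=Remmina (Next)
Exec=/opt/remmina_next/remmina/bin/remmina
Type=Application
Categories=Network;RemoteAccess;
EOF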



image



 



High Performance Desktop Virtualization Alternative to VirtualBox? Paravirtualized KVM with Spice, VirtManager 1.0.1 and Remmina-Next

I was a long-term user of VirtualBox, totally impressed by its easy-to-use GUI and the features it exposes. It's a perfect tool for a novice to start with virtualization. Below are the features (exposed in the GUI) that I value most.

VirtualBox Best Features:

1. Guest Additions

Guest Additions improve 2D performance, clipboard sharing, and seamless integration, to name a few.

2. Snapshots

Snapshotting your virtual machine, so that you can restore it in the future when something goes wrong.

VirtualBox Disadvantages:

From a performance perspective, though, VirtualBox lags far behind KVM, the bare-metal native hypervisor for Linux environments. I've experienced the performance issues most in the scenarios below.

a. Opening multiple performance-intensive applications at once inside a single VM.

b. Running two or more VMs at the same time.

In these scenarios, VirtualBox truly lags and freezes for a couple of minutes. Its GUI becomes unresponsive until the applications settle.

So VirtualBox can suffice for a beginner, but it is not suitable for a hard-core virtualization user.

Introducing KVM:

With KVM, we've faced no such performance issues in the above scenarios, and that on a machine powered by a Pentium dual-core processor with only 4GB of RAM. In other words, KVM is best suited for such performance-intensive workloads. It scales especially well with more than two VMs and with performance-intensive applications inside a single VM.

Clearly, KVM is the best-performing choice for any desktop virtualization purpose that involves heavy workloads.

But KVM is known to be best suited for servers, not desktops. Why? Many argue that only command-line management tools are available for KVM, and that the GUI management tools provided are so rudimentary that they lack crucial features.

The adoption of any desktop virtualization suite largely depends on its GUI management features, as the audience involved is less experienced end users. Can we make KVM user friendly, with GUI management features similar to VirtualBox's? Yes, we can!

The sections below describe what can make KVM a high-performing desktop virtualization alternative to VirtualBox.

Note: I'm detailing the procedures for a host machine running Ubuntu 14.04 LTS. The steps will be similar for other versions of Ubuntu.

Guest Additions with KVM (Para Virtualization with KVM)

To get the maximum performance out of a VM running under KVM, you have to paravirtualize it.

To learn more about paravirtualization, read this article. In short: you make the VM's disk, network, memory management, and display controllers aware of the underlying virtualization environment they run on, so their behavior can be optimized for virtualization to achieve higher performance.

Now make your guest devices (network/disk/memory/display) paravirtualized (VirtIO/Spice) through the Virt-Manager GUI, as explained nicely in this article, and install the associated paravirtualized drivers.

For KVM-based Windows guests, all these paravirtualized drivers are available as a single package on the Spice portal. Download the Spice Guest Tools and install them in the Windows guest. This works for guests running Windows 7 or lower.

If your Windows guest runs Windows 8 or higher, the above package will not work, as the QXL video driver is not supported there (Windows 8 and higher use the WDDM display driver model, which is different from the Spice QXL video driver). Nevertheless, you can install the paravirtualized drivers other than the QXL display driver on Windows 8 and higher. You can download an ISO file from the Fedora repository that contains the network, disk, and memory management drivers; install these drivers manually on your guest.

For Linux guests, follow this link. Recent versions of the Debian and Ubuntu distributions have these paravirtualized drivers out of the box. For reference, a sketch of the relevant domain XML follows.
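Below is a minimal, hypothetical fragment of the libvirt domain XML that Virt-Manager writes for a paravirtualized guest; the disk image path is a placeholder:

<!-- Hypothetical fragment of a libvirt domain definition -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>   <!-- VirtIO disk -->
</disk>
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>             <!-- VirtIO NIC -->
</interface>
<graphics type='spice' autoport='yes'/>
<video>
  <model type='qxl'/>                <!-- QXL video for Spice -->
</video>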

Now we're on par with VirtualBox Guest Additions.

Note: If you get the error "Cannot display graphical console type 'spice': No module named SpiceClientGtk" in Ubuntu while starting the VM, just install the 'python-spice-client-gtk' module through apt-get, i.e. 'sudo apt-get install python-spice-client-gtk'. This bug has been filed on Launchpad.

Add Snapshot Capability in GUI, for KVM (Upgrade Virt-Manager to 1.0):

The version of Virt-Manager included in Ubuntu 14.04 LTS is 0.9.5. It lacks the most valuable feature of the VirtualBox GUI: snapshots. This is the most-needed feature in virtualization.

Luckily, Virt-Manager 1.0.1 has this feature, but it is not included in Ubuntu 14.04. Follow the steps below to install Virt-Manager 1.0.1 on Ubuntu 14.04.

wget -q -O - http://archive.getdeb.net/getdeb-archive.key | sudo apt-key add -


sudo sh -c 'echo "deb http://archive.getdeb.net/ubuntu trusty-getdeb apps" >> /etc/apt/sources.list.d/getdeb.list'

sudo apt-get update

sudo apt-get install virt-manager



Note: This is for Ubuntu 14.04. For other versions, use the corresponding repository name as mentioned here.



Virt-Manager 1.0.1 supports 'internal snapshots' (available only with qcow2 disk images). Note: it does not support 'external snapshots'.



image

image



Now we have the 'Snapshot' capability from the UI, as in VirtualBox.
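The same capability can also be driven from the command line with virsh; a hedged sketch, assuming a guest named 'win7guest':

virsh snapshot-create-as win7guest pre-update "state before Windows updates"
virsh snapshot-list win7guest
virsh snapshot-revert win7guest pre-update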



Add a Quality 2D Display Experience for Windows8+ KVM Guests (Upgrade RDP Software)



As we've already seen, Windows 8+ does not support the Spice QXL driver, so we still have to fall back to the VNC protocol for display, which gives a terrible UI experience compared to VirtualBox 2D.



A good alternative is to skip the Virt-Manager VNC viewer and just use an RDP client application to reach the guest running Windows 8+. The best option so far is 'Remmina', the best RDP client in the Linux environment.



As with Virt-Manager on Ubuntu 14.04, the version of Remmina included with Ubuntu 14.04 is quite outdated (1.0). It has serious bugs, like no ClearType/font smoothing support, inability to change the mouse icon in RDP, etc.



All these items have been fixed in Remmina 1.1 and 1.2, which can be installed on Ubuntu 14.04 as follows.

sudo apt-get remove remmina

sudo apt-get autoclean

sudo apt-get autoremove

sudo apt-get clean

sudo add-apt-repository ppa:remmina-ppa-team/remmina-next

sudo apt-get update

sudo apt-get install remmina



 

The RDP experience with Remmina-Next is very smooth; it supports ClearType and the mouse themes of the remote machine.


Unparalleled Performance:



With these steps, KVM becomes unparalleled in performance as a desktop virtualization suite.

Tuesday, April 7, 2015

Word/Excel Properties returning NULL/Nothing, Inside Event Sinks or COM Callbacks

While working on a VSTO add-in (Excel/Word automation with .NET), I faced a strange issue. The context: we had a VSTO Word ribbon add-in providing an application-level customization of Word. Within the add-in, we also consume another COM object's event sources (COM event interfaces) through event sinks (COM callbacks).

Within the callback function, we try to access Word object properties like ActiveDocument, document properties, custom document properties, etc. But to our surprise, all of these properties return NULL. There is no clue as to why the properties are inaccessible, and no error is thrown either.

After much troubleshooting, it turned out the issue lies in how .NET manages COM callbacks. By default, .NET uses the built-in free-threaded marshaler (MTA, multi-threaded apartment) to handle COM events and invoke the sinks. It picks an arbitrary RPC thread to handle the incoming COM event, and the callback executes on that thread. COM objects in the MTA are not thread safe by default; you must use your own synchronization mechanism to serialize access.

On the other hand, most COM components with GUI elements (like Office Word/Excel) live only in an STA (single-threaded apartment), which is thread safe by default. This thread safety is managed by the COM runtime itself, using a hidden window and its window messages to synchronize access.

So in our context there are two apartments: the Office system's STA and the .NET runtime's MTA. The callback executing in the MTA needs to access the Office properties, which live in the STA. COM interfaces cannot be accessed directly across apartments; the interface in question (here, an Office property) must be marshaled to the target apartment. By default this is not available for Office objects, i.e. marshaling Office COM objects from their original STA to the MTA where the callback wants to access them.

Since this marshaling fails, the properties come back as NULL.

So what is the solution? Tell the .NET runtime to use the Office STA for the COM event handling, so that both the Office objects and the callbacks live in the same apartment and no marshaling is needed.

To make the callback object join the STA, inherit your callback class from 'StandardOleMarshalObject' instead of using the default free-threaded marshaler. A sample is given below.

[ComVisible(true)]
[ClassInterface(ClassInterfaceType.None)]
internal class YourComCallBackClass : StandardOleMarshalObject, AComEventSinkInterface
{
    short AComEventSinkInterface.EventSinkOrCallbackMethod()
    {
        // Access your Office properties here. The callback now runs in the
        // same STA as Word, so no cross-apartment marshaling is needed.
        Office.DocumentProperties properties =
            Application.ActiveDocument.CustomDocumentProperties as Office.DocumentProperties;
        return 0;
    }
}

The 'ClassInterfaceType.None' attribute is equally needed to make this work.

You can read more on this in the links below.

StackOverflow

MSDN Blogs

MSDN

Monday, April 6, 2015

Dependency Injection Pattern, Explained to a Novice

'Dependency Injection' (DI), or IoC (Inversion of Control), is a buzzword in high-level programming ecosystems like Java, C#, AngularJS, etc.

The basic principle in DI is,

You should (1) program to interfaces, and (2) the caller should provide/inject the required dependencies into the callee.

Beginners have a tough time understanding the concept and implementing it, so I'm planning to provide a basic introduction alongside a real-life example. DI brought design practices close to the real world, much as Object-Oriented Programming (OOP) did for high-level programming languages. As you know, the invention of C++ from C was revolutionary for programming and led to a plethora of new OOP-based languages such as Java, C#, and Visual Basic.NET. Since computers are made by humans to assist with real-world needs, it is no wonder that modeling a program on the real world gives big advantages.

OK, now let's see a real-world example that shows DI in action.

Rewind to the day you were allocated to your first account in your company. Most probably everything was given to you when you started with your project: your desk, computer, project ID, etc. In other words, these dependencies were injected into you rather than you grabbing them yourself. So this is

DI in action, where your required dependencies are injected into you rather than you finding them yourself.

Consider the advantages of this approach. Suppose your machine needs to be replaced: a brand-new machine is again injected to you, without you worrying about it. Or suppose you're reallocated to a new project: you don't have to bother creating a new project ID and tagging yourself to the new project. It is done for you by the process in the company (the framework, we'd say).

If you represent this scenario in a Java/C# class, we have an Account class (the account you belong to) and an Associate class (representing you). See the implementation below.

As you can see in the 'AllocateNewAssociate(...)' function, the 'Machine' and 'Project' are injected into you.

Code Sample1

void AssociateAllocationTest()
{
    int yourEmployeeNumber = <your employee number>;
    IAccount yourAccount = new Account();
    yourAccount.AllocateNewAssociate(yourEmployeeNumber);
}

class Account : IAccount
{
    public bool AllocateNewAssociate(int empNumber)
    {
        // Inject Machine and Project into the Associate
        IAssociate you = new Associate(empNumber, GetFreeMachine(), GetProjectID());
        return you.Allocate();
    }

    public IMachine GetFreeMachine()
    {
        // Probably get the new machine from the Infrastructure team?
    }

    public IProject GetProjectID()
    {
        // Generate a new ProjectID?
    }
}

class Associate : IAssociate
{
    public Associate(int empNum, IMachine machine, IProject project)
    {
        // You got your dependencies
    }

    public bool Allocate()
    {
        return true;
    }
}
Now consider the opposite of DI, the conventional way of programming. See the same example rewritten below. Here you need to grab your machine and project yourself; you instantiate the dependencies as needed. Does this seem plausible? Obviously not. Will you go around and find a machine that hasn't already been allocated to someone else? What about the project: are you able to create it yourself? Most probably you won't even have access to that section of your company's management portal unless you are a project manager or lead.

And the most serious issue: you have concrete references to your dependencies. You're not programming to interfaces; you instantiate concrete types (i.e. HPMachine, WebProject). What if your HP machine needs to be replaced with an IBMMachine? How can you do that without rewriting and recompiling your class? The same goes for the WebProject case.

Code Sample2

void AssociateAllocationTest()
{
    int yourEmployeeNumber = <your employee number>;
    Account yourAccount = new Account();
    yourAccount.AllocateNewAssociate(yourEmployeeNumber);
}

class Account
{
    public bool AllocateNewAssociate(int empNumber)
    {
        // Let the associate get the dependencies himself/herself
        Associate you = new Associate(empNumber);
        return you.Allocate();
    }
}

class Associate
{
    public Associate(int empNum)
    {
        // Get your dependencies yourself
        HPMachine machine = GetFreeHPMachine();
        WebProject newWebProject = GetNewWebProject();
    }

    public HPMachine GetFreeHPMachine()
    {
        // Does this really work out? No.
    }

    public WebProject GetNewWebProject()
    {
        // No way!
    }

    public bool Allocate()
    {
        return true;
    }
}
So you've seen how the DI model helps us inject dependencies rather than having each object handle them itself. This real-life example shows how the DI model is superior to the conventional way of programming (though the latter is easier to implement). In the long run you benefit from the DI model, especially while unit testing.

To see that, let's return to our DI example (Code Sample1). To unit test the 'Allocate' method, you don't have to get a real machine or project. Instead, get dummy (mock) implementations of IMachine and IProject and inject those. See the example below.

Code Sample3

void UnitTestAllocateMethod()
{
    int yourEmpNum = <your emp number>;
    IMachine machine = new Mock<IMachine>().Object;   // Moq-style fake
    IProject project = new Mock<IProject>().Object;
    IAssociate you = new Associate(yourEmpNum, machine, project);
    Assert.IsTrue(you.Allocate());
}
This is how you unit test DI-based programs using fake/mock frameworks.

Now we've covered a real example with DI.

Still, 'Code Sample1' does not fully adhere to DI principles, because in that example we are still instantiating concrete types in some places. The statements below are examples (note the 'new' keyword instantiating concrete types).

Code Sample4

IAccount yourAccount = new Account();
IAssociate you = new Associate(...);
As you can see, you've used the concrete classes (Account, Associate), although you've stored them in interface-typed variables. In a real scenario you won't hard-code class instances like this; these dependencies, too, are configured at a global level during your program's startup. Your program is aware only of interface types and asks a special component to resolve them to concrete types at runtime. So there exists a special component that holds the interface-to-concrete-class mapping and instantiates the appropriate concrete type for a given interface type.

These special components are called 'DI containers'; in them you define the interface-to-concrete-type mappings. Examples of DI containers are:

C#

1. Microsoft Unity
2. Spring.NET
3. NInject
4. Castle Windsor

Java
1. Spring
These DI containers make your life easy: you define your interface-to-concrete-class mappings, and when instantiating a concrete type (for a given interface type it implements) whose constructor arguments in turn represent dependencies, the DI container recursively resolves and instantiates each of those dependencies before returning your object. That's the coolest part of using a DI container. Otherwise you have to write your own code to do that, which is called 'Poor Man's DI'.

So let's polish our example with a DI container (here we'll use Unity, but you can prefer another of your choice).
Code Sample5

// Unity-style API (RegisterType/Resolve/ParameterOverride); other containers
// expose analogous calls
IUnityContainer diC = new UnityContainer();

void RegisterTypes()
{
    diC.RegisterType<IAccount, Account>();
    diC.RegisterType<IAssociate, Associate>();
    diC.RegisterType<IMachine, HPMachine>();
    diC.RegisterType<IProject, WebProject>();
}

void Test()
{
    RegisterTypes();
    int yourEmpNum = <your emp number>;
    IAccount yourAccount = diC.Resolve<IAccount>();
    yourAccount.AllocateNewAssociate(yourEmpNum);
}

class Account : IAccount
{
    public bool AllocateNewAssociate(int empNumber)
    {
        // Inject Machine and Project into the Associate; the container
        // resolves IMachine and IProject for the remaining constructor arguments
        IAssociate you = diC.Resolve<IAssociate>(new ParameterOverride("empNum", empNumber));
        return you.Allocate();
    }
}

class Associate : IAssociate
{
    public Associate(int empNum, IMachine machine, IProject project)
    {
        // You got your dependencies
    }

    public bool Allocate()
    {
        return true;
    }
}
As you can see, there are no more 'new' keywords for our domain types in the program. The number of 'new' keywords is one measure of your program's coupling; if you reduce it to a great extent, your program is considered to have a good design.
As we've already said, the above snippet shows an example of resolving interface types that in turn have further dependencies. In our example, the statement 'IAssociate you = diC.Resolve<IAssociate>(...)' instructs the DI container to look for both 'IMachine' and 'IProject' and resolve them first, as they are referenced as arguments in the 'Associate' class constructor. Since every interface used in our context is already registered with the container, it raises no error and everything resolves silently. Otherwise the DI container would have thrown an error, say if you had not registered 'IMachine' or 'IProject' with it.
Now you have a fair introduction to DI. So why wait? Explore more topics like 'Cross-Cutting Concerns', 'Ambient Context', 'Interception', 'Nested DI Containers', object lifetimes, etc.

The ebooks below will take you into the depths of DI.


Dependency Injection With Unity - Microsoft

Dependency Injection in .NET - Mark Seemann

Improve Remote Desktop (RDP) Performance with Windows (Client and Server Settings)

These tips will benefit those who are experiencing

"poor RDP performance with Windows machines, due to low network bandwidth and server load."

The gain grows with the number of RDPed machines to which you apply these tweaks.

These tips are the outcome of R&D work we conducted to resolve RDP slowness in our environment, where more than 100 machines were RDPed from a remote location. We use the VMware Horizon View Client to connect to our remote machines, but these tips also work if you are using the Microsoft Terminal Services Client (MSTSC).

We need to make changes to both:

the "client machine" (the system you use to connect to the remote machine), through MSTSC settings, and

the "remote machine" (the machine you remotely connect to), through GPO settings.

 

Client Machine Settings

Here you simply need to tweak the Microsoft Terminal Services Client (MSTSC) settings as below.

Step 1: Open the Microsoft Terminal Services Client (i.e. Start->Run->MSTSC).

Step 2: Select the 'Display' tab and set the 'color depth' to the lowest (15-bit), as shown in the figure below.

image

Step 3: Go to the 'Experience' tab and uncheck all settings, as shown in the figure below.

Note: Optionally, keep 'Font smoothing' checked if you're not comfortable with non-anti-aliased fonts.

image

Step 4: This step is optional; here you restrict the resources shared with your remote computer.

The fewer the resources shared, the greater the performance gain.


For audio, choose the 'Do not play' and 'Do not record' options.

image

Uncheck all options except 'Clipboard' (which allows copy/paste between the local and remote machines). The same choices, saved to an .rdp file, are sketched below.

image
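MSTSC can also save these choices to a .rdp file; a hedged sketch of the relevant lines follows. The key names are standard RDP file settings, but verify them against a file saved from your own MSTSC:

session bpp:i:15
allow font smoothing:i:0
disable wallpaper:i:1
disable full window drag:i:1
disable menu anims:i:1
disable themes:i:1
audiomode:i:2
audiocapturemode:i:0
redirectclipboard:i:1
redirectprinters:i:0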

Remote Machine Settings

The above (client-side) settings can only be made if we're allowed to use the Microsoft Terminal Services Client as the RDP client. In some cases the infrastructure does not allow the use of MSTSC, to tighten security, and prefers other RDP clients like VMware View Client or VMware Horizon View.

In such cases, you can tweak the remote machine's settings using GPO (Group Policy Objects) instead of the client machine's.

Moreover, more performance optimization settings are available on the remote machine; whatever we've done with client settings, these remote machine settings outweigh them in sheer scale.

The steps are outlined below.

Step 1: Open the Group Policy MMC snap-in, using Start->Run->gpedit.msc.

image

Step 2: Navigate to 'Computer Configuration->Administrative Templates->Windows Components->Remote Desktop Services->Remote Desktop Session Host->Remote Session Environment' (see the figure below).

Now enable the settings outlined in red in the figure below, configured as follows:

Limit maximum color depth = 15bit
Enforce Removal of Remote Desktop wallpaper = true
Optimize Visual Experience when using RemoteFx = (Screen Capture Rate: Lowest + Image Quality: Lowest)
Set Compression Algorithm for RDP data = optimized to use less network bandwidth
Optimize Visual Experience for Remote Desktop Service Sessions = (Visual Experience = Text)
Configure Image Quality For RemoteFx Adaptive Graphics = Medium
Configure RemoteFx Adaptive Graphics = Optimize for minimum bandwidth usage


image

The above settings are known to dramatically improve RDP performance, as they reduce network bandwidth usage and both server and client load in processing the RDP data.

You can get more on these settings on MSDN by following this link.

Step 3: We can configure the settings below to restrict the redirection of additional resources between the client and remote machine.

We usually do not need these resources to be redirected (especially the printer attached to the remote machine, which we rarely use for any real purpose, so go ahead and disable printer redirection).

By setting the options below (in the red rectangle), we gain more performance. For settings starting with 'Allow...', disable the setting; for settings starting with 'Do not allow...', enable it.

image


The same applies to the 'Printer Redirection' section: enable the settings in the red rectangle.

image

Step 4: Disconnect and then reconnect to the remote machine for the changes to apply.

General Settings

Last but not least, adjust both your client and remote computers for best performance. This setting improves overall system performance.

On both the client and the remote machine, follow the steps below.

Step1: Open Computer properties

image

Step2: Adjust for best performance

Navigate "Advanced system settings"->"Advanced Tab"->"Settings Button"->"Visual Effect Tab".

Select the radio button named 'Adjust for best performance'. If you're a fan of anti-aliased fonts (ClearType text), choose the 'Custom' radio button with only 'Smooth edges of screen fonts' checked.

image

Conclusion

These settings are known to improve RDP and general system performance. I hope someone facing a similar situation benefits from these tips.

Windows Azure Virtual Machines - A Debut Experience

Today I created my first virtual machine in the Windows Azure cloud: a 'Windows 2008 R2' virtual machine.

Feeling pretty excited!
Below are the steps I followed.

1. Login to Azure Portal using your MSDN subscription

2. Click on the '+' sign, in the 'Virtual machines' Tab


image


3. Select 'Virtual Machine' and then 'From Gallery'.
Note: Ready-made VM images, like Windows Server 2008 R2, are available here.


image


4. Choose 'Windows Server 2008 R2' and click '->'.


image


5. Select the hardware configuration for the machine, and the credentials.
Note: I selected dual core with 3.5GB.

image

6. Select a DNS name for your machine.
Note: This lets you access your machine through RDP from anywhere on the internet (not limited to the LAN as with typical RDP), so select a unique name. Also ensure RDP is allowed on both the private and public ports.


image


7. Check the option 'Install the VM Agent', so that RDP is enabled by default.


image
Note: You can also install additional extensions, like Symantec or Microsoft Antimalware, to have antivirus on your system.

8. Click the tick mark. You can now see your new VM being provisioned.

9. Once provisioned, click the 'Connect' icon.


image

10. MSTSC will open up. Click the 'Connect' button.

image



11. Congratulations, your new Windows Azure VM is up and running. Log into it.


image


12. See your VM's performance indicators in the console.


 
image



Windows Azure Virtualization. Major Limitations Compared to OpenStack, KVM.

I play with Windows Azure, VNets, and virtual machines quite often. On a day-to-day basis I also work with KVM and VirtualBox for home use.

Based on my experience, below are the two major limitations I face with Windows Azure.

1. No Nested Virtualization support

2. No Custom DHCP Server

For instance, KVM supports nested virtualization. That means you can access hardware virtualization extensions (AMD-V, Intel VT) inside a KVM guest, so you can install KVM inside another KVM guest and pass hardware-assisted virtualization support down to the nested VMs, giving the nested guests a performance improvement. This nesting can go to any level, at least theoretically. The feature comes in handy if you need to experiment with virtualization in an already virtualized environment. A quick check on a KVM host is sketched below.
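For reference, a minimal sketch of checking and enabling nesting on an Intel-based KVM host (the AMD equivalent uses the kvm_amd module):

cat /sys/module/kvm_intel/parameters/nested    # 'Y' means nesting is enabled
echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm_intel.conf
# reload the module (with all guests shut down) for the change to take effect
sudo modprobe -r kvm_intel && sudo modprobe kvm_intel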

To check nested virtualization support in Windows Azure, I tried to install Hyper-V inside a Windows Azure virtual machine. It failed, reporting that nested virtualization is not supported.

You also cannot install and run your own DHCP server instances in Azure. Azure VMs rely on the built-in Azure DHCP servers to lease IPs. You can, however, run your own DNS servers (like a Windows Server 2012 instance with the DNS role installed).

With KVM, you have the flexibility of running your own custom DHCP server and can configure whatever IP ranges you need.

Will these features ever get into Azure? Someone may shed light on that in the comments section.

AdSense approval with Blogger/BlogSpot accounts, without a Custom Domain. The missing pieces!

I got my AdSense request approved by Google for my BlogSpot account on 03 April 2015. I repeat: the approval is for my BlogSpot account.

So I thought I'd share my experience to help fellow bloggers who are struggling with the same. There are numerous blog posts on this topic (how to get AdSense approval) out there that specifically preach the requirement of a custom domain name. They say, "Gone are the days of AdSense approval for Blogger and BlogSpot accounts. You need to get a custom domain name; that's the very first requirement." This is proven wrong once again: in the year 2015, Blogger/BlogSpot accounts still get AdSense approvals.

I had applied for AdSense three times before, and all applications were rejected. Then I started searching for tips on how to get AdSense approval. All the blogs I landed on mentioned the need for a custom domain name. I was really disappointed and thought of quitting my Blogger account to find my luck elsewhere.

But I contemplated it and decided to go forward once again. I looked at my blog from a visitor's perspective and looked for simple pieces that were missing. All of my content was original, and all of it was about my own experiments. Then I noticed one simple but crucial thing that was missing!

It was my profile details. I had never updated my profile picture or my career and educational background. It is our profile and bio data that give authenticity and identity to a blog. So I uploaded a recent professional photograph, updated my profile and details on Google+, and opted to show my profile details on my BlogSpot pages. See below:

image

Then I applied once again, and voila, it got approved within two days.

So if you're a Blogger/BlogSpot blogger struggling without success to get AdSense approval, don't give up! Don't be discouraged by blog posts that talk about grabbing a custom domain name to get approved. Even today, content is king. If you write original, authentic posts, you have every chance of getting AdSense approval. Look for the critical missing pieces that otherwise seem simple and irrelevant, and give importance to filling them in: your profile details, disclaimer statements, and copyright details, if any.

Good luck and Happy Blogging!

Sunday, April 5, 2015

RaspberryPi 2, A Thinclient to Windows Azure Virtual Machines

This is my very first experiment with the RaspberryPi 2 (quad-core, 1GB model). The idea was to build a low-cost thin client/zero client to RDP into my Windows Azure virtual machines. Comments, reviews, and suggestions from readers are welcome.

A decent enterprise-level thin client on the market (like NComputing) costs around $112, so I thought: why not build a cost-effective one? The effort was to transform the new RaspberryPi 2 into an efficient thin client. The setup costs under $60 (assuming you have a monitor/TV with an HDMI port).

The expedition is detailed below, with a command sketch after the list.

    1. Bought a new RaspberryPi 2 (6x faster, powered by a Broadcom quad-core processor)

    2. Installed Ubuntu (https://wiki.ubuntu.com/ARM/RaspberryPi)

    3. Installed Lubuntu-Desktop (to have a desktop environment)

    4. Installed Remmina (the RDP client)
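A minimal sketch of steps 3 and 4 from a terminal on the Pi (package names as in the Ubuntu 14.04 archives):

sudo apt-get update
sudo apt-get install lubuntu-desktop   # desktop environment
sudo apt-get install remmina           # stock RDP client; see my Remmina-Next post to build a newer one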

The RDP experience is pretty smooth with the Pi, and below are some screenshots from my desk.

New RaspberryPi2:

image
image

Lubuntu Booted To Desktop:
image
image

Windows Azure Virtual Machine (RDPed from Pi2):

image
image