Launch gsheet (and other google docs) in web app mode

It has always bugged me that Google Docs shortcuts (.gsheet, .gdoc etc.) take over a tab in my Chrome window whenever they're clicked. I had the same beef with Gmail. I want email to load as an app, not a tab in my browser, so very early on I started doing this;

chrome.exe --app="http://mail.example.com"

This is great. I get an app on my Windows taskbar, not a tab hidden away in clusters of browsing sessions. Until today, the way to get this for Google Docs had eluded me. Unlike SkyDrive (now OneDrive), where Excel, Word and friends are first-class citizens and share a file format between desktop MS-Office and the cloud (a significant advantage IMHO), .gsheet files are simply links back to the cloud, and they look like this;

{"url": "https://docs.google.com/spreadsheet/ccc?key=0AbCdEfGhIjK", "resource_id": "spreadsheet:0AbCdEfGhIjK"}

They’re executed in the Windows shell with googledrivesync.exe, which (stupidly) just launches Chrome without the --app flag. So it should be easy enough to fix, but I wasn’t about to fire up VS2013 and find a home for a project structure just to do that. I knew PowerShell was the solution, but I’ve never bothered to learn it. Today, however, whilst managing to avoid learning much beyond my near-zero knowledge of PowerShell, I cobbled this together;

<# example
powershell chromeApp.ps1 'myspreadsheet.gsheet'
#>

if ($args.Count -gt 0) {
    # The .gsheet file is JSON; pull out its url property
    $url = Get-Content $args[0] | Out-String | ConvertFrom-Json | Select-Object -ExpandProperty url
    # Build the path to chrome.exe and launch in app mode via the call operator
    $chrome = Join-Path $env:USERPROFILE 'AppData\Local\Google\Chrome\Application\chrome.exe'
    & $chrome --app=$url
}

So I saved the file as chromeApp.ps1 to a folder in my path and then modified the shell registry entry for .gsheet with the following command;

reg add HKEY_CURRENT_USER\Software\Classes\GoogleDrive.gsheet\shell\open\command /f /ve /t REG_SZ /d "powershell chromeApp.ps1 '%1'"


And there we have it. Now when I click on .gsheet files in the shell I get a new, frameless window, not a browser tab. The above example just caters for .gsheet but it’s easy to add the other file formats of course.
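For example, assuming the other document types follow the same GoogleDrive.<ext> ProgID pattern (the .gdoc and .gslides key names below are my assumption – check under HKEY_CURRENT_USER\Software\Classes for the exact names), the equivalent commands would be;

reg add HKEY_CURRENT_USER\Software\Classes\GoogleDrive.gdoc\shell\open\command /f /ve /t REG_SZ /d "powershell chromeApp.ps1 '%1'"

reg add HKEY_CURRENT_USER\Software\Classes\GoogleDrive.gslides\shell\open\command /f /ve /t REG_SZ /d "powershell chromeApp.ps1 '%1'"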

Dialog Boxes are Evil

In terms of user experience, if dialog boxes are generally evil, modal dialog boxes are the devil himself. Dialogs are UI concepts from what is slowly (and gladly) becoming a deprecated model. The 30-year-old ‘GUI’ is being re-fashioned with a focus on touch-screens rather than mouse and pointer. Overlapping rectangular windows are being replaced with more web-style page navigation and interfaces (see Windows 8), yet we still see dialog boxes on web pages today.

Jakob Nielsen cites improved user awareness as an advantage of modal dialogs: "When something does need fixing, it’s better to make sure that the user knows about it". For this goal, the light-box design provides strong visual contrast of the dialog over the rest of the visuals, and the light-box technique is now a common tool in website design [Wikipedia]. I don’t disagree with that, but it seems we are presented with modal dialog boxes far more frequently than just when the user really needs to be involved, even on the web. A classic example, found in almost every application on the desktop or the web, is this one (taken from Windows 7 in this case);

[Screenshot: Windows 7 file-delete confirmation dialog]

Curiously, this file is actually headed for the recycle bin, so the operation is reversible and we don’t really need to be asked. Recycle bins are good because they let the user make reversible mistakes, but here we also get an unnecessary intervention. At least Windows is mature enough to support keyboard shortcuts: [Return] to confirm and [Esc] to cancel. This is something a lot of web-page GUI clones would do well to emulate.

Presented with enough of these interventions, the user soon becomes de-sensitised to the warning. Windows 7 is riddled with “are you sure?” queries like this. For instance, UAC transfers the responsibility of system protection to the user, who is in many cases the least qualified party to decide whether opening a file is safe. In essence, one could argue that the computer is saying, “I don’t want the responsibility of the thing that I’m about to do… so, if it goes wrong, it’ll be your fault”.

[Screenshot: Windows UAC elevation prompt]

To most users, this intervention just gets in the way. Pressing ‘No’ means the application won’t run; pressing ‘Yes’ means it will. Sure, it’s adding a guard layer to ensure the user is aware that this program is about to make system changes, and it provides an opportunity to back out. I’m not against that if you’re going to run with administrative permissions on the desktop; it’s merely a design compromise based on the path of least resistance.

I don’t want to get distracted by whether it’s a good idea to show the user this dialog in the first place, but having done so, note that it’s a “modal” dialog, meaning I can’t do anything other than press ‘Yes’ or ‘No’. There’s no third option.

Nielsen is correct here. This operation should be modal, but does it really need to be a dialog? If all I can do is choose one of two options, and I have a 15-inch screen with a resolution of 1280×1024, why do I have to target one of two bulls-eyes only a little taller than my mouse pointer?

In fact, Microsoft has had a 30-year love-affair with the MessageBox. You only have to use Kinect to see evidence of this unhealthy obsession. Even a full-body motion-controller interface has its share of dialogs with yes/no options, and it’s far worse there because you need to somehow find that tiny button whilst floating your hand around in 3D space.

If there are only two options, and (computer) you need my attention that badly, why not just give me something genuinely full-screen like this;

[Mock-up: full-screen confirmation split into two halves – cancel on the left, proceed on the right]

OK, it’s Ugly with a capital ‘U’, but I’m just making a point. First of all, in the western world we read left to right, so progressing forward is the right-hand option; the left-hand half of the screen is to go back or cancel the operation I’m about to do. For Kinect this would map perfectly to left-arm/right-arm. I no longer need to locate a tiny target; I can use a whole-body gesture: left arm or right arm. Since there’s nothing else I can do, why not use the whole screen, or the whole body if I have a full-body controller?

Now I’m not for a second suggesting that we turn all dialog boxes into full-screen option buttons; dialogs are still evil however they’re styled. But let’s take a look at a very common example in Google Docs. We see the “Are you sure?” warning on delete operations all the time, but Google even have this on document creation in Docs;

[Screenshot: Google Docs confirmation dialog shown on document creation]

Nowhere near as mature as Windows, this GUI does NOT support keyboard shortcuts at all, so I’m forced to pick up the mouse and locate that [x] in the top right. Try accurately hitting that on a touch-screen. But a more important question to ask is whether you really need a dialog at all. One could argue that in this case, since the operation is permission-related, we really do need user confirmation before we proceed, but with a re-think it might be possible to design this dialog out altogether.

Going back to common delete operations, this is where Google get it right. If you delete a file in Google Docs it doesn’t ask if you are sure; it just does it, but gives you a reverse operation (undo).

[Screenshot: Google Docs delete notification with an Undo link]

This is known as “user forgiveness”, and it’s exactly the same thing that Windows does with the Recycle Bin. Announced just yesterday, SkyDrive now has this too. Additionally, the web already has a “go back” concept with the browser navigation buttons, which could be used in a lot of places where dialogs are chosen instead.

At Nupe, we favour user forgiveness over blame transference every time, and we’re working very hard to design out both the “are you sure?” model and other legacy GUI concepts that aren’t touch-friendly, such as dialogs, horizontal scrollbars and cascading menus. You’ll see the benefit of this effort over the coming months.

One Line System.Web Caching Template for .Net

Whenever I have something that rarely changes but takes a little while to calculate, I like to cache it, typically in ASP.Net’s web cache. One example might be a bit of JavaScript I want to return minified when I can’t use System.Web.Optimization for some reason. So I was finding myself repeating the same algorithm over and over;

1) Is the item cached already? If yes, return it immediately.

2) Build the item.

3) Cache it for next time.

4) Return the item.

It’s not many lines of code, but it’s nevertheless prone to cut-and-paste error and requires a new unit test each time. Then I came up with this neat little shortcut;

    using System;
    using System.Web;
    using System.Web.Caching;

    public class Caching
    {
        // Fall back to the runtime cache when there's no request context (e.g. unit tests)
        static Cache _cache = HttpRuntime.Cache;

        public static T CacheThis<T>(string key, Func<T> resolverFunc)
        {
            // Prefer the current request's cache if ASP.Net is serving a request
            Cache cache = HttpContext.Current != null ? HttpContext.Current.Cache : _cache;

            // Look for a cached version of key (test as object so value types don't throw)
            object cached = cache.Get(key);
            if (cached != null)
                // Smashing, we can return this from the cache
                return (T)cached;

            // Resolve, cache, return
            T output = resolverFunc();
            cache.Insert(key, output, null, Cache.NoAbsoluteExpiration, Cache.NoSlidingExpiration);
            return output;
        }
    }

So basically I templatised the algorithm. Now whenever I want to cache anything I can wrap my code in this neat little one-liner;

            var scriptName = "minifiedMainScript.js";
            return Caching.CacheThis(scriptName, () =>
            {
                // Load the resource in the normal, uncached way
                var text = MyLoaderClass.Load(scriptName);
                return text;
            });

This could literally be written on a single line, but you can put as much logic in that code block as you want; it won’t run unless it needs to. Easy, huh?
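For instance, the same call collapses to;

            return Caching.CacheThis("minifiedMainScript.js", () => MyLoaderClass.Load("minifiedMainScript.js"));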

A Couple of Quick and Dirty Ways to Bootstrap Bare EC2 Instances

So you want to remotely launch a server instance in the cloud, configure it and install your app? Well, unlike Windows Azure, Amazon Web Services don’t launch with an embedded app, and they don’t have a tightly embedded way to run startup tasks… or do they? EC2 instances are little more than vanilla installs of the OS, with perhaps SQL Server and IIS. Bootstrapping a bare machine with a vanilla OS is a very old skill, and I’ve done this a number of ways over the years. It’s gotten easier with every generation of Windows (man, those NT4 servers were hard to build remotely) as better tools have become standard on a vanilla install, robocopy and icacls to name but two.

Now unless you want to open up firewall ports and start pushing stuff to your AWS instances psExec-style (hey, I said “dirty”, not completely filthy like those NT4 days), you’ll want the machine to fetch its own config and apps on startup. Ideally, get hold of git and bring your app down from your git repository. So what are the options on AWS;

1) Create a custom AMI
Do this either with all your setup baked in or with some bootstrap code such as cloudinit. Whilst you can do this to a lesser or greater extent, there’s still some maintenance involved. Once you fork the standard Amazon AMIs you’re no longer using an off-the-shelf image, so you have to maintain it yourself. To be honest, this isn’t really a big deal since Windows Update can take care of a lot of it for you, but you’ll still want to roll the latest versions into the image, and this all chews up admin time. At some point, you need to take the hit and rebuild your AMI. It’s a shame that (unlike the Linux AMIs) the standard Windows AMIs don’t (yet) come with something like cloudinit already loaded. If they did, I’d use them in a flash.

2) CloudFormation
This seems to be the premier option, where they take care of all this for you, but at extra cost per instance. It’s a negligible cost mind you, but it’s something else to set up as well. [edit] Turns out I read this wrongly. There’s no additional charge for CloudFormation, but it doesn’t look like there’s any magic either – it will simply orchestrate options (1) or (3) here. So I guess there are really just two options.

3) Old-School
The ‘notepad’ way is basically to script your own setup. For that you’ll need at least to be able to run a script on the box, and you’ll need to use that to configure the machine and install all your software.

User-Data

AWS gives you just enough rope to do that, via EC2Config and user-data. EC2Config is a service built into the standard AMIs that runs on startup. User-data, along with a bunch of other instance information, is just a bit of text or a file; you specify it when you start the instance, either programmatically or in the web UI if you’re launching by hand. AWS then puts it verbatim on a private URL for the machine; http://169.254.169.254/latest/user-data/.

That’s great, but there’s no CURL.EXE or WGET.EXE on a standard build of Windows Server 2008, so on its own that’s not a lot of use. However, the standard AMIs do sort-of pass this to you: as of June 2012, EC2Config will run PowerShell scripts if contained within <powershell/> tags, or batch (cmd) files if contained within <script/> tags. Well, it’s no MSI download, but that’s a good result!
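To make that concrete, a minimal user-data payload might look something like this (just a sketch; the log path is arbitrary);

<script>
REM This runs once at first boot, courtesy of EC2Config
echo Bootstrapping >> c:\bootstrap.log
time /t >> c:\bootstrap.log
REM ...configure the machine and kick off the real setup from here
</script>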

Programs From Text

OK, so I can do almost anything with PowerShell, but what can I do with a quick and dirty script? Well, the answer is not a lot really, because you can only work with the programs already installed on the box. You aren’t going to be able to, say, go and pull an MSI out of S3. Well, here are two ways;

1) Map a network drive to a WebDav folder and use it like a conventional network drive. Copy off the files you need and run them.

2) Just like on Unix, we’ve had compilers on the box since .Net first shipped, so if we can get a script onto the machine, we can dynamically create an executable.

1 – Map a Network Drive to a WebDav folder

So this has worked since at least Windows XP as far as I know;

c:\> net use s: http://mywebserver/WebDav /user:username *mypassword*

You can then use robocopy.exe on the s: drive as if it were a normal drive. But where to get a WebDav server? Obviously IIS can act as a WebDav server, but who wants to maintain another server just for that? The ideal place would be S3, you’d think, but that doesn’t support WebDav. Google Docs/Drive, DropBox? Nope! Well, SkyDrive does! Microsoft have been supporters of WebDav from the beginning, and I think that’s commendable. You’ll need to find your SkyDrive URL first, but OK, off we go then, right? Wrong. Try it and you’ll get this;

System error 67 has occurred.
The network name cannot be found.

It turns out that WebDav support isn’t installed by default on Windows Server; it requires the DesktopExperience feature (yuck), which has a pre-requisite of InkSupport (double yuck – WTF?), so that’s the first thing our script needs to do;

dism /online /enable-feature /featureName:InkSupport /NoRestart
dism /online /enable-feature /featureName:DesktopExperience /NoRestart
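With those features installed (a reboot may be needed before they take effect), the SkyDrive mapping looks something like this – the CID-style identifier, folder and credentials here are all placeholders for your own;

net use s: https://d.docs.live.net/0123456789abcdef/bootstrap /user:someone@example.com mypassword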

Now, it’s a good idea to make our script idempotent, so it can run at each and every startup without penalty in case of crashes or reboots. Unfortunately, EC2Config deletes our user-data script after the first run, but that’s easily defeated with a small hack;

REM Copy the startup file to somewhere we can find it after this copy is deleted
if exist c:\bootstrap goto bootstrap
md     c:\bootstrap
icacls c:\bootstrap /grant "NT AUTHORITY\SYSTEM":(OI)(CI)F
REM Make ourselves persistent (%~f0 is this script's own full path)
if /i not "%~f0"=="c:\bootstrap\bootstrap.cmd" copy /y "%~f0" c:\bootstrap\bootstrap.cmd
schtasks /create /tn bootstrap /RU "NT AUTHORITY\SYSTEM" /tr c:\bootstrap\bootstrap.cmd /sc onstart /f

Now you’d think you could simply put yourself in the HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunServices key, but that hasn’t worked in a long time, so we use the scheduled-task trick to get ourselves run on each and every startup. We should also make sure that our script doesn’t install features unnecessarily (we may be running the script many times, after all) and we should have plenty of logging. For good measure, why not copy that log back to SkyDrive? Another good idea is to allow us to change the script for a booting instance if necessary: once we have a mapped drive, we can simply look for a new script on SkyDrive, as sketched below.
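That last idea might look something like this in the script (a sketch – s: is the drive mapped earlier, and the folder layout is my assumption);

REM If a newer copy of the script exists on the mapped drive, refresh our persistent copy
if exist s:\bootstrap\bootstrap.cmd xcopy /d /y s:\bootstrap\bootstrap.cmd c:\bootstrap\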

Here’s the whole script (unfortunately, WordPress won’t allow me to upload .txt files): bootstrap.docx

2 – Dynamically Create an Executable from user-data Text

I don’t know which of these hacks is the dirtiest, but I’ll leave that for you to decide (and comment on) when you see this next one. Basically, it’s a batch file that compiles itself into a .Net executable. From C# you can do anything you need, including connecting to S3 and downloading whatever setup files you want.

It’s a nasty habit I picked up in my scripting days. I’ve used a similar technique to write .JS files that will either run in cscript or fully compile themselves into JScript.Net executables. The trick relies on the fact that batch files are interpreted line by line, and the interpreter just complains and carries on when a line errors. We only get to pass a single file to our instance in user-data, so we use a C# comment (/*) on the first line; cmd complains about it but then continues to run the next line as a command, and so on. We put the C# inline further down and simply skip over it with a goto statement. The target label (:end) for the goto is similarly hidden from the C# compiler using another comment block.

Essentially, csBootstrap.cmd looks like this;

/* This file MUST be less than 16K for AWS
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\csc.exe /nologo /out:bootstrap.exe /Reference:C:\Windows\Microsoft.NET\Framework64\v4.0.30319\System.Net.dll "%0"
goto end
*/

using System;
using System.IO;
using System.Net;

static class Program
{
    static void Main(string[] argv)
    {
        //Console.WriteLine("Do whatever you like here");
        var wc = new WebClient();
        using (var data = wc.OpenRead(argv[0]))
        {
            using (var reader = new StreamReader(data))
            {
                string s = reader.ReadToEnd();
                Console.WriteLine(s);
            }
        }
    }
}
/*
:end

REM ** Run the program
BootStrap.exe "http://169.254.169.254/latest/meta-data/"
REM */

Note that the script runs the C# command-line compiler (csc.exe) against itself, and although the file has a .cmd extension, that’s fine by the compiler as long as the content it sees is valid C#. The penultimate line simply runs the newly created executable with a parameter that fetches the list of meta-data properties available to the EC2 instance, but of course we could do just about anything from this code, including configuring IIS, downloading files or installing git.exe.

Now, to make this more robust we would need some logging, and ideally we wouldn’t hard-code the location of the framework. I’ve left the example above as simple as possible, but the following piece of script at the top would make it better;

/* This file MUST be less than 16K for AWS
 
@echo off
@setlocal
REM ** Determine the latest NetFx installed (note that the delims here are a tab followed by a space)
REM ** Note this is also the best way to determine if a key exists since REG nearly always returns ERRORLEVEL==0
FOR /F "tokens=3* delims=     " %%A in ('reg query "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework" /v InstallRoot') do set FXRT=%%A
FOR /F "tokens=2* delims=     " %%A in ('reg query "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v2.0.50727" /ve') do if %%Ax==REG_SZx set FX=v2.0.50727
FOR /F "tokens=2* delims=     " %%A in ('reg query "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v3.0" /ve')       do if %%Ax==REG_SZx set FX=v3.0
FOR /F "tokens=2* delims=     " %%A in ('reg query "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v3.5" /ve')       do if %%Ax==REG_SZx set FX=v3.0
FOR /F "tokens=2* delims=     " %%A in ('reg query "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319" /ve') do if %%Ax==REG_SZx set FX=v4.0.30319
FOR /F "tokens=2* delims=     " %%A in ('reg query "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319\SKUs\.NETFramework,Version=v4.5" /ve') do if %%Ax==REG_SZx set FX=v4.0.30319
:build

then we’d just amend our compiler command like this;

%FXRT%%FX%\csc /nologo /out:bootstrap.exe /Reference:%FXRT%%FX%\System.Net.dll "%0"

How I got .Net 4.5 RC Running in a Windows Azure WebRole

UPDATE (28 Sep 2012): Now that Windows Azure officially supports .Net 4.5 and the dev tools are fixed with the release of the Azure SDK 1.8, this post is largely deprecated.

However, if you get the following message when you try to deploy for the first time after upgrading your local build;

The feature named NetFx45 that is required by the uploaded package is not available in the OS * chosen for the deployment.

You’ll need to change the osFamily in your service configuration (.cscfg) from 2 to 3;

<ServiceConfiguration ... osFamily="3"/>

UPDATE (7 Sep 2012): I’ve since wrapped this in a plugin and I’m now using it for .Net 4.5 RTM (includes MVC web apps, naturally) with the Azure SDK and Tools for Visual Studio 1.7 SP1 

Initially I was very excited when I saw this post which seemed to suggest that the 1.7 release of the Azure SDK would now allow me to build solutions in .Net 4.5 and Visual Studio 2012 RC.

Unfortunately, like many other announcements, it was very misleading. It does NOT work out of the box. If you look carefully at the pictures, this post, like many others, very subtly chooses [.Net Framework 4.0] and NOT 4.5 as the title would suggest. The comments on the post suggest there may be workarounds, but I couldn’t find a single “power user” explanation of how to “work around the blockers”. “How hard could these workarounds be to find?”, I asked myself. Well, it turns out that although the solution isn’t that complicated, the effort involved in hunting down the stoppers was way more of a time-sink than I’d have liked. If you’d like to save yourself days of pain, read on.

Blocker #1 – No .Net 4.5 Runtime on the Azure Images

This one is pretty easy to solve once you know how to build a Windows Azure startup task, though I chose in the end to build mine using the semi-official/semi-documented plugins feature. I started with the Azure Plugins Library as a base but wrote my own in the end. Startup tasks have to be idempotent and error-free; any deviation from that and your role will hang seemingly indefinitely as it tries to recover by running the scripts over and over again. A task like installing the .Net runtime, for instance, isn’t something you want repeated endlessly just because your command script returned a non-zero exit code. By default, plugins for the 1.7 Azure SDK go here;

%ProgramFiles%\Microsoft SDKs\Windows Azure\.NET SDK\2012-06\bin\plugins

To develop my plugin I used a junction (mklink /j) to my developer source directory, which is under git control. It looks like this;

[Screenshot: the plugin’s folder layout under the SDK plugins directory]
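The junction itself is a one-liner – something like this, where the source path on the right is a placeholder for your own working copy;

c:\> mklink /j "%ProgramFiles%\Microsoft SDKs\Windows Azure\.NET SDK\2012-06\bin\plugins\NetFX45RC" "c:\dev\plugins\NetFX45RC"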

I’d read Magnus’ post about upgrading the framework on the Azure instance, and although he’d had trouble getting the web installer to run, I didn’t fancy putting the whole 50MB installer into my .cspkg, and putting it into blob storage seemed like overkill, so the file you see above (although named ‘full’) is actually the web installer.

The csplugin simply looks like this;

<?xmlversion="1.0" ?>
<RoleModule
  xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"
  namespace="Axxiant.Net45RC">
  <Startup>
    <TaskcommandLine="baseLineUpgrade.cmd"executionContext="elevated"taskType="simple" />
  </Startup>
  <ConfigurationSettings>
  </ConfigurationSettings>
  <Endpoints>
  </Endpoints>
  <Certificates>
  </Certificates>
</RoleModule>

It runs as a “simple” task so that it’s synchronous: the role won’t continue its setup until the task completes successfully (returns ERRORLEVEL==0 from baseLineUpgrade.cmd). Here’s that cmd file, influenced by various posts and their comment threads;

@echo off
REM http://www.magnusmartensson.com/post/2012/04/02/howto_put_net45_beta_and_aspnetmvc4_beta_on_windowsazure.aspx
REM http://blog.smarx.com/posts/windows-azure-startup-tasks-tips-tricks-and-gotchas
REM http://www.davidaiken.com/2011/01/19/running-azure-startup-tasks-as-a-real-user/ 
:start
echo :start****************************************  >> baseLineUpgrade.txt
time /t >> baseLineUpgrade.txt
echo **********************************************  >> baseLineUpgrade.txt
echo REM Install .Net 4.5 RC : WARNING - this does take several minutes of startup time and then reboots >> baseLineUpgrade.txt

:jobs

:dotNetFx_check
echo :dotNetFxCheck >> baseLineUpgrade.txt
reg query "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319\SKUs\.NETFramework,Version=v4.5" >> baseLineUpgrade.txt
if %ERRORLEVEL%x==0x goto dotNetFx45_installed
echo Need to install NetFX45RC >> baseLineUpgrade.txt

echo dotNetFx_RUNNING=%dotNetFx_RUNNING% >> baseLineUpgrade.txt
if %dotNetFx_RUNNING%x==truex goto skip_dotNetFx45
set dotNetFx_RUNNING=true
echo set dotNetFx_RUNNING=true >> baseLineUpgrade.txt

echo REM Change the location of App Data for running 32 bit install tasks on Win64 (downloading installers like WebPI have trouble with this otherwise) >> baseLineUpgrade.txt
@echo on
md "%~dp0appdata"
reg add "hku\.default\software\microsoft\windows\currentversion\explorer\user shell folders" /v "Local AppData" /t REG_EXPAND_SZ /d "%~dp0appdata" /f  >> baseLineUpgrade.txt
@echo off

echo REM Run FULL setup if we have it, otherwise WEB setup - see flags /passive vs /q >> baseLineUpgrade.txt
REM Could also use WebPI Command Line tool http://msdn.microsoft.com/en-us/library/windowsazure/gg433059.aspx
if     exist .\dotNetFx45_Full_x86_x64.exe echo Running FULL installer... >> baseLineUpgrade.txt
if not exist .\dotNetFx45_Full_x86_x64.exe echo Running WEB  installer... >> baseLineUpgrade.txt
if     exist .\dotNetFx45_Full_x86_x64.exe start /wait .\dotNetFx45_Full_x86_x64.exe /q /serialdownload /log "%~dp0appdata\dotNetFx45_setup.log"
if not exist .\dotNetFx45_Full_x86_x64.exe start /wait .\dotNetFx45_Full_setup.exe /q /serialdownload /log "%~dp0appdata\dotNetFx45_setup.log"

echo REM Restore Local AppData >> baseLineUpgrade.txt
reg add "hku\.default\software\microsoft\windows\currentversion\explorer\user shell folders" /v "Local AppData" /t REG_EXPAND_SZ /d %%USERPROFILE%%\AppData\Local /f  >> baseLineUpgrade.txt

REM no need to set dotNetFx_RUNNING=false
goto doesntAppearToNeedReboot
echo REBOOT ***************************************************  >> baseLineUpgrade.txt
REM shutdown /r /t 0 >> baseLineUpgrade.txt
:doesntAppearToNeedReboot

:dotNetFx45_installed
echo :dotNetFx45_installed >> baseLineUpgrade.txt
:skip_dotNetFx45
echo :skip_dotNetFx45 >> baseLineUpgrade.txt

:next_job

:exit
echo :exit >> baseLineUpgrade.txt

Notice there’s plenty of logging (which you need), and the script defends itself against being executed either more than once or unnecessarily.

You’ll need to add a “Cloud Project” to your solution, for which you need to have installed WindowsAzureTools.vs110 for Visual Studio on top of the 1.7 SDK.

To include this plugin, add an Import section to your ServiceDefinition.csdef as follows;

<?xmlversion="1.0"encoding="utf-8"?>
<ServiceDefinitionname="MyProjDotCom"xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"
schemaVersion="2012-05.1.7">
 
  <WebRolename="MyProj.www"vmsize="ExtraSmall"enableNativeCodeExecution="true">
       <!-- omitted stuff -->
    <Imports>
       <!--omitted stuff -->
      <ImportmoduleName="NetFX45RC" />
    </Imports>
       <!-- omitted stuff –>
  </WebRole>
</ServiceDefinition>

Blocker #2 – Visual Studio Refuses to Build a Cloud Project that contains .Net v4.5 Code

So now you’ve created your cloud project and added in your web project; build it and you’ll get this;

Windows Azure Cloud Service projects currently support roles that run on .NET Framework version 3.5 and 4.  Please set the Target Framework property in the project settings for project ‘MyProj.www’ to .NET Framework 3.5 or .NET Framework 4.

Thanks a lot VS! Well after much deliberation, I decided to fix this by patching the targets in the 1.7 SDK;

c:\> notepad "%ProgramFiles(x86)%\MSBuild\Microsoft\VisualStudio\v11.0\Windows Azure Tools\1.7\Microsoft.WindowsAzure.targets"

And edited line 1784 from this;

  Condition="$(_RoleTargetFramework) == 'v3.5' Or $(_RoleTargetFramework.StartsWith('v4.0'))">True</_IsValidRoleTargetFramework>

to this;

Condition="$(_RoleTargetFramework) == 'v3.5' Or $(_RoleTargetFramework.StartsWith('v4'))">True</_IsValidRoleTargetFramework>

OK, now it will build and package. The trouble is, it creates a broken package and you find that your role just won’t start. It turns out this is for two reasons. Firstly, Visual Studio packages up a waIISHost.exe.config file with a missing version number, whereas you actually want one that looks like this;

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <startup useLegacyV2RuntimeActivationPolicy="true">
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
  </startup>
  <runtime>
    <NetFx40_LegacySecurityPolicy enabled="false" />
  </runtime>
</configuration>

Secondly, it packs a RoleModel.xml which contains an incorrect version number pointing to v3.5. Both of these cause the role to fail. It should look like this;

      <EntryPoint>
        <NetFxEntryPoint assemblyName="MyProj.www.dll" targetFrameworkVersion="v4.5" />
      </EntryPoint>

But due to some trigger I never fully resolved, VS always sets this to v3.5 as soon as you try to build a v4.5 package. Yes, you read that right: it doesn’t even set it to v4.0; it falls back to some default when it can’t figure out what to do.

I couldn’t for the life of me find out how to get Visual Studio to do the right thing here. As soon as you use a v4.5 based role VS just creates a bad package. This is presumably another reason why they didn’t ship with this feature in the RC.

Workaround #1 – using csPack
You can bypass Visual Studio’s packaging with csPack, but along with having two build scripts now, you’ll also need to jump through those other hoops of specifying a physical folder for the web project on the command line AND in your ServiceDefinition.csdef;

Packaging with csPack.exe;

cspack \vs11Projects\MyProj\Azure11\Azure2012\ServiceDefinition.csdef /out:\vs11Projects\MyProj\Azure11\Azure2012\bin\Release\app.publish\MyProjDotCom.cspkg /role:MyProj.www;\vs11Projects\MyProj\www\obj\Release\AspnetCompileMerge\Source /rolePropertiesFile:MyProj.www;\vs11Projects\MyProj\AzureCommandLine\roleProperties.txt

WARNING: Make sure the physical directory you specify is NOT your project folder. It must be a published folder (I pointed mine at the pre-compiled folder) otherwise you’ll ship all your source code and developer web.configs in the package instead of the properly built web site.

The second piece of magic here is to use a roleProperties.txt file thus;

TargetFrameWorkVersion=v4.5
EntryPoint=MyProj.www.dll

Workaround #2 – using Visual Studio
Now don’t get me wrong, I’m all for scripted builds, but I’d rather use msbuild over my Visual Studio project file than a completely separate build script which could easily fall out of sync with my project. Just so you know, I’m not proud of this next kludge, but at least it produces consistent builds that deploy and start up. As I said, I couldn’t for the life of me work out how to get VS to package up the correct files for a v4.5 role, so since I was already patching the Azure instance, I decided to do the same for these glitches, which I assume will get patched by the Azure team later.

I simply ship a correct version of the waIISHost.exe.config file with my WebRole project (remembering to set the file to ‘Copy Always’);

[Screenshot: waIISHost.exe.config included in the WebRole project with Copy to Output Directory set]

And then it’s simply a matter of running another file, copyWaIIShost.cmd;

copy Installation\waIISHost.exe.config \base\x64\. /Y >> Installation\copywaIIShost.txt

…and calling it from your ServiceDefinition.csdef with;

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="MyProjDotCom" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition" schemaVersion="2012-05.1.7">
  <WebRole name="MyProj.www" vmsize="ExtraSmall" enableNativeCodeExecution="true">
      <!-- omitted -->
    <Startup priority="1">
      <Task executionContext="elevated" taskType="background" commandLine="Installation\copyWaIIShost.cmd"/>
    </Startup>
  </WebRole>
</ServiceDefinition>

I chose to keep this kludge with the project rather than make a plugin, but either would work. Note that plugins execute from a dynamically attached disk (initially E:, but it can change) in E:\plugins\<yourPluginName>, whereas startup tasks run from E:\AppRoot\bin. In my case, my files copy to E:\AppRoot\bin\Installation, but keep in mind the current path will still be E:\AppRoot\bin, so my copy command needs to reference the Installation path as well.

Blocker #3 – IIS AppPools Incorrectly configured

So when deploying to Azure this time, the machine configures and the role starts, but the ASP.Net site itself still fails to load. In fact, for an MVC project you’ll probably get this;

[Screenshot: ASP.NET error page – MVC routing not working]

Basically, routing isn’t working because IISConfigurator.exe (or whatever calls it) can’t figure out how to configure a v4.5 site, and so chooses the default, which is v2.0 and a non-integrated pipeline. It’s possible that somewhere above I’ve made too many 4.5 changes and that something would have been happy with a v4.0 framework tag (since v4.5 is an in-place upgrade of v4.0), but I don’t know where.

Again, I spent hours spelunking through the Microsoft.WindowsAzure.ServiceRuntime code with ILSpy but couldn’t quite figure out where it was failing, or what the trigger would be to get it to configure the site as it would for a v4.0 project. However, at this point, having spent way too much time on the problem, I was getting quite comfortable with my kludges, so I added this to the copyWaIIShost.cmd file;

%windir%\system32\inetsrv\appcmd set config -section:applicationPools -applicationPoolDefaults.managedRuntimeVersion:v4.0 -applicationPoolDefaults.managedPipelineMode:Integrated >> appcmd.txt

Basically, I changed the defaults for newly configured application pools. To my astonishment (and immense relief), two pieces of luck were on my side: firstly, my task seemed to run before Azure tried to configure the site, and secondly, it actually picked up the new default. I didn’t have high hopes on that second point from what I was inferring in the ServiceRuntime source, which appeared to be hard-coded to a v2.0 default, but fortunately it clearly wasn’t running that line of code.

Finally

My roles now start. I can consistently, repeatedly and reliably deploy new packages when I need to, at least until the Azure team fix up the Visual Studio extensions, at which point I can remove the kludges. For now, I’m happy and can move on. If you’ve felt the need to read this far, I imagine you’ve been in a similar position recently, so I hope this post has helped you move on too.

Similarly, if somebody can see where I’ve gone wrong and how to avoid Blocker #3 altogether, I’d be very grateful for a pointer.