
Reading and Writing Office 365 Excel from a Console app using the Microsoft.Graph C# Client API

I needed a console app that reads some inputs from an online Excel workbook, does some processing and then writes the results back to a different worksheet. Because I enjoy pain, I decided to use the new and thinly documented Microsoft.Graph client library. The sample code below assumes that you have a work or education Office 365 subscription.

using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO;
using System.Linq;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Microsoft.Graph;
using Microsoft.IdentityModel.Clients.ActiveDirectory;
using Newtonsoft.Json.Linq;

namespace Excel365Test
{
    /// <summary>
    /// 1) Install Microsoft.Graph NuGet Package
    /// 2) Install Microsoft.IdentityModel.Clients.ActiveDirectory NuGet Package
    /// 3) Register app at https://portal.azure.com/ - need app ID and redirect URL below
    /// </summary>
    class Program
    {
        static void Main(string[] args)
        {
            TokenCache tokenCache = new TokenCache();

            // load tokens from file 
            string tokenPath = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData), "Excel365Test");
            if (!Directory.Exists(tokenPath)) { Directory.CreateDirectory(tokenPath); }
            tokenPath = Path.Combine(tokenPath, "tokens.dat");
            if (System.IO.File.Exists(tokenPath))
            {
                tokenCache.Deserialize(System.IO.File.ReadAllBytes(tokenPath));
            }

            // this is the OAuth 2.0 token endpoint from https://portal.azure.com/ -> Azure Active Directory -> App Registrations -> Endpoints
            var authenticationContext = new AuthenticationContext("https://login.windows.net/your-url-here/", tokenCache);

            // only prompt when needed, you'll get a UI the first time you run
            var platformParameters = new PlatformParameters(PromptBehavior.Auto);

            var authenticationResult = authenticationContext.AcquireTokenAsync("https://graph.microsoft.com/",
                "your-app-id",     // Application ID from https://portal.azure.com/
                new Uri("http://some.redirect.thing/"),         // Made up redirect URL, also from https://portal.azure.com/
                platformParameters).Result;
            string token = authenticationResult.AccessToken;

            // save token so we don't need to re-authorize
            System.IO.File.WriteAllBytes(tokenPath, tokenCache.Serialize());
            
            // use the token with Microsoft.Graph calls
            GraphServiceClient client = new GraphServiceClient(new DelegateAuthenticationProvider(
            (requestMessage) =>
            {
                requestMessage.Headers.Authorization = new AuthenticationHeaderValue("bearer", token);

                return Task.FromResult(0);
            }));

            // test reading from a sheet - in this case I have a test worksheet with a two column table for name/value pairs
            var readSheet = client.Me.Drive.Items["your-workbook-id"].Workbook.Worksheets["test"];
            var readTables = readSheet.Tables.Request().GetAsync().Result;
            string readTableId = readTables[0].Name;
            var table = readSheet.Tables[readTableId].Rows.Request().GetAsync().Result;
            
            // convert page to a dictionary... this doesn't handle pagination
            Dictionary<string, decimal> tableValues = table.CurrentPage.ToDictionary(r => r.Values.First.First.ToString(),
                r => Convert.ToDecimal(r.Values.First.Last, CultureInfo.InvariantCulture));

            // test adding a row to a table with four columns
            // sadly it seems you need this exact format, a regular JArray or JObject fails

            WorkbookTableRow newRow = new WorkbookTableRow
            {
                Values = JArray.Parse("[[\"1\",\"2\",\"3\",\"4\"]]")
            };
            
            var outputSheet = client.Me.Drive.Items["your-workbook-id"].Workbook.Worksheets["data"];
            var outputTables = outputSheet.Tables.Request().GetAsync().Result;
            string outputTableId = outputTables[0].Name;
            var outputResult = outputSheet.Tables[outputTableId].Rows.Request().AddAsync(newRow).Result;

            // the excel unit tests seem to be the most useful documentation right now:
            // https://github.com/microsoftgraph/msgraph-sdk-dotnet/blob/dev/tests/Microsoft.Graph.Test/Requests/Functional/ExcelTests.cs
        }
    }
}
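
One caveat before the setup notes: as the comment in the code says, the dictionary conversion only looks at CurrentPage. I haven't needed more than one page of rows, but if your table grows then the standard Microsoft.Graph paging pattern should work - a minimal sketch, assuming the rows collection page exposes NextPageRequest like other Graph collections:

// accumulate rows across pages - NextPageRequest is null on the last page
var allRows = new List<WorkbookTableRow>();
var rowPage = readSheet.Tables[readTableId].Rows.Request().GetAsync().Result;
while (rowPage != null)
{
    allRows.AddRange(rowPage.CurrentPage);
    rowPage = (rowPage.NextPageRequest == null)
        ? null
        : rowPage.NextPageRequest.GetAsync().Result;
}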

Paste the code into a new console project and then follow the instructions at the top to add the necessary NuGet packages. You'll also need to register an application at https://portal.azure.com/. You want a Native application and you'll need the Application ID and the redirect URL (just make up some non-routable URL for this). Under Required Permissions for the app you should add read and write files delegated permissions for the Microsoft Graph API.

Hope this saves you a few hours. Comment below if you need a more detailed explanation for any of the above.

Shapefile Update

A few people have asked for 3D shape support in my ESRI Shapefile library. I've never got around to it, but CodePlex user ekleiman has forked a version in his ESRI Shapefile to Image Convertor that supports PointZ, PolygonZ and PolyLineZ shapes. If that's what you need please check it out.

Crushing PNGs in .NET


I'm working on page speed and Google PageSpeed Insights is telling me that my PNGs are just way too large. Sadly .NET does not provide any way to optimize PNG images so there is no easy fix - just unmanaged libraries and command line tools.

I have an allergy to manual processes so I've lashed up some code to automatically find and optimize PNGs in my App_Data folder using PNGCRUSH. I can call CrushAllImages() to fix up everything or CrushImage() when I need to fix up a specific PNG. Code below:

// these methods live in a web app helper class - they need System.IO,
// System.Diagnostics, System.Globalization and System.Web.Hosting
public static void CrushAllImages()
{
    try
    {
        string appDataRoot = HostingEnvironment.MapPath("~/App_Data");
        if (appDataRoot == null)
        {
            return;
        }

        DirectoryInfo directoryInfo = new DirectoryInfo(appDataRoot);
        FileInfo[] pngs = directoryInfo.GetFiles("*.png", SearchOption.AllDirectories);
        foreach (FileInfo png in pngs)
        {
            CrushImage(png.FullName);
        }
    }
    catch (Exception ex)
    {
        //...
    }
}

public static void CrushImage(string fullPath)
{
    if (string.IsNullOrEmpty(fullPath))
    {
        return;
    }

    try
    {
        // pngcrush writes its output to the .cng file, which then doubles as a
        // marker that this PNG was already processed on a previous run
        string markerPath = Path.ChangeExtension(fullPath, ".cng");
        if (File.Exists(markerPath))
        {
            return;
        }

        string crushExe = HostingEnvironment.MapPath("~/App_Data/pngcrush_1_7_77_w32.exe");

        ProcessStartInfo psi = new ProcessStartInfo(crushExe, string.Format(CultureInfo.InvariantCulture, "\"{0}\" \"{1}\"", fullPath, markerPath));
        psi.UseShellExecute = false;
        psi.CreateNoWindow = true;
        psi.LoadUserProfile = false;
        psi.WorkingDirectory = HostingEnvironment.MapPath("~/App_Data");

        Process p = Process.Start(psi);
        if (p == null)
        {
            throw new InvalidOperationException("No Process!");
        }
        p.WaitForExit();

        if (File.Exists(markerPath))
        {
            if (p.ExitCode == 0)
            {
                File.Copy(markerPath, fullPath, true);
                File.WriteAllText(markerPath, "Processed");
            }
            else
            {
                SiteLog.Log.Add(LogSeverity.Error, "CrushImage Failed (non-0 exit code) for " + fullPath);
                File.Delete(markerPath);
            }
        }
    }
    catch (Exception ex)
    {
       // ...
    }
}

Minify and inline CSS for ASP.NET MVC

ASP.NET has a CssMinify class (and a JavaScript variant as well) designed for use in the bundling pipeline. But what if you want to have your CSS minified and inline? Here is an action that is working for me (rendered into a style tag on my _Layout.cshtml using @Html.Action("InlineCss", "Home")).

// requires System.Web.Optimization (the Microsoft.AspNet.Web.Optimization NuGet package)
public ActionResult InlineCss()
{
    BundleContext context = new BundleContext(
        new HttpContextWrapper(System.Web.HttpContext.Current), 
        BundleTable.Bundles, 
        "~/Content/css");
            
    Bundle cssBundle = BundleTable.Bundles.GetBundleFor("~/Content/css");
    BundleResponse response = cssBundle.GenerateBundleResponse(context);
           
    CssMinify cssMinify = new CssMinify();
    cssMinify.Process(context, response);

    return Content(response.Content);
}

Note that I'm using this to inline CSS for this blog. The pages are cached so I'm not worried about how well this action performs. My blog is also basically all landing pages, so I'm not worried about caching a non-inline version for later use; I just drop all the CSS on every page.
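
For reference, the hookup in _Layout.cshtml is just that action rendered inside a style tag:

<style>
    @Html.Action("InlineCss", "Home")
</style>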

Personal Finger Daemon for Windows

Did you know that Windows still has a vestigial finger command with just about nothing left to talk to? One of my New Year's resolutions is to bring finger back and unlike the stalled webfinger project I need to make some progress. Here's some C# to run your own personal finger daemon... you just need to create a .plan file in your home directory (haven't done that for a while):

using System;
using System.Diagnostics;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading;

namespace fingerd
{
    static class Program
    {
        private const int FingerPort = 79;
        private const int MaxFingerCommand = 256;
        private const string PlanFile = ".plan";

        private static readonly TcpListener _tcpListener = new TcpListener(
             IPAddress.Any, FingerPort);

        static void Main()
        {
            _tcpListener.Start();

            while (true)
            {
                TcpClient tcpClient = _tcpListener.AcceptTcpClient();
                Thread clientThread = new Thread(ClientThread);
                clientThread.Start(tcpClient);
            }
        }

        static void ClientThread(object client)
        {
            NetworkStream clientStream = null;
            TcpClient tcpClient = client as TcpClient;
            if (tcpClient == null) { return; }

            try
            {
                byte[] command = new byte[MaxFingerCommand];
                clientStream = tcpClient.GetStream();

                int read = clientStream.Read(command, 0, command.Length);
                if (read == 0) { return; }

                ASCIIEncoding asciiEncoding = new ASCIIEncoding();
                string commandText = asciiEncoding.GetString(command, 0, read);

                int endOfCommand = commandText.IndexOf("\r\n",
                    StringComparison.InvariantCultureIgnoreCase);
                if (endOfCommand <= 0) { return; }

                string user = commandText.Substring(0, endOfCommand);
                if (string.Compare(user, Environment.UserName, 
                    StringComparison.InvariantCultureIgnoreCase) != 0) { return; }

                string planPath = Path.Combine(Environment.GetFolderPath(
                    Environment.SpecialFolder.UserProfile),
                    PlanFile);
                if (!File.Exists(planPath)) { return; }

                string plan = File.ReadAllText(planPath) + "\r\n";
                byte[] planBytes = asciiEncoding.GetBytes(plan);
                clientStream.Write(planBytes, 0, planBytes.Length);
            }
            catch (Exception ex)
            {
                Debug.WriteLine(ex);
            }
            finally
            {
                if (clientStream != null)
                {
                    clientStream.Close();
                }
                tcpClient.Close();
            }
        }
    }
}
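
To test it, create a .plan file in your profile directory and point the stock Windows finger client at yourself (the user name here is a placeholder, and you may need to let port 79 through the firewall):

echo Bringing finger back. > %USERPROFILE%\.plan
finger yourusername@localhost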

Fix search on enter problem in BlogEngine.NET

Search on enter has been broken for a while in BlogEngine.NET (I'm running the latest 2.8.0.1 version). Finally got a chance to look at this today and there is a simple patch to the JavaScript to fix it. See the issue I just filed on CodePlex for details.

How to get SEO credit for Facebook Comments (the missing manual)


I've been using the Facebook Comments Box on this blog since I parted ways with Disqus. One issue with the Facebook system is that you won't get SEO credit for comments displayed in an iframe. They have an API to retrieve comments, but the documentation is pretty light, so here are three critical tips to get it working.

The first thing to know is that comments can be nested. Once you've got a list of comments to enumerate through you need to check each comment to see if it has its own list of comments, and so on. This is pretty easy to handle with recursion.

The second thing is that the first page of JSON returned from the API has a totally different format from the other pages. This is crazy and can bite you if you don't test thoroughly. For https://developers.facebook.com/docs/reference/plugins/comments/ the first page is https://graph.facebook.com/comments/?ids=https://developers.facebook.com/docs/reference/plugins/comments/. The second page URL is embedded at the bottom of the first page and is currently https://graph.facebook.com/10150360250580608/comments?limit=25&offset=25&__after_id=10150360250580608_28167854 (if that link is broken check the first page for a new one). The path to the comment list is "https://developers.facebook.com/docs/reference/plugins/comments/" -> "comments" -> "data" on the first page and just "data" on subsequent pages. So you need to handle both formats, as well as the URL being included as the root object on the first page. I don't know why this is the case; you just need to handle it.

Last but not least, you want to include the comments in a way that can be indexed by search engines but isn't visible to regular site visitors. I've found that including the SEO comment list inside the fb:comments tag does the trick, i.e.

<fb:comments href="..." width="630" num_posts="10">*Include SEO comment list here*</fb:comments>

I've included the source code for an ASP.NET user control below - this is the code I'm using on the blog. You can see an example of the output on any page with Facebook comments. The code uses Json.net.

FacebookComments.ascx:

<%@ Control Language="C#" AutoEventWireup="true" CodeFile="FacebookComments.ascx.cs" 
  Inherits="LocalControls_FacebookComments" %>
<asp:Literal ID="LiteralFacebookComments" runat="server" />

FacebookComments.ascx.cs:

using System;
using System.Collections;
using System.Collections.Generic;
using System.Globalization;
using System.Net;
using System.Text;
using System.Web;
using System.Web.Caching;
using Newtonsoft.Json.Linq;

// ReSharper disable CheckNamespace
public partial class LocalControls_FacebookComments : System.Web.UI.UserControl
// ReSharper restore CheckNamespace
{
    private const string CommentApiTemplate = "https://graph.facebook.com/comments/?ids={0}";
    private const string CacheTemplate = "localfacebookcomments_{0}";
    private const int CacheHours = 3;

    public string PostUrl { get; set; }

    protected void Page_Load(object sender, EventArgs e)
    {
        try
        {
            if (!string.IsNullOrWhiteSpace(PostUrl))
            {
                string cacheKey = string.Format(CultureInfo.InvariantCulture, 
                    CacheTemplate, PostUrl);

                if (HttpRuntime.Cache[cacheKey] == null)
                {
                    StringBuilder commentBuilder = new StringBuilder();

                    string url = string.Format(CultureInfo.InvariantCulture,
                                               CommentApiTemplate,
                                               PostUrl);

                    while (!string.IsNullOrWhiteSpace(url))
                    {
                        string json;
                        using (WebClient webClient = new WebClient())
                        {
                            json = webClient.DownloadString(url);
                        }

                        // parse comments
                        JObject o = JObject.Parse(json);
                        if ((o[PostUrl] != null) &&
                            (o[PostUrl]["comments"] != null) &&
                            (o[PostUrl]["comments"]["data"] != null))
                        {
                            // first page
                            AppendComments(o[PostUrl]["comments"]["data"], commentBuilder);
                        }
                        else if (o["data"] != null)
                        {
                            // other pages
                            AppendComments(o["data"], commentBuilder);
                        }
                        else
                        {
                            break;
                        }

                        // next page URL
                        if ((o[PostUrl] != null) &&
                            (o[PostUrl]["comments"] != null) &&
                            (o[PostUrl]["comments"]["paging"] != null) &&
                            (o[PostUrl]["comments"]["paging"]["next"] != null))
                        {
                            // on first page
                            url = (string) o[PostUrl]["comments"]["paging"]["next"];
                        }
                        else if ((o["paging"] != null) &&
                                 (o["paging"]["next"] != null))
                        {
                            // on subsequent pages
                            url = (string) o["paging"]["next"];
                        }
                        else
                        {
                            url = null;
                        }
                    }

                    string comments = commentBuilder.ToString();

                    HttpRuntime.Cache.Insert(cacheKey,
                        comments,
                        null,
                        DateTime.UtcNow.AddHours(CacheHours),
                        Cache.NoSlidingExpiration);

                    LiteralFacebookComments.Text = comments;
                }
                else
                {
                    LiteralFacebookComments.Text = (string)HttpRuntime.Cache[cacheKey];
                }
            }
        }
        catch (Exception)
        {
            LiteralFacebookComments.Text = string.Empty;
        }
    }

    private static void AppendComments(IEnumerable comments, 
        StringBuilder commentBuilder)
    {
        foreach (JObject comment in comments)
        {
            // write comment
            commentBuilder.AppendFormat(CultureInfo.InvariantCulture,
                                        "{0} ({1})\r\n",
                                        comment["message"],
                                        comment["from"]["name"]);

            // also write any nested comments
            if ((comment["comments"] != null) && (comment["comments"]["data"] != null))
            {
                AppendComments(comment["comments"]["data"], commentBuilder);
            }
        }
    }
}
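
And this is roughly how the control drops into the fb:comments element from earlier (the Register path, tag prefix and control ID are made up for illustration - adjust them to wherever the control lives in your project, and set PostUrl to the page's canonical URL):

<%@ Register Src="~/LocalControls/FacebookComments.ascx" TagPrefix="uc1" TagName="FacebookComments" %>

<fb:comments href="..." width="630" num_posts="10">
    <uc1:FacebookComments ID="FacebookComments1" runat="server" PostUrl="..." />
</fb:comments>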

The curious case of the missing slugs (in BlogEngine.net 2.8)

2013-06-16 Update: There is now a patch for the issue discussed below.

I just upgraded to BlogEngine.net 2.8 as it contains a fix for broken links from Facebook. There were a couple of hitches that I'll share in case they help anyone else.

I messed up the first upgrade attempt because the updater utility updates the source folder (containing the newly downloaded 2.8 code) instead of the destination folder (containing the current version of your blog). This is a little odd, and the result was that I uploaded an unchanged instance and then embarrassingly complained that the Facebook bug hadn't been fixed. It had, just not in the folder I was expecting. I probably didn't pay enough attention to the instruction video.

Having got that out of the way I discovered that new posts were appearing with a bad link (to /.aspx instead of /blog-title.aspx). I rarely post using the editor as I have a home-grown post-by-email service running. After a bit of digging it turns out that prior to 2.8 you could leave the slug empty when creating a post, but now this results in the bad link. Luckily there isn't much effort required to fix this, you just need to set the slug before saving the new post:

if (string.IsNullOrWhiteSpace(post.Slug))
{
    post.Slug = Post.GetUniqueSlug(post.Title, post.Id);
}

In the middle of playing with this my live site died and started returning a 500 error. No amount of uploading the working local copy would fix this. Happily Server Intellect have outstanding support and restored a working backup for me in the middle of the night. Thanks chaps!

Catfood: Earth for Android 1.10


I’ve just released Catfood Earth for Android 1.10. You can control the center of the screen manually (the most requested new feature) and also tweak the transparency of each layer and the width of the terminator between day and night. It also starts a lot faster and has fewer update glitches. Grab it from Google Play if this looks like your sort of live wallpaper.

.NET 2.0 and Windows 8

Inexplicably, .NET 2.0, 3.0 and 3.5 are not installed by default in Windows 8 and can’t be installed using the redistributables that worked with previous versions of Windows. You have to go digging in Windows Features to get anything older than 4.0.
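
If you'd rather script it than click through the dialog, DISM can enable the feature from an elevated command prompt (add /Source pointing at the sxs folder on the install media if Windows Update can't be reached):

dism /online /enable-feature /featurename:NetFx3 /all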