Finally Gmail
Gmail is taking the compose window out of the corner of your window.
You know what's healthy? Leaving your house to get a snack.
(Published to the Fediverse as: NatureBox #etc #naturebox Healthy is not getting snacks delivered to your door as a subscription... )
Cory Doctorow wrote an article criticizing SiDiM in Publishers Weekly last week:
"But the fact that the basis behind this security measure was countered 25 years ago by employing a simple tool that’s getting into its 40s is not the silliest part of this supposed new DRM breakthrough."
This misses the point about how DRM really works.
Most people are fundamentally honest but reading a book or watching a TV show or installing one more copy of a software program doesn't feel like you're doing anything wrong. If you throw up a small roadblock then this is generally enough to gently remind people that they need to cough up.
If I can install another copy of Photoshop from the shared drive to get my job done then I will. If it asks for a license I'm going to ask my boss for one rather than hunting down a cracked copy. DRM works even if it doesn't do very much.
In fact it works best when it doesn't do much. A more sophisticated DRM is more likely to go wrong and is harder to operate. Highly effective DRM starts to hurt you with support and maintenance costs. When it fails to work or to be fair it can backfire spectacularly and cause a consumer backlash.
I spent a lot of time earlier in my career developing and selling DRM and copy protection. Deals were won on security but successful relationships were built from helping publishers balance the technical possibilities with ensuring that legitimate customers had a good experience.
"The idea that copyright owners might convince a judge, or, worse, a jury that because they found a copy of an e-book on the Pirate Bay originally sold to me they can then hold me responsible or civilly liable is almost certainly wrong, as a matter of law."
This doesn't sound right either. We've seen plenty of record labels and movie studios sue individual copyright infringers, most of whom can't afford to risk a court case even if they have a plausible 'left my laptop unlocked' defense. It wasn't a smart move then, but that doesn't mean book publishers won't deploy the same flawed strategy.
(Published to the Fediverse as: The Economics of Digital Rights Management #etc #drm The utility (or not) of Digital Rights Management has very little to do with the level of security provided. )
The BBC's Fast Track has a good segment on how absolutely miserable British Airways' Avios frequent flyer program is.
I've been using the Facebook Comments Box on this blog since I parted ways with Disqus. One issue with the Facebook system is that you won't get SEO credit for comments displayed in an iframe. They have an API to retrieve comments but the documentation is pretty light and so here are three critical tips to get it working.
The first thing to know is that comments can be nested. Once you've got a list of comments to enumerate through you need to check each comment to see if it has its own list of comments, and so on. This is pretty easy to handle.
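The recursion is simple enough to sketch. Here's a minimal Python illustration (the `flatten_comments` name is my own; the "comments" -> "data" field names match the API response used in the code later in this post):

```python
def flatten_comments(comments):
    """Yield each comment, then recurse into any nested replies.

    Each comment is a dict; nested replies live under
    comment["comments"]["data"], mirroring the Graph API shape.
    """
    for comment in comments:
        yield comment
        # a comment may carry its own "comments" -> "data" list of replies
        nested = (comment.get("comments") or {}).get("data") or []
        yield from flatten_comments(nested)
```

Enumerating the top-level list through this generator visits every reply at any depth.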
The second thing is that the first page of JSON returned from the API is totally different from the other pages. This is crazy and can bite you if you don't test thoroughly. For https://developers.facebook.com/docs/reference/plugins/comments/ the first page is https://graph.facebook.com/comments/?ids=https://developers.facebook.com/docs/reference/plugins/comments/. The second page is embedded at the bottom of the first page and is currently https://graph.facebook.com/10150360250580608/comments?limit=25&offset=25&__after_id=10150360250580608_28167854 (if that link is broken check the first page for a new one). The path to the comment list is "https://developers.facebook.com/docs/reference/plugins/comments/" -> "comments" -> "data" on the first page and just "data" on the second. So you need to handle both formats, as well as the URL being included as the root object on the first page. I don't know why this is the case; you just need to handle it.
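To make the two page shapes concrete, here's a small Python sketch of the parsing step (the `parse_comments_page` name is hypothetical; the key paths are the ones listed above):

```python
def parse_comments_page(page, post_url):
    """Return (comments, next_url) for one page of parsed Graph API JSON.

    The first page nests everything under the post URL as the root key;
    subsequent pages put "data" and "paging" at the top level.
    """
    root = (page.get(post_url) or {}).get("comments")  # first-page shape
    if root is None:
        root = page  # subsequent-page shape
    comments = root.get("data") or []
    next_url = (root.get("paging") or {}).get("next")
    return comments, next_url
```

A fetch loop can then call this on every page and follow `next_url` until it comes back empty, without caring which shape it was handed.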
Last but not least you want to include the comments in a way that can be indexed by search engines but not visible to regular site visitors. I've found that including the SEO list inside the fb:comments tag does the trick, i.e.
<fb:comments href="..." width="630" num_posts="10">
*Include SEO comment list here*
</fb:comments>
I've included the source code for an ASP.NET user control below - this is the code I'm using on the blog. You can see an example of the output on any page with Facebook comments. The code uses Json.NET.
FacebookComments.ascx:
<%@ Control Language="C#" AutoEventWireup="true" CodeFile="FacebookComments.ascx.cs"
    Inherits="LocalControls_FacebookComments" %>
<%-- the code-behind writes the SEO comment list into this literal --%>
<asp:Literal ID="LiteralFacebookComments" runat="server" />
FacebookComments.ascx.cs:
using System;
using System.Collections;
using System.Collections.Generic;
using System.Globalization;
using System.Net;
using System.Text;
using System.Web;
using System.Web.Caching;
using Newtonsoft.Json.Linq;

// ReSharper disable CheckNamespace
public partial class LocalControls_FacebookComments : System.Web.UI.UserControl
// ReSharper restore CheckNamespace
{
    private const string CommentApiTemplate = "https://graph.facebook.com/comments/?ids={0}";
    private const string CacheTemplate = "localfacebookcomments_{0}";
    private const int CacheHours = 3;

    public string PostUrl { get; set; }

    protected void Page_Load(object sender, EventArgs e)
    {
        try
        {
            if (!string.IsNullOrWhiteSpace(PostUrl))
            {
                string cacheKey = string.Format(CultureInfo.InvariantCulture,
                    CacheTemplate, PostUrl);
                if (HttpRuntime.Cache[cacheKey] == null)
                {
                    StringBuilder commentBuilder = new StringBuilder();
                    string url = string.Format(CultureInfo.InvariantCulture,
                        CommentApiTemplate,
                        PostUrl);
                    while (!string.IsNullOrWhiteSpace(url))
                    {
                        string json;
                        using (WebClient webClient = new WebClient())
                        {
                            json = webClient.DownloadString(url);
                        }

                        // parse comments
                        JObject o = JObject.Parse(json);
                        if ((o[PostUrl] != null) &&
                            (o[PostUrl]["comments"] != null) &&
                            (o[PostUrl]["comments"]["data"] != null))
                        {
                            // first page
                            AppendComments(o[PostUrl]["comments"]["data"], commentBuilder);
                        }
                        else if (o["data"] != null)
                        {
                            // other pages
                            AppendComments(o["data"], commentBuilder);
                        }
                        else
                        {
                            break;
                        }

                        // next page URL
                        if ((o[PostUrl] != null) &&
                            (o[PostUrl]["comments"] != null) &&
                            (o[PostUrl]["comments"]["paging"] != null) &&
                            (o[PostUrl]["comments"]["paging"]["next"] != null))
                        {
                            // on the first page
                            url = (string)o[PostUrl]["comments"]["paging"]["next"];
                        }
                        else if ((o["paging"] != null) &&
                            (o["paging"]["next"] != null))
                        {
                            // on subsequent pages
                            url = (string)o["paging"]["next"];
                        }
                        else
                        {
                            url = null;
                        }
                    }

                    string comments = commentBuilder.ToString();
                    HttpRuntime.Cache.Insert(cacheKey,
                        comments,
                        null,
                        DateTime.UtcNow.AddHours(CacheHours),
                        Cache.NoSlidingExpiration);
                    LiteralFacebookComments.Text = comments;
                }
                else
                {
                    LiteralFacebookComments.Text = (string)HttpRuntime.Cache[cacheKey];
                }
            }
        }
        catch (Exception)
        {
            // fail silently - the SEO list is a nice-to-have, not worth breaking the page
            LiteralFacebookComments.Text = string.Empty;
        }
    }

    private static void AppendComments(IEnumerable comments,
        StringBuilder commentBuilder)
    {
        foreach (JObject comment in comments)
        {
            // write comment
            commentBuilder.AppendFormat(CultureInfo.InvariantCulture,
                "{0} ({1})\r\n",
                comment["message"],
                comment["from"]["name"]);

            // also write any nested comments
            if ((comment["comments"] != null) && (comment["comments"]["data"] != null))
            {
                AppendComments(comment["comments"]["data"], commentBuilder);
            }
        }
    }
}
(Published to the Fediverse as: How to get SEO credit for Facebook Comments (the missing manual) #etc #facebook #seo #comments Optimize Facebook comments for SEO by using the API to add comment text before loading the comment box via JavaScript (C# implementation but principle will work anywhere). )
Skype just released a completely rewritten version of their Android client.
It's a nice streamlined UI and for the first time on Android it actually loads the 15,422 chats I'm required to participate in for work, and it's usable and responsive. I've used it for a few days and really want to like it.
But.
Even though it's faster and prettier it still destroys battery life. Imo.im and Plus.im manage to handle multiple networks all day without putting a noticeable dent in the battery. With Skype up and running my phone is dead by the early evening. It's useless.
They also haven't fixed syncing the read state of messages which is the worst deficiency of Skype on both mobile and the desktop. Imo.im did this wonderfully until Skype cut them off at the kneecaps.
Back to Plus.im for now...
There are things I still sort of like about Skype. I use it a lot for video calls (although for work and multi-party video it's pretty much all about Google Hangouts these days). I have a Philips phone that integrates with Skype for international calls (they seem to have discontinued it, and while the calls are cheap the UI is baroque). But the IM is horrible. It can't remember which messages you've seen between devices and so you're constantly trying to figure out what you have and haven't read.
And the IM on the desktop is nothing compared to the horror of the Skype Android app. This slowly spins up and by the time it's loaded previous messages your battery is dead.
Imo.im made Skype IM tolerable on Android and possible on a Chromebook. In the last week it seems that Skype has kneecapped them and blocked their servers from signing in. I'm limping by with IM+ Pro at the moment, but it's slow and buggy and frustrating.
I sympathize with Imo.im. I've been stiffed by Skype before as an officially sanctioned partner so it's no shock that they'd take out this kind of tool.
It would be nice if they could fix mobile and web access to the network first though.
Not to pick on British Airways but yes, that screenshot is real. It's a marketing email opt-out that has not only been pre-checked in favor of spam but has then also been disabled.
(Published to the Fediverse as: Really BA? #etc #ba A British Airways marketing opt-out checkbox that is both pre-checked and disabled for your convenience. )
It has been brought to my attention that I've been whinging too much recently.
So I'd like to take a break from that and say how much I'm enjoying feedly. It's a wonderfully well designed RSS reader. I use the Chrome Extension version and the Android app. It preserves the Google Reader keyboard shortcuts so I can sail through my subscriptions and it brings back social sharing.
I looked at feedly once before and didn't really get it. I thought it was just one of those algorithmic recommendation news manglers that tries to guess what you want to read. It might do that on the home page but the 'All' view is a perfect replacement for Google Reader.
I love it. I want to pay for it to make sure it stays around. Thank you feedly.
(Published to the Fediverse as: Thank you Feedly #etc #feedly #software Feedly is an awesome RSS reader and the ideal replacement for Google Reader for RSS fans. )