The CRTC decision on Usage Based Billing

[sc:internet-category ]The CRTC finally released its decision on UBB for the wholesale market (so no change for anyone buying directly from Bell/Rogers/etc.).  The decision is long and complex, but it comes down to a total rejection of Bell's desire to have usage-based billing imposed on independent ISPs.

The CRTC, whether for political reasons (they were told by the government that UBB was unacceptable, after all) or for technical ones (we can only hope, as that would mark a shift to a more reasonable approach by the CRTC), finally came to the right decision.

The decision basically says that network providers like Bell can charge a fixed rate for each last-mile connection, a fixed rate per 100 Mbps of capacity the independent ISP wants into their network, as well as some ancillary monthly charges.  There is also an option to combine all of this into a single flat rate per connection, but the major carriers have all gone with the first option.

This squares perfectly with traditional network pricing: you buy a link, and it doesn't matter how much you use it; you pay for the capacity.
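To make the model concrete, here's a minimal sketch of what an independent ISP's monthly wholesale bill looks like under the capacity-based option.  The rates are Bell's figures from the decision's appendices (see the table below); the subscriber count and peak bandwidth are hypothetical:

```python
import math

def monthly_wholesale_cost(subscribers, access_rate, peak_mbps, capacity_rate_per_100mbps):
    """Capacity-based billing: a flat rate per last-mile connection plus
    a flat rate for each 100 Mbps block of capacity purchased.
    Individual subscribers' usage doesn't appear anywhere in the bill."""
    access_total = subscribers * access_rate
    # Capacity is purchased in 100 Mbps increments, so round up.
    capacity_blocks = math.ceil(peak_mbps / 100)
    capacity_total = capacity_blocks * capacity_rate_per_100mbps
    return access_total + capacity_total

# Hypothetical ISP: 1,000 subscribers on Bell 10 Mbps access ($24.84 each),
# provisioning 500 Mbps of peak capacity at Bell's $2,213 per 100 Mbps.
print(monthly_wholesale_cost(1000, 24.84, 500, 2213))
```

Note what's missing: nothing in that calculation depends on how many gigabytes any individual subscriber downloads, which is exactly why the independent ISPs can keep offering unlimited plans.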

This means vendors like TekSavvy and Acanac can continue to offer services with different bandwidth caps or even no caps at all.

One item I noticed is that there is quite a bit of justification around the cost model in the decision.  It's based on actual costs plus a "reasonable" markup; the exact figure is never revealed in the decision, but we can say it's somewhere between 15 and 25 percent.

The really juicy part of the decision is in the appendices where the CRTC lays out the costs:

Monthly access rate (without usage)

Providers: Bell, Cogeco, MTS Allstream, RCP (Rogers), Videotron

0.5 Mbps: $14.11, $11.97
2 Mbps: $14.11
3 Mbps: $12.73, $12.31, $12.79*
5 Mbps: $14.11
6 Mbps: $24.70
7 Mbps: $24.70, $14.78, $15.37*
10 Mbps: $24.84, $14.25
12 Mbps: $24.84
14 Mbps: $15.06
15 Mbps: $19.06, $22.35
16 Mbps: $24.98
25 Mbps: $25.00, $21.00
30 Mbps: $24.98, $23.77
32 Mbps: $23.08
50 Mbps: $42.05, $22.69, $26.89
120 Mbps: $37.01

Capacity per 100 Mbps: $2,213 (Bell), $2,695 (Cogeco), $281 (MTS Allstream), $1,251 (RCP), $1,890 (Videotron)

There are several interesting things to note here:

  • Videotron has some weird speeds (see * above); for simplicity, I matched them as closely to the competition as possible.
  • Bell basically admits that the cost difference between speeds comes down to which technology you are on: ADSL costs $14/month and ADSL2 costs $25/month.
  • The cable providers all pretty much agree there's a sliding scale of increased costs as rates increase.
  • Bell and Cogeco seem to be significantly more costly than their competition.
  • There's a clear disconnect between the providers on their capacity costs; it's not logical that MTS Allstream can provide the same service at 10% of the cost of Cogeco.
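That last point is easy to verify with a quick comparison, using the approved per-100 Mbps capacity rates from the decision's appendices:

```python
# Approved capacity rates per 100 Mbps, from the decision's appendices.
capacity_rates = {
    "Bell": 2213,
    "Cogeco": 2695,
    "MTS Allstream": 281,
    "RCP (Rogers)": 1251,
    "Videotron": 1890,
}

cheapest = min(capacity_rates, key=capacity_rates.get)

# Show each carrier's rate as a percentage of the most expensive (Cogeco).
for name, rate in sorted(capacity_rates.items(), key=lambda kv: kv[1]):
    pct = rate / capacity_rates["Cogeco"] * 100
    print(f"{name:15s} ${rate:>5,} ({pct:.0f}% of Cogeco)")
```

MTS Allstream's $281 really does come out to roughly 10% of Cogeco's $2,695, with a nearly 10x spread between the cheapest and most expensive carrier for what is nominally the same commodity.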

So what should we take from this?  It looks like everyone should move to where MTS Allstream is available, since they're going to have significantly lower costs in comparison to everyone else.  But in reality, someone needs to look at WHY there's such a big difference in costs.  Without any real competition, the providers can simply continue with incredibly wasteful implementations, because they know they will always be able to make a "reasonable" profit on them.

Getting a competitive environment in place would drive all the costs lower as the providers tried to undercut each other.

The question now becomes what the above pricing means for the average user.  At first glance, it looks like an increase is likely coming once this is implemented on Feb. 1, 2012.

Google and Location Services

[sc:internet-category ]Google has for years tracked the physical locations of WiFi hotspots so it can better support its location services on Android and other products.  Of course, Google isn't the only one doing this, but with the rise of Android they are perhaps the largest collector of this information.

Yesterday they announced a way to opt out of the service for those who have WiFi and don't want to be identified by the location services.

Their solution?  Everyone should rename their SSIDs to include "_nomap" at the end.

Yeah, think about it for a moment.  You've got it.  That's just plain insane.

Let’s list a few things off the top of my head that are wrong with this:

  • It's opt-out instead of opt-in.
  • I have to rename my network and then reconnect all my devices.
  • For non-English-speaking areas, it makes no sense.
  • It limits what I can call my network (32 - 6 = 26 characters).
  • I HAVE TO HAVE A FREAKING UGLY SSID
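To put a number on those last two points: the 802.11 standard caps an SSID at 32 bytes, and "_nomap" eats six of them.  Here's Google's scheme expressed as a sketch in code (the 32-byte limit is from the spec; the rest just follows their proposal):

```python
MAX_SSID_BYTES = 32          # hard limit from the 802.11 standard
OPT_OUT_SUFFIX = "_nomap"    # Google's proposed opt-out marker

def opt_out_ssid(ssid: str) -> str:
    """Return the renamed SSID, or raise if the suffix won't fit."""
    renamed = ssid + OPT_OUT_SUFFIX
    size = len(renamed.encode("utf-8"))
    if size > MAX_SSID_BYTES:
        raise ValueError(f"{ssid!r} is too long to opt out ({size} bytes > {MAX_SSID_BYTES})")
    return renamed

print(opt_out_ssid("HomeSweetHome"))
# opt_out_ssid("A" * 27) would raise: only 26 ASCII characters remain for your own name,
# and fewer still for names using multi-byte (non-English) characters.
```

So anyone whose SSID is already longer than 26 bytes can't opt out at all without shortening their network's name first.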

Let's face it, Google clearly believes that SSIDs (and probably everything else) are public information that they should have access to.  It's the users who should have to do the work to block Google's access.

What it comes down to is simple: is an SSID the street number on your house, or is it a "Home Sweet Home" sign hanging in your kitchen?

If it’s a street number then Google can drive by and read it from off of your property and there’s nothing you can do about it.

If it's a sign in your kitchen, then they can't read it without trespassing on your property (or at least being a peeping Tom).

And here's where it gets tricky: it's both.  It all depends on the intended use of the network.  If you're a McDonald's giving out free WiFi access, then it's public information, like their street address.  But if it's for private use, then it's like the sign in your kitchen.

Google has decided that unless specifically told otherwise, everything is public and that’s just plain wrong.

To make it even worse, they haven't done the logical thing and made it easy for the end user to opt out; they've made it a major headache.  This is presumably to make sure that no one will actually take the time and effort to make the changes required to opt out, and therefore keep this valuable information flowing in.

For a company that claims its motto is "don't be evil", they seem to have made every evil choice possible with this one:

Google Exec 1: So should we make this opt in (good) or opt out (evil)?

Google Exec 2: Oh, opt-out of course, we need that info.

Google Exec 1: Ok, should we set up a website to allow users to at least opt out (good) or not (evil)?

Google Exec 2: A website will be hacked ASAP and some hacker group will opt everyone out, we cannot have that!

Google Exec 1: Well ok, but people have to be able to opt out, otherwise the government will be all over us.

Google Exec 2: I've got it: instead of making it easy, international and quick (good), let's force everyone to do a lot of extra work so they won't do it at all (evil).  That way we can say we're being good by giving everyone the option, but we won't lose any real data that we just have to have!

Google Exec 1: Perfect, write that up in a happy-sounding announcement and add some words about creating an industry standard too!

Ah, to have been a fly on the wall that day ;).

MetroTwit

[sc:software-category ]The beauty of the Metro UI and all the creamy goodness of Twitter!

Twitter, oh how the 140 characters seem so interesting.  But, like all web applications, the web part kind of lets it down.  This is not to say the Twitter website isn't good; it's just that a real desktop app seems so much better.

Hence my search for a desktop Twitter app began shortly after installing the Spaz HD beta on my TouchPad.  I didn't want an app that was simply written in another runtime (Java, AIR or something else), but instead a real desktop app for Windows.

After reading some reviews I found MetroTwit.

As I’ve said before, the Metro UI model that Microsoft has come up with is just an absolute joy to work with and MetroTwit has done a great job of bringing it to a desktop application.

My use of Twitter, like so many people's, is just to follow some interesting people, and MetroTwit allows me to have just my timeline displayed with nothing else on screen.  But of course it allows for all kinds of other functionality as well.  Your mentions and direct messages are displayed by default in columns side by side.

MetroTwit has some very nice features baked right in, including:

  • URL expansion, making those nasty short URLs disappear forever.
  • Built-in viewer for media like Twitpic and YouTube, so you don't have to load a browser.
  • Integration into the Windows taskbar, showing the number of unread messages on the icon.
  • Displays a highlight on the vertical scroll bar where your last-read tweet is.
  • New tweets come up as "toast" messages, which are displayed for a set amount of time; at the top of the message you get a progress bar showing how much time is left, and if you hover over the message, the bar auto-pauses for you.

A feature I’m looking forward to in the future is synchronization of the client settings across multiple installs (I use MetroTwit on three separate systems).

There are a few areas that need polish:

  • Load time is abysmal.
  • If Twitter disconnects for some reason and MetroTwit reconnects, a large dialog box pops up and stays up until you hit the OK button; this should either be a toast message or another column of information that can be disabled.
  • There are a few user interface features that are not obvious at the outset, like how to mark a tweet as read (simply click on it).
  • There doesn’t appear to be a way to pin the trending topics to the main UI.
  • There are (just like Windows Phone and the Zune HD) very limited options for themes.
  • There is no Help file.
  • Once in a while MetroTwit will stop updating the timeline, and hitting refresh does not resolve the issue; exiting and restarting does.  This happens less than once a day, so people who shut off their systems at night may never even see it happen.

MetroTwit is of course still in beta at this time, so some of the above must be taken in stride; however, it is a pretty solid app and it has made my Twitter experience much better.

I'd recommend it to anyone who likes the Metro UI design and is addicted to Twitter.

The Good:

  • Windows native application
  • Beautiful design
  • Great functionality
  • Customizable

The not so bad/not so good:

  • Limited Theme support
  • Load time
  • Still a little rough around the edges, it's beta after all

The Bad:

  • No Help file
  • Seems to hang the timeline after 24+ hours

Firefox and the rapid release schedule

[sc:software-category ]Firefox 6 was released this week and I can't help but wonder: why?

Mozilla has decided that taking months to release a new version of Firefox is too long; with Chrome releasing new versions so often, I guess they feel left behind.

But is it really a good thing to put out a major release so often?  What constitutes a major release?

Over the years I've developed a few projects, both closed source and open source, and to me a major release had better have something big in it.  If you're an end-user piece of software like Firefox, then the user had better be able to see the change.  If you're a server-side piece of software like Apache, then it had best bring new features or simpler administration to the table.

Point releases should be the exact opposite: minor fixes and updates that users and administrators should hardly notice.  Security fixes, bug fixes, a few new options in existing features.

By this definition, Firefox 6 is really only Firefox 5.1.  Maybe not even that.

And it only gets worse: if we start talking about a new release every 6 weeks, in just 5 years we're at Firefox 46.

That is just silly.
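For what it's worth, the arithmetic holds up.  A strict six-week cadence with no skipped or delayed releases actually lands slightly past that estimate (real schedules slip, which pulls the number back toward the mid-forties):

```python
WEEKS_PER_YEAR = 52
RELEASE_CADENCE_WEEKS = 6

def version_after(years: float, starting_version: int = 6) -> int:
    """Major version number after `years` of one release every six weeks,
    starting from Firefox 6 (August 2011)."""
    releases = int(years * WEEKS_PER_YEAR) // RELEASE_CADENCE_WEEKS
    return starting_version + releases

print(version_after(5))   # five years of six-week releases from Firefox 6
```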

And now Mozilla is talking about removing the version number from the about dialog, which of course makes perfect sense when you have a silly numbering system that will make you look foolish every time anyone opens the about dialog.  They say it will still be in the troubleshooting dialog, but come on, it's the about dialog; it should show information about the product, including the version.

And really, who wants to update their browser every 6 weeks anyway?  Oh sure, it should be transparent to the user, except for the 2 or 3 add-ons that fail, or the theme that no longer works, or the new bug that gets introduced.

This is just an outsider's opinion on the matter, but I think the Firefox developers have forgotten that they are not the typical users of their product.  Typical users just want things to work; they hate installing new stuff and hate big numbers even more.  It's why IE 6 kicked around so long: if it works, they don't want to fix it.

Microsoft knows this; they're even running a TV ad at the moment showing a person with a 4-year-old PC saying it's good enough.  They're spending big money to convince users to upgrade, because users hate to upgrade…

…unless they're getting something new and flashy, of course ;).

Hyper-V and OpenVPN

[sc:windows-category ]Converting all my VMware VMs to Hyper-V was a relatively smooth process, and they have been working quite well over the last several months.  However, I have found a slight quirk that I haven't been able to track down yet.

When I'm off site, I use OpenVPN to connect into my network and then often use a virtual PC running on Hyper-V to work with.  This runs fine, and under VMware I could use the remote control application to place an icon directly on my desktop to get to the virtual system.  I have set up a similar icon with Hyper-V and have used it from within the network without issue.

However, I recently tried to connect to the system from a remote location after connecting via OpenVPN and found that after a few seconds, the connection failed with an RPC error.

Curious, I retried the connection from inside the network, using OpenVPN and everything worked fine.

The Hyper-V manager exhibited the same behaviour; just using a standard RDP connection worked fine.

There must be something failing on the OpenVPN side, but I have yet to track down what it is.
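One way to start narrowing it down (a sketch of where I'd look, not a diagnosis): as I understand it, Hyper-V's remote management goes over WMI/DCOM, which needs the RPC endpoint mapper on TCP 135 plus dynamically assigned ports, while the VM console (VMConnect) uses TCP 2179 and plain RDP uses TCP 3389.  A quick reachability check over the VPN would at least separate a port/routing problem from an RPC name-resolution one (the host name below is hypothetical):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and DNS failures alike.
        return False

if __name__ == "__main__":
    host = "hyperv-host.example.local"  # hypothetical Hyper-V server name
    for port, what in [(135, "RPC endpoint mapper (WMI/DCOM)"),
                       (2179, "VMConnect"),
                       (3389, "RDP")]:
        state = "open" if port_open(host, port) else "unreachable"
        print(f"{what:30s} port {port}: {state}")
```

If 2179 and 135 turn out to be reachable over the tunnel while the connection still fails, the problem is more likely the dynamic RPC ports or name resolution (RPC is picky about resolving the host's real name) rather than basic routing.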

The search continues…