While the proof will still be in what they do with these discussions, I was pleasantly surprised at how the Shaw UBB session I went to played out. Now, it's important to remember the purpose of this sort of discussion for a company. I would be incredibly surprised to see any exact implementations come out of what was suggested, even though the folks running the discussion seemed very enthusiastic and encouraging about most ideas. One of the major challenges for any business is to figure out the right mix of choice vs. convenience. And it was very clear from hearing what other customers had to say that ideas of fairness and what constituted acceptable use varied. Sometimes a lot. And I think that some of the ways in which they varied were quite interesting.
The one point that everyone on the customer side seemed to agree upon was that transparency was essential in the process. Shaw doesn't think the figures from folks like Netflix and Michael Geist, which put last-mile Internet delivery costs anywhere from 1 to 3 cents per GB, are quite right. I had in my head the 10 cents or less per GB that Amazon charges for data transfer on EC2. As someone else pointed out, adding more capacity for Amazon in that case involves running a cable across a room, whereas adding more capacity for consumer Internet potentially involves laying fibre over miles.
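Just as a sanity check on how far apart those figures are, here's a quick back-of-the-envelope comparison. The per-GB rates are the ones mentioned above; the 60 GB of monthly usage is purely my own assumption for illustration.

```python
# Back-of-the-envelope comparison of the per-GB figures that came up.
# The 60 GB of monthly usage below is purely an illustrative assumption.
rates_cents_per_gb = {
    "Netflix/Geist estimate (low)": 1,
    "Netflix/Geist estimate (high)": 3,
    "Amazon EC2 data transfer (rough)": 10,
}

monthly_usage_gb = 60  # hypothetical household

for label, cents in rates_cents_per_gb.items():
    cost_dollars = monthly_usage_gb * cents / 100
    print(f"{label}: ~${cost_dollars:.2f} to deliver {monthly_usage_gb} GB")
```

Even at the highest of those rates, the raw delivery cost of a typical month looks like a few dollars, which is exactly why people want to see Shaw's own numbers.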
Shaw's own introduction was an interesting look into the challenges of providing residential Internet. They're looking at having to do around 500 "node splits" this year. What's that? Well, you have coaxial cable going into houses, but the signal degrades after a fairly short run of cable, so that cable connects to neighbourhood nodes backed by fibre optic lines. Each node can only handle so much traffic before another one needs to be put in. I'm not sure whether that limit comes from the fibre optic bandwidth itself or the processing capacity of the router. Regardless, at that point, if I remember correctly, they'll "split" the node, doubling the capacity and moving half of the residents to the new node. Those nodes feed into more and more central stations along the way, which eventually link to the vast web of systems we all know and love. Now, putting in another node means obtaining permits and equipment, plus a not-exactly-plug-and-play installation. From the decision to split a node to the point where the new one is actually operational, it sounded like anywhere from 6 to 8 months. Certainly not trivial by any stretch. From their perspective, they're worried that next year it'll be 1000 splits, the year after 2000, and so on. They saw a 60% increase from July 2010 to now. They feel this is unsustainable.
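To get a feel for why those numbers scare them, here's a rough projection. The growth rates are my assumptions: their own 500 → 1000 → 2000 worry implies roughly doubling each year, while 60% is the increase they quoted since July 2010.

```python
# Rough projection of why 500 node splits this year worries them.
# Growth rates are my assumptions: 500 -> 1000 -> 2000 implies roughly
# doubling, while 60% was the increase they quoted since July 2010.
splits_this_year = 500
lead_time_months = (6, 8)  # quoted time from decision to an operational node

for growth in (0.60, 1.00):
    splits = splits_this_year
    print(f"Assuming {growth:.0%} annual growth in splits:")
    for year in range(1, 4):
        splits = int(splits * (1 + growth))
        print(f"  year +{year}: ~{splits} splits, each taking "
              f"{lead_time_months[0]}-{lead_time_months[1]} months to complete")
```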
I work with algorithms and software design. If anyone has a better understanding of the above, please enlighten me. I'm sure there are more than a few errors in what I picked up on.
So that's their side of the story, and they laid it out rather compellingly. I walked in with the impression of the big cable companies and telcos sitting on their thumbs, watching the money roll in and crying that the sky is falling. I left with the impression that they were sincerely trying to keep up with demand.
That said, there was a lot they didn't say and/or couldn't answer. Where did congestion tend to be a problem, for example? At the nodes or more centrally? Was it in the capacity of the fibre itself or the speed of the switches? It makes a big difference whether you have to lay a bunch of new fibre to double capacity, as opposed to letting Moore's law do it for you with better components at the nodes. Was it their efficiency in processing/routing traffic at the more central hubs? Problems there can be solved by smarter design and by opening up to competition, as the telcos have been forced to do. So while they laid out a compelling argument that it's not easy keeping up with network demand, there was still a lot of room for debate.
Back to transparency. At the end of the night, this was surprisingly the thing they seemed to feel they would have the most trouble with. I had walked in thinking it would be the easiest. If you can give me numbers that I can independently verify, showing what it costs you to deliver me that GB, then I can decide for myself whether or not you're being fair. I don't have to just take your word for it. And I'm more than willing to pay for my usage by the GB if I feel I'm being treated fairly and you're doing everything you can as a business to be at your best. We know Canada has different challenges than countries where the population is denser. Just show us the numbers. From their point of view, though, keeping those numbers to themselves plays a big part in business strategy. And in this, they seemed a lot like most established businesses these days. Transparency seems natural to us, but it's alien and worrisome to an organization. Hopefully they'll find a way to balance our need to verify their claims with their need to keep some things to themselves, as that seemed to be the single idea that united almost everyone in the room.
Going on to the other suggestions...
I found that a lot of people shared my feeling that if you've got a certain cap and you come in under it one month (and they've demonstrated that they can easily measure that), you should be able to go over by that amount the next month and not be penalized. It's really hard to argue with that one, and they didn't try. In fact, they seemed to like the idea themselves. From reading up on other sessions, it has been a popular one.
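To make the suggestion concrete, here's a minimal sketch of the carry-over rule as I understood it. Nothing like this was actually spec'd out at the session; the cap size and the single-month carry-over are my assumptions.

```python
def allowed_this_month(cap_gb, last_month_usage_gb):
    """GB a customer could use this month before being penalized, under the
    suggested rule: whatever you came in under the cap last month is added
    to this month's allowance. A sketch of the idea only, not anything
    Shaw described implementing."""
    unused_last_month = max(0, cap_gb - last_month_usage_gb)
    return cap_gb + unused_last_month


# Hypothetical 60 GB cap: 40 GB used last month, so 20 GB carries over
# and using 80 GB this month would be fine.
print(allowed_this_month(cap_gb=60, last_month_usage_gb=40))  # 80
```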
Raising the cap to something like 250GB was also suggested, of course. For most users, that was equivalent to saying "don't cap me", at least for now. That's of course where my cynicism rears its ugly head, suggesting that something like this would just be too easy for us and too generous of them, given their initial offer. But who knows?
Not counting non-peak-hour usage against the cap (or at least offering discounted rates for it) was an interesting suggestion. This would work for those of us who have started using online backup solutions and/or occasionally have to download really large files (full games, for example). Upon hearing about our use of online backup, they asked what we would think if they provided an in-house solution for that. A service like Netflix's was also suggested. I can see the benefit from their perspective. This would allow them far more control to make sure the service met their quality standards and to keep it from interfering with more general Internet traffic.

I do have monopoly-type concerns with that, though. There's a reason I prefer Netflix to Shaw on Demand. It's quite simply a better service. And though they might be able to copy it, I don't think they would have come up with it unprovoked. I want them to be a pipe. A very good pipe. And I don't mind if they offer some of their own versions of these services. But I still want to have real choice. I'm intrigued by the backup idea, but if, for example, it didn't offer client-side encryption and encrypted-only storage (so that not even an employee could see what's stored), I wouldn't use it for backup. There's just too much of a chance for a bad employee to compromise personal data unless you're the only one holding the key. I'm able to choose between Dropbox (great interface and tools, but no real encryption, besides creating your own encrypted image and storing that), Jungle Disk (not a great interface, but client-side encryption), and any number of other similar services out there. If the cost of transferring data makes choosing anything other than a Shaw solution prohibitively expensive, that's not enough choice.
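For what it's worth, the kind of client-side encryption I mean isn't exotic. Here's a minimal sketch in Python using the third-party cryptography package; the filenames are made up, and the point is simply that only ciphertext would ever reach the provider.

```python
# Minimal sketch of client-side encryption: the file is encrypted locally,
# and only ciphertext would ever be handed to a backup provider.
# Uses the third-party "cryptography" package (pip install cryptography).
# Filenames are made up for illustration.
from cryptography.fernet import Fernet

# The key stays with you. If the provider never sees it, no employee of
# theirs can read your data, which is the whole point.
key = Fernet.generate_key()
fernet = Fernet(key)

with open("family_photos.zip", "rb") as f:       # local file to back up
    ciphertext = fernet.encrypt(f.read())

with open("family_photos.zip.enc", "wb") as f:   # this is what gets uploaded
    f.write(ciphertext)

# Restoring is the reverse: fetch the ciphertext and decrypt with your key.
with open("family_photos.zip.enc", "rb") as f:
    restored = fernet.decrypt(f.read())
```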
Decoupling speed from usage was also brought up, which was something I hadn't considered. Discussion of it also led to, in my opinion, a much more nuanced way of handling network congestion. I feel that's because it gets to the root of the "capacity" problem. See, the problem isn't so much the total number of bits as the number of bits being transferred at the same time. That's obvious, you say. But that isn't the problem that caps would be solving. Someone listening to Internet radio all day is probably not hurting network speed for others on the same node, even though, when the month is over, they could well be considered a "heavy user". Meanwhile, the person who decides to download a bunch of huge files at a peak hour could be putting a disproportionate strain on the system. The latter may be under their cap but be the actual source of the network capacity issues.
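Some rough numbers make the point. The bitrates here are illustrative assumptions on my part: a 128 kbit/s radio stream running around the clock versus a single hour of downloading at 100 Mbit/s during the busiest part of the day.

```python
# Rough numbers behind the point above. Bitrates are illustrative
# assumptions: a 128 kbit/s radio stream around the clock vs. one hour
# of downloading at 100 Mbit/s during the peak.
radio_kbps = 128
download_mbps = 100

# Monthly total for the all-day radio listener (30 days, 24 hours a day):
radio_gb_per_month = radio_kbps / 8 / 1024 / 1024 * 3600 * 24 * 30
# Data moved by one hour of downloading at full speed during the peak:
peak_hour_download_gb = download_mbps / 8 / 1024 * 3600

print(f"Radio listener: ~{radio_gb_per_month:.0f} GB/month, but only "
      f"{radio_kbps} kbit/s of load on the node at any given moment")
print(f"Peak-hour downloader: ~{peak_hour_download_gb:.0f} GB in that one hour, "
      f"at {download_mbps} Mbit/s of load right when the node is busiest")
```

The radio listener ends up with roughly 40 GB on the monthly bill while barely registering at any instant; the peak-hour downloader moves a comparable amount in a single hour, right when everyone else is competing for the node.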
So, is it more important that any time you download something, you can get it at 100 Mbit/s, or would you rather be free to use 500 GB throughout the month at a rate somewhere between 5 and 25 Mbit/s? Personally, I'd be happy with the latter. Gamers or DB admins who need to transfer large database dumps might prefer the former. The one-size-fits-all approach is still necessary for the vast majority of Internet users, as most won't want to think about it. But more customization would allow customers with highly specialized needs to get what they need without being considered a scourge on the system. The guy who's using 100 Mbit/s at peak hours and downloading a terabyte per month may be out of luck until that becomes the norm, but most people would probably be able to find something that suits them in the meantime.
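To put rough numbers on that trade-off: the speeds are the ones above, while the 20 GB "full game" download size is an assumption of mine.

```python
# Putting rough numbers on the trade-off. The speeds come from the
# paragraph above; the 20 GB "full game" size is an assumption.
def hours_to_transfer(size_gb, speed_mbps):
    """Hours to move size_gb at a sustained speed_mbps."""
    return size_gb * 1024 * 8 / speed_mbps / 3600


game_gb = 20
for speed_mbps in (100, 25, 5):
    print(f"{game_gb} GB game at {speed_mbps} Mbit/s: "
          f"~{hours_to_transfer(game_gb, speed_mbps):.1f} hours")

# And how long 500 GB of monthly usage takes at the slower rates:
for speed_mbps in (25, 5):
    print(f"500 GB at a sustained {speed_mbps} Mbit/s: "
          f"~{hours_to_transfer(500, speed_mbps) / 24:.1f} days of transfer time")
```

Even at the low end of that range, 500 GB fits comfortably inside a month of off-and-on transferring; it's only the big one-shot downloads that really feel the difference between 5 and 100 Mbit/s.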