Re-run TV?

I got a kick out of this. It was revealed this week that the share of downstream Internet traffic generated by Netflix customers’ streaming movies reached thirty percent in the last measuring period. Thirty percent. But as eye-opening as that figure is, it’s not what I got a kick out of. In some bit of reporting associated with that announcement I learned the following little gem: the pet nickname by which people in Hollywood sometimes sneer at Netflix is “Re-run TV.” I find this genuinely funny. Twenty-five or -six million people (including me) are streaming content from Netflix, comprising thirty percent of all downstream traffic, but Hollywood can still look down on them because they’re just “re-runs.”

In the world these guys grew up in, the one in which their business model was based on total control of content and delivery, there was for each piece of programming a “first run” during which the media biggies allowed people to watch it once, assuming they could be in front of the delivery device at the appointed time. Subsequent performances were “re-runs” for which the media corps were paid big bucks by smaller networks and independent broadcasters. Run, and re-run. And since everything on Netflix has been seen before, why hell it’s all just a bunch of re-runs. In their world, once, and in their dreams now, the viewing public flocks to them en masse for the must-see content, and once that content has been seen they might agree to dribble it out bit by bit to other, clearly inferior outlets.

Meanwhile, on Planet Reality, I got to watch five seasons of Lost, all of Battlestar Galactica, Firefly, four seasons of Rescue Me, Weeds, Big Love, Torchwood, and dozens upon dozens of documentaries and movies, including most recently all the best, campiest Bond flicks from the sixties. Some of the movies I’ve seen before. Most of the television I haven’t. It’s all “first run” to me, and delivered to my computer, in my office, or on either of our two TVs, when I want to watch it. More importantly, the only way any of the networks can get anywhere near having twenty-six million people care what they are doing on a given night is to get two cute royal kids to marry each other. Hard to pull that off regularly.

Is Google Site Blocking a Game Changer?

Google has always had to walk a fine line between profiting from search results and giving users more power over what appears in them. They try to make sure that what we see is relevant to us, while at the same time legions of SEO specialists and their clients try to game the system to make sure we see what they want us to. Recently Google announced a new feature that I think does a lot to destabilize the status quo and tip the scales greatly in favor of we the people.

The new capability allows users who are signed on to their Google profiles to easily block results from sites that they don’t find useful. Here’s a typical usage scenario, and one that I find myself in every day: You type in a search term, get back some results, see one that appears to be relevant, and click on it only to find that the link leads to a preview snippet for a pay site, a link farming blog, a redirect to a registration page, whatever. Some SEO guy has worked hard to make sure you see that useless crap, and now Google will let you nuke it. The next time it happens, and you click the ‘Back’ button in exasperation, look again at the entry in the search results. It will look something like this…

The link that I’ve highlighted in yellow is the new addition. Click that, and the offending site is added to your block list and banished forever from your search results. Now I should probably mention that I don’t mean to beat up on Experts Exchange… well I sort of do. I’m sure they are all nice people over there, and I’m sure there are lots of people who use their service, and I’d be likely to hear from a bunch of them if anyone actually read this blog. But let me explain why Experts Exchange is typical of the annoyances that site blocking cures for me.

I’m a developer, and as such I am constantly Googling for technical information, like everyone else in my business. Don’t recall how to resolve a missing link dependency in an MFC app because you haven’t done C++ in ten years? Google. Want to know whether you can call out to a DLL from a Sidebar gadget? Google. Here’s the experience I don’t want: search, get results, click relevant link text without looking at domain, end up on Experts Exchange with “Sign up to read this solution. It’s free!” splash covering the page. I don’t want to sign up. I want to find a blog post, or forum entry, or bit of API documentation that answers my damn question. If your link turns up in my results, and I click it and don’t get to see the information it promises without taking additional steps to submit form data, that link wasn’t very relevant. I understand the business model. It’s called trolling. I’m sure it works for some people, but it doesn’t work for me.

So I hope that explains why, although I wish the fisherfolk at Experts Exchange all the best, I’m dancing with glee at never having to see their links in my search results again. And that’s why I wonder whether this new capability will shift the ground under existing Internet marketing techniques. If people can click one link and ban J.C. Penney then all that money J.C. Penney spent may turn out to not only have been unproductive, but perhaps counterproductive as well. Banning J.C. Penney will ban all links in search results, not just stupid ones, so there is now a strong incentive not to behave in a way that encourages people to hit that button.

And of course you can remove the site from the blocked list at almost any time. I say almost because I haven’t yet found a way to go directly to the list and edit it, but if you perform a search that would have returned results from the blocked site you’ll see a message that some results were blocked, with a link to a place where you can lift the ban if you’re inclined to give them another chance. Personally I expect my list to have five or six domains on it in short order, and I don’t see any of them getting a second chance anytime soon. Score 1 for me, 0 for the webshapers.

Does IE9 RC Break Netflix in Media Center?

Looks like it might. Hard on the heels of my excellent first impressions of IE9 RC I had a little bit of a jarring return to earth in the matter of installing Beta/RC versions of products. I started Windows Media Center and activated Netflix so I could watch another episode of my current obsession, “Rescue Me” with Denis Leary, and this is what happened after I clicked “Play” and the familiar red screen appeared:

If I had to guess I would say that IE9 RC updated the javascript engine or otherwise changed the way javascript is handled in a page. I’ll try to find a workaround and post it here.

[Update] No work-around, but some additional information. I spoke to Netflix customer support, and they hadn’t heard anything about this issue. The woman on the other end of the call stressed that they are not compatible with IE9 yet, and I guess this proves the point. I uninstalled the IE9 RC Windows Update and after a restart Netflix streaming functionality was restored. So this is a heads-up for people who use Netflix in Media Center, and would like to try IE9 RC.

[Update] The problem was confirmed by poster “Dark Shroud” on the Anandtech forums. The same poster did note that he was able to get Netflix working in the browser, so that will have to be the fallback option for WMC users who want to run the latest Microsoft browser, at least until Netflix releases an update to their player script for Media Center.

[Update 2-24] Poster Marc W. sent a link to a Microsoft KB that supposedly fixed the issue, but didn’t due to a typo. Marc commented again today that it has been updated with the correct registry value, and that it now works. I haven’t had time to test this myself, but give it a go if you’re yearning to use IE9RC and watch Netflix in Media Center: http://support.microsoft.com/kb/2512239.

IE9 RC: Internet Explorer Returns

Microsoft released Internet Explorer 9 RC (Release Candidate) today, and after finishing up some work hacking up a Windows installer script, I downloaded and installed the new version to take it for a spin. IE hasn’t been my default browser for a while now, and I wondered whether MS could regain that coveted slot in the system registry. I switched to Google Chrome over a year ago simply because it was faster to start up, and faster to render sites than IE8. I think most people who have used both would agree with those impressions, although Microsoft always contended that their own tests showed IE8 had a small performance edge. Whatever side of that argument you support, my experience today navigating a number of complex sites with both IE9 and Chrome led me to conclude that Microsoft has leapfrogged the competition.

My purely subjective results are that IE9 now starts and renders faster than Chrome. In addition it scrolls complex sites more smoothly, a benefit of the new hardware-accelerated rendering engine. The interface hasn’t changed dramatically, but it is a little more streamlined. I’m sure there are many more changes to explore under the hood, and I will be interested to run some of my Silverlight and javascript code to see how it behaves and performs, but at the very least this version of Internet Explorer is a definite Chrome competitor. That is a meaningful achievement, and confirms once again that anyone who writes Microsoft off in a market they want to be in, does so at the peril of their business.

SyncToy 2.1 and Windows 7/64

So what gives with SyncToy 2.1 on Windows 7 64-bit? Last night I manually closed Outlook, then after it had exited cleanly I executed my folder sync. As mentioned previously (here, and here, and here) the sync command runs all my “active for sync” folder pairs in SyncToy 2.1, of which there are two. The first backs up c:\users\myname, and the second backs up the singly-rooted folder hierarchy in which I have stored all my stuff since 1987 or so.

I run this before I head upstairs for the night. Twice now I have come down the next morning and found Outlook sitting there with the message “The default storage file cannot be opened,” or something very similar to that. Also, and I had initially disregarded this as coincidence, when Sidebar restarted it only displayed one of three gadgets, and none of its windows would activate off of the desktop context menu. In both cases restarting solved the problems, but Outlook complained on restart that the PST file needed to be scanned.

The last point of interest: I keep my Outlook .PST file in that singly-rooted folder structure I mentioned above, not in the default location. Not sure what’s going on here, but I am beginning to think this one is SyncToy’s issue, not Outlook’s.

More Outlook

So last night, the actions taken by my script caused a problem with Outlook’s storage file. Just to review, what the script did was: a) execute the Outlook.Application.Quit method to instruct Outlook to shut down; b) wait until outlook.exe dropped off the process list; and then c) execute a SyncToy folder pair that backs up the PST file (among many others). Since I haven’t been able to get this working from Task Scheduler yet, I’ve been running it right before I turn off the monitor for the night. In the morning I check the log to see what happened.

This morning Outlook was sitting there with a message “The default data file cannot be opened” or something like that. Gulp. I closed it and Outlook exited. Gulp. Repeated this action a couple more times with the same result. Navigated to the storage file folder in explorer and it seemed like someone had a lock on that file, or there was a hung operation. Rebooted, and when Outlook started this time it displayed the message about the storage file not being properly closed, did a quick scan and then opened fine. Whew. I have something like 10 years of stuff in that stupid 1.2 GB file (and a separate 0.8 GB archive).

Checking the SyncToy log I saw that it reported two failures in that folder, which is what usually shows up when it can’t copy Outlook.pst. That sort of indicates that Outlook didn’t fully close and release its death grip on the file, although the script log did confirm that outlook.exe was not to be found in the process list once the close command had been sent. This was all semi-fun to dig into at first, but it’s sort of become a giant pain in the ass. I ought to be able to back up my Outlook file without shutting down the program to do it. SyncToy uses Volume Shadow Copy, and it backs up a lot of other stuff that is in use too.

To be fair SyncToy does fail on some other stuff in the user folder tree, but I am less concerned about that, and I can understand if some of the goings on in AppData cause files to be uncopyable. But why should my email/calendar program have such a headlock on its storage file that it can’t be backed up using VSC? I’m assuming someone out there knows the answer to this question, and many others of deep and lasting import. It all seems to me like an increasingly good argument for online mail/contact/calendar apps that you don’t have to back up yourself. Or maybe I should just migrate to Thunderbird.

Backing Up Outlook, Continued

So a couple of days ago I wrote a post on the subject of Outlook and how it locks its PST file, making it impossible for SyncToy to copy it in my nightly backup. I don’t know what Outlook does to put such a bear hug on this file. Lots of other stuff is running and touches files that get backed up just fine. After all, SyncToy just wants to copy it, not change it. But for whatever reason Outlook opens that file when it starts and puts “No Trespassing” signs all over it.

In response I came up with a script that would close Outlook and then execute the sync job. On that a couple of points. First, I stated that I used the NirCmd utility because it allowed the application to close gracefully, which implied that the taskkill command does not. I was wrong about that. If you don’t use the /F switch then taskkill does something similar to NirCmd, which is to find the process’s main window and post a WM_CLOSE (or WM_QUIT, WM_BYEBYE, WM_GTFO, who does windows messages anymore?) to its message queue. I don’t think they use the exact same method, because they seem to behave slightly differently, but whatever. They’re similar.

Second, there is a third way to close Outlook.exe. That is by using the automation interfaces that all the Office applications expose. The ProgId of interest is Outlook.Application.

// Requires: using System; using System.Reflection;
// Outlook is a single-instance COM server, so CreateInstance attaches to
// the running copy rather than launching a new one; then we ask it to quit.
Type outlookAppType = Type.GetTypeFromProgID("Outlook.Application");
object appInstance = Activator.CreateInstance(outlookAppType);
appInstance.GetType().InvokeMember("Quit", BindingFlags.InvokeMethod, null, appInstance, null);

That little bit of code will (with some error checking) semi-reliably close Outlook. I say semi-reliably because all these methods seem to occasionally fail for less-than-obvious reasons. Last night in a test the method above failed to close Outlook while it was minimized to the tray, but worked once the window had been restored then minimized again. Go figure.

But even if any of these methods was 100% reliable, and the NirCmd approach seems to come closest, it still won’t work from a service like the task scheduler. I figured this out over a couple of evenings of testing and poking around. The explanation lies in the changes to the way services are launched since Vista/Server 2008/Windows 7. Here is an MS article that discusses the issue. The upshot is that services are launched in window session 0, and the first interactive user logs on to window session 1. The message queues are tied to the window session, so there is no way for a service to interact with the user desktop, or post windows messages to windows on that desktop.

There is still a way to mark services as interactive by checking the “Allow service to interact with the desktop” box in the service properties dialog. However at least for Task Scheduler I can confirm that doing so doesn’t enable it to see and post messages to Outlook.exe’s message queue. So for the moment, I am working on a different approach that I may post more about later.

Learning Geography

I suspect that children are losing track of where stuff is. Not things like socks and backpacks, which they have never been able to locate reliably, but counties, states, nations, rivers, mountains, hemispheres. I already knew that my own kids have no sense of where stuff is in our locality. How could they? They never leave the house other than to strap themselves into a vehicle for transport to some other network-enabled structure. But when one of them made a statement the other day alluding to Portugal’s proximity to China I was a little surprised. I could quickly show her where Portugal is using Google, Bing, National Geographic, but she wouldn’t be interested. She’s a teenager, and doesn’t believe I have enough brain cells left to tie my own shoes.

I often wear slippers, so she may have a point. In any case, people don’t think much about where stuff is anymore. They don’t need to. Our town is where we are, Portugal is at Newark Airport, and everything else is on the web. But supposing they did want to know? What would be the best way to find out? The answer, you might presume, has already been given: just pop open Google Earth or Bing Maps. But unfortunately both of those tools flat-out suck for answering geographic questions. With the appropriate label layers turned on they do fine for things at the scale of countries, so yes you could answer the Portugal question, but they fall to pieces when it comes to geographic features. Quick, open Google Earth and find me the river Vistula.

No, not the Vistula in Houston, Texas, nor the one in Elkhart, Indiana either. The river. Here’s a hint: it’s in Northern Europe. Just zoom in on that general area and search for “Vistula” again. Wow, “Vistula and Wolczanka” is a very popular something in Poland… but still no river. How about the Elbe? The Oder? The Don? Dnieper? Dniester? Rhine? Ok, dammit, just show me the Danube. You must have the freaking Danube. Actually, no, they don’t. Google Earth is an amazing tool, and it’s primarily good at the daunting task of stitching together different imagery of the planet, and of overlaying roads and towns on that imagery. Mountains? Rivers? Estuaries? Peninsulas? Not so much. So let’s try Bing Maps. That must be better, right?

Yes, a little. In the U.S. at least the new Bing beta mapper does a halfway-decent job of labeling some regions, and some bodies of water. At certain elevations it gets the major rivers, but then you scroll out a little and they disappear. In general Bing suffers from place name overload. Some views present you with a vast dense carpet of place names, and no way to filter them out that I can find. But even so it is better than Google Maps, which is specifically and solely about roads and cities. They don’t even bother labeling the Black Sea or the Mediterranean. Forget mountains, and even if you scroll all the way in they won’t tell you what river that blue line represents.

Of course Google and Bing remain the best way to answer all these questions, and perhaps the only way that matters. If you Google “Danube” you’re going to find out which river it is, and where it is. The information is always out there, but just not in the mapping and visualization tools. So consider this a call to web mapping developers everywhere to make their already neat tools more geography-friendly. Give me accessible means for filtering place names (a population slider would be great). Allow me to layer in other features that I want to see. Let me highlight a mountain range, or all the tributaries of a major river. Let me click on a feature and search for its name. Let me visualize ocean currents and prevailing winds, or highlight all the desert environments or forests.

In short, make it easy for me to find out where stuff is on the planet from within your app. And then get to work on my daughter’s backpacks. I’ve bought seventy-five of them and they are all missing.

Getting Outlook out of SyncToy’s Way

I use Microsoft’s SyncToy for nightly backups. In the past I’ve used many different solutions, from custom xcopy scripts to robocopy to the built-in backup program that started shipping with Windows back in the NT days. I also keep complete disk images for those rare occasions when a full recovery is needed. But for making sure that my daily work is saved I haven’t found a better alternative than SyncToy and a second hard disk.

SyncToy uses the Microsoft Sync Framework 2.0 to manage pairs of folders and keep them in sync according to certain rules. To keep some set of files safe all you do is organize them under one or more folders, then use SyncToy to define named folder pairs that match the original content folders with sync folders on the second drive. There are several modes, but I use the “echo” action, which propagates changes from the source folder to the sync folder. I have a scheduled task that runs every night to tell SyncToy to process all the defined folder pairs.
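The “echo” action is essentially a one-way mirror: new and changed files on the left get copied to the right, and deletions propagate too. As a rough illustration of the idea (just a sketch, not SyncToy’s actual logic, which uses the Sync Framework’s metadata and can detect renames as well), an echo pass boils down to something like:

```python
import os
import shutil

def echo(source, dest):
    """One-way 'echo': copy new/changed files from source to dest,
    then remove files from dest that no longer exist in source."""
    for root, _dirs, files in os.walk(source):
        rel = os.path.relpath(root, source)
        target_dir = os.path.join(dest, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            src = os.path.join(root, name)
            dst = os.path.join(target_dir, name)
            # Copy if missing on the right, or if timestamp/size differ.
            if (not os.path.exists(dst)
                    or os.path.getmtime(src) > os.path.getmtime(dst)
                    or os.path.getsize(src) != os.path.getsize(dst)):
                shutil.copy2(src, dst)
    # Propagate deletions: anything on the right with no left-side twin goes.
    for root, _dirs, files in os.walk(dest):
        rel = os.path.relpath(root, dest)
        for name in files:
            if not os.path.exists(os.path.join(source, rel, name)):
                os.remove(os.path.join(root, name))
```

SyncToy’s real change detection is smarter than the size/timestamp check above, but the shape of the operation is the same: one authoritative side, one disposable copy.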

This setup worked great for months, or at least I thought it did. It’s one of those deals where I set something up, make sure it works, and then forget about it. In this case it turns out that the SyncToy command didn’t quite work as well as I thought. The problem was my Outlook.pst mail file. I have Outlook open all the time, minimized to the system tray, and when Outlook is open SyncToy can’t read the .pst file. When I happened to check the log I saw the read errors and realized that my voluminous mail file had never been backed up.

So, what I needed was a command that would close Outlook, run SyncToy, then restart Outlook. After Googling around and experimenting this is what I came up with:

@echo off
echo Synchronizing folders...
echo Shutting down Outlook.exe...
"c:\Mark Personal\System Tools\NirCmd\nircmd.exe" closeprocess outlook.exe
:wait
timeout 2
tasklist /FI "IMAGENAME eq outlook.exe" 2>NUL | find /I /N "outlook.exe">NUL
if "%ERRORLEVEL%"=="0" goto :wait
echo Outlook closed; executing SyncToyCmd...
"C:\Program Files\SyncToy 2.1\SyncToyCmd.exe" -R
echo Sync completed; starting Outlook.exe...
start "" /B /MIN C:\Users\Mark\Desktop\Outlook
echo Outlook.exe started; exiting
@echo on

This script first sends Outlook.exe a shutdown command using a nifty command line tool called NirCmd.exe, from NirSoft. You could also use taskkill, but NirCmd is a gentler method that gives Outlook a chance to properly close its data files, and if you know anything about Outlook, you know you want it to properly close its data files.

After telling Outlook to close the script loops until it can no longer find outlook.exe in the running task list, and then proceeds to run SyncToy for all active folder pairs, and then restart Outlook after it completes. Pretty neat, and I can’t take credit for any of it. Just some stuff cobbled together from bits and pieces of others’ wisdom around the net.

QAMplexity

In the 34 years or so since I was introduced to computers there have been a few moments that really grabbed my attention, and that formed lasting memories. One was the first time I loaded up buttfish.gif on my 12 MHz 286 with its shiny new Orchid graphics card, and saw 640 x 480 pixels of stunning 16-bit color. That was about 1988 or so. Another occurred around 2004 when I was over at a friend’s house relaxing after dinner, and he opened up a window on his desktop. There was a spinning hourglass for a couple of seconds, and then the local newscaster was in that window delivering the 11 PM update. Live television in a window. That was very slick. Later that same friend was an early adopter of a sweet 24″ wide-screen monitor, which he hooked up to an ATSC tuner for over-the-air HD. That was even slicker.

Before long I had my own tuner, an ATI device based on their Theatre-550 chip hooked up to Comcast, running Snapstream’s Beyond TV as a front end. These types of tuners are a combination of television receiver and video capture circuitry. They receive and decode the incoming analog RF signal and send the frames to a digitizer, which converts them to digital and outputs them as an mpeg stream. With this combination of hardware and software I received 70 analog channels, and could do all the requisite DVR things like skip around and record. The quality was so-so, but not any worse than an older analog television, and it was right there on my desktop. I could watch breaking news as I worked on code or a document, record movies, all that good stuff, either full screen or in a window. Neato.

Then came the digital switchover. Suddenly the 70 analog channels that I had been able to receive became 23. Life was dark and colorless. ST:TNG could not be recorded. And thus it remained for some time. I stopped watching TV on my computer, and took to renting movies when I felt like it. DVDs always looked great on my Dell 24″ widescreen; much better than those old analog TV channels, and who the hell needs them anyway? Sniff. More time passed, and I found myself opening my birthday present a couple of weeks ago, and there inside the box was a SiliconDust HDHomerun.

Since the rest of this post isn’t going to necessarily wax all sweetness and light about digital cable on the PC, let me say one thing about the HDHomerun up front: good God, what a cool little device this is! If you aren’t familiar with it, an HDHomerun is a little unit about the size of a 4-port switch. It has two coaxial connections for signal inputs, an ethernet port, and a power brick. You connect the coax from the cable provider, patch it to your router or a switch, and plug it in. Install a little well-written software on your PC (native Windows 7 64-bit included) and voila! You now have two QAM-capable HD tuners streaming beautiful 1080 HD (and whatever else) over the network to whatever QAM-capable front end you want to use. I’m using Windows Media Center, which at least in the Windows 7 version works very well with clear QAM. Wait, what? QAM? What is that, you ask? It sounds like an acronym for a gastric disorder. In fact it is not. It merely causes gastric disorders.

QAM stands for Quadrature Amplitude Modulation, and all the technical details that I don’t understand aside, it is a protocol by which streams of bits are transmitted between the cable company’s digital head end, and the QAM decoder in your set-top box, cable-ready widescreen TV, or HDHomerun. There it may become beautiful HD video, or still-pretty-decent SD video, or information such as program-guide entries, channel assignments, programming metadata, etc. Since the average coaxial cable connection is a very fat pipe indeed, squeezing nearly 40 megabits/second out of a single 6 MHz slice of its more than 750 MHz of bandwidth, the protocol is continually evolving to add additional services. In fact, one of the reasons cable companies are eager to go digital is that the old analog channels take up a big chunk of that 750 MHz that could be used for other stuff.
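Those figures invite a little back-of-the-envelope arithmetic. Assuming the commonly quoted usable payload of a 256-QAM channel (roughly 38.8 Mbit/s per 6 MHz slice, a number I’m taking on faith rather than from the spec), the whole pipe adds up like this:

```python
# Rough downstream capacity of a 750 MHz cable plant carved into
# 6 MHz slices, each carrying ~38.8 Mbit/s of 256-QAM payload.
PLANT_MHZ = 750
SLICE_MHZ = 6
MBIT_PER_SLICE = 38.8

slices = PLANT_MHZ // SLICE_MHZ              # 125 six-MHz channels
total_gbit = slices * MBIT_PER_SLICE / 1000  # aggregate downstream capacity

print(f"{slices} slices, roughly {total_gbit:.2f} Gbit/s downstream")
```

Nearly five gigabits per second into every subscriber’s house, which is exactly why reclaiming the spectrum still tied up by analog channels is so attractive to the operators.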

Unfortunately, tuning in to all this wonderful stuff is a little like sitting in front of my Dad’s old tube-driven Hallicrafters spinning the shortwave dial, circa 1970. You’d hear lots of interesting things, but never really know where they came from, or why, or even necessarily how to find them again if you wanted to. In the digital cable world the set-top box, or STB to geeks, receives all this digital information and uses it to organize the channels, construct the program guide, send and receive commands, etc. Most importantly, it decrypts the channels that have been scrambled by the cable company, which might include all the channels, or just some of them, wholly depending on where you live and what the local cable operator chooses to do. The FCC says that they have to send the digital streams for network and some other feeds “in the clear” so that devices like the HDHomerun and Digital Cable-ready televisions can receive them, but that might mean as few as a dozen channels on some networks.

If you buy a digital QAM tuner card or device and hook it up to your cable system, then scan for clear channels, and you’re lucky like me, you may have as many as 120 or so come back in a list. Each will be identified by a virtual channel number, potentially a guide number, and a callsign or identifying string of text. The virtual channel numbers consist of a primary channel and subchannel, for example, 65.4, which is currently the History Channel on my system. For stations with multiple feeds you might find them snuggled in next to each other within one primary channel. If the station is one of the old-line networks then it has probably been assigned the primary digital channel that corresponds to the traditional analog assignment in that area. For example, WABC in New York has been channel 7 for years. On the Comcast digital network the four WABC feeds are currently 7.1, 7.3, 7.5, and 7.7. When these channels are presented to the viewer by the STB it hides them behind the guide numbers, so WABC is at guide channel 7, where it has always been, while the HD feed at 7.7 is mapped up into the 200s with the rest of Comcast’s HD lineup.
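A toy model of that last bit of indirection, using the WABC numbers above (the callsigns attached to each subchannel and the 247 guide slot for the HD feed are illustrative placeholders; the real values depend entirely on your system):

```python
# Sketch of the STB's remapping job: virtual channel -> what the viewer sees.
# The 7.x assignments mirror the WABC example above; callsigns and the
# HD guide slot (247) are made-up placeholders for illustration.
channel_map = {
    "7.1": {"callsign": "WABC",    "guide": 7},    # primary SD feed
    "7.3": {"callsign": "WABCDT2", "guide": 7},    # subchannel, same guide slot
    "7.7": {"callsign": "WABCDT",  "guide": 247},  # HD feed, remapped into the 200s
}

def guide_number(virtual_channel):
    """Return the guide number the STB would show, or None if unmapped."""
    entry = channel_map.get(virtual_channel)
    return entry["guide"] if entry else None

print(guide_number("7.7"))
```

A bare QAM tuner sees only the left-hand column; the STB holds the whole table, which is why the shuffling described below is painless for set-top-box customers and a chore for everyone else.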

This mapping capability allows the cable company to move digital channel assignments around without affecting the programming guide, and unfortunately for the PCTV enthusiast that’s exactly what they do. In the short time I have had my HDHomerun Comcast has made changes to the channel assignments twice, throwing my carefully edited channel mappings in Media Center out the window. But if the channel assignments change, at least it should be easy to pick up the new location by scanning the callsigns, right? I mean, WABC is WABC. The problem is that once again the STB has access to channel metadata that the tuner and Media Center do not, and it uses this metadata to make the callsigns understandable. Without this information, it’s left up to you to figure out that WABCDT2 is the WABC HD feed, that WPIXDT is the ION HD feed, that WLVTDT4 is the HD feed for the local public television station, etc. And if you take the time to do that, editing in the correct descriptions for all the available channels as Media Center allows you to do, and then have to rescan because the assignments changed… well it sucks, let’s put it that way.

It would be nice if the cable companies would assign stable digital channels, but for whatever reason they don’t. And I don’t mean to imply that the shifting is done for spite. I applaud Comcast for sending nearly 120 clear channels in my area, and in each of the last two shufflings I have actually gained new channels. So they have their reasons, but it still makes setting up and maintaining digital cable on your PC a real challenge. Not that I doubt the overall antipathy, or at least apathy, that cable providers feel toward component digital tuners. What they want is to rent you a set top box, which gives them pretty close to end-to-end control over the content and the presentation. What the rest of us want is for them to act a little like the old networks, that broadcast in the clear and for the public good to at least some extent. The FCC also wants this, and has required the cable companies to support the Cablecard standard, which allows you to plug a decryption module into a digital tuner and receive all the encrypted content without a STB, and without any two-way services or programming guide. Still the cable companies support Cablecard only half-heartedly, and the fact that there are Cablecard-ready PC tuners on the way is unlikely to cause any smiles to break out at Comcast.

The cable providers have been in an enviable position for some time now, with municipally-granted monopoly markets that companies like Verizon are only now beginning to break into. Within the fuzzy embrace of those regulatory monopolies they have built a truly impressive hybrid fiber/coax network. As I mentioned above they have a tremendous amount of downstream bandwidth in that pipe, and over time they will have many opportunities to fill it with premium services. My own feeling is that they should send all the non-premium SD and HD feeds in the clear, with stable channel assignments. Let’s face it, most of these channels are commercial-choked crap these days anyway. By sending the straight video in the clear companies like Comcast will only increase the number of screens in the average consumer household on which their content is playing, and they can include advertisements for their premium services within those content streams.

Whether this will ever happen I don’t know, but one thing I am quite sure of: over time there will be more and more alternatives to the cable content feeds. Driven by competitive pressure they have steadily increased the bandwidth and quality of consumer level connections. I get 20 Mbps downstream here, and 2 Mbps up. Not long ago I watched every episode of the PBS series “Carrier” in HD, using Microsoft’s Internet TV service through Media Center. The cable providers can’t close down the IP pipe they’ve opened, and that pipe represents competition. Perhaps, over time, that competition will spur them to become more open and less defensive about what they consider to be the proprietary parts of their content stream. We’ll see. In the meantime setting up a digital cable-ready PC is going to remain something for enthusiasts with more time and patience than sense, and that includes me.