Hyperlapse: Creating Assets

Have you seen Google Street View Hyperlapse? It’s the latest project from the minds at Teehan+Lax. To be more precise, it’s from Teehan+Lax Labs, an offshoot within this top-tier creative/design agency where people explore new ways to use technology to communicate. Watch the video: https://player.vimeo.com/video/63653873. If you want to see how Hyperlapse works, the source code is available on GitHub. Teehan+Lax is a company that really likes to share. They share source code, tools, ideas, design strategies, and business philosophy. They follow the principle ‘create more value than you capture,’ a powerful idea championed by open source crusader and tech book publisher Tim O’Reilly, and one you’ll find embodied on many of the best sites on the web.

When I was playing with Hyperlapse yesterday, I was reminded of a 2010 post (that I happened to read only a few weeks ago) by Robert Niles, former editor of USC Annenberg’s Online Journalism Review. It’s an article about the need for journalists to think in terms of creating assets instead of stories. Here’s the crux of it:

To me, that’s the word ["assets"] that should replace “stories” in your vocabulary as a journalist. Too many of the journalists I’ve seen try to make the transition to running their own blogs and websites remain mired in the “story” mindset, endlessly creating newspaper-style “stories” or even brief-length snippets for their blogs. But they fail to create assets of enduring value that ultimately provide the income that they need to remain viable businesses online.

This is as true for any kind of online publishing as it is for journalism. Assets that have enduring value keep people coming back. But I'd add that creating a good story, or narrative, to support your assets is just as important. Teehan+Lax is a great example of how this is done. Read their 'behind-the-scenes' story about how they designed Medium to see what I mean.

I Won’t Miss Google Reader

I've used Google Reader for years, but I won't miss the service when it shuts down later this year. There are plenty of alternatives (and more on the way). A few of the more intriguing choices are Feedly, Feedbin, Fever, and NewsBlur.

Like many users, I never actually visit my Google Reader page. I rely on third-party services that suck in my Google Reader subscriptions. For the desktop, I use Feedly. For iOS, I use Reeder. Will it matter that I'm no longer using Google Reader on the back-end? Not really. I take solace in knowing that I'll be using fewer Google services. My main concern is that this may be part of a broader trend with Google: trying to funnel us all into Google+ and clamping down on how (and if) third parties can use Google services. I wouldn't be all that surprised if Google were to lock down Gmail someday soon so that it could only be accessed via Google's mobile apps or their web-based service. It is an ad-based company, after all.

In any case, of the many alternative news aggregator services, my bet is that Feedly will rise to the top of the pack in terms of popularity. They're poised to seamlessly transition existing Google Reader users (without any required action on the user's part). That's very handy, but it would only go so far if the service were so-so. On that front, I think the Feedly experience is one of the best out there. It looks great, it's easy to customize to fit different workflows and visual preferences, and they're aggressively honing the service to make it better.

As an example of this, I've just rediscovered Feedly's mobile apps. I've used Feedly on the desktop for quite a while and like how easy it is to view and manage feeds in various ways. While I tried the Feedly iOS apps early on in their history, I wasn't drawn in. Reeder was still a better experience on iOS. However, I tried the apps again last night. I'm glad I did. These apps have come a long way and I'm fairly convinced that they'll work for me quite well.

As an aside, I also enjoy news aggregation services like Zite and Prismatic, but I tend to put these sorts of services in a different category, as they focus on presenting stories based on reader interests. They are fantastic for discovery and casual browsing and are certainly worth a look. Lastly, you may note that I haven't mentioned Flipboard anywhere in this article. I must be one of the few people out there who just don't care for it. Nothing personal, Flipboard. I mention it here, though, because it's a highly regarded alternative reader that is also worth a test drive.

Codecademy

I’m a hybrid content author and web designer with no formal training in computer science. Over the years, I've honed my HTML and CSS skills through trial and error, repetition, books, online courses, and by tapping the expertise of colleagues. 

But JavaScript? I'm not so good with that. Sure, I can deploy a jQuery plugin and fiddle with parameters. And I know a bit of PHP (enough to get me in trouble, as they say). In most cases, I can decipher code, copy what I need, and modify it to meet my needs … as long as I don’t have to change too much. But my depth of understanding is shallow, which is something I’ve long wanted to remedy. Now I feel like I'm really making some progress with Codecademy, a free online ‘academy’ aimed at teaching basic programming skills.

Codecademy gets it right. For starters, you aren’t required to sign up for an account prior to beginning lessons. Instead, you can dive right in by typing your name in the site’s integrated editor. Entering your name is your first lesson. Only later, after completing a few exercises, are you prompted to sign up for a free account (which you only need to do if you want to keep tabs on your progress). At this point, you’ll have a good idea if this is for you. While this is a relatively minor detail, it’s a thoughtful touch that underscores how this is a different kind of training tool.

Lessons are divided into topical sections that grow in complexity as you progress. At each step of the way, accompanying text explains what’s going on and why. Within a few days, you’re writing simple programs that tie together all that you’ve learned up to that point.

While there are badges for completing sections, progress meters, and a point scoring system to help keep motivation up, the real driver – and the heart of Codecademy – is the integrated editor that accompanies each lesson. In fact, the integrated editor really is the lesson. You read a short bit of natural language text explaining a concept or new syntax, and then you’re asked to write some code to demonstrate comprehension. Everything you learn, in other words, you learn by doing. You can’t move on to the next lesson until you get the code right. This real-time feedback works.
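
To give you a flavor of how this works in practice, here's a made-up exercise in the spirit of the early JavaScript lessons (my own illustration, not an actual Codecademy exercise). An early lesson might explain variables and console.log, then ask you to type something like this into the editor before it lets you move on:

    // Declare a variable, then print a sentence about it to the console.
    var cupsOfCoffee = 3;
    console.log("I have had " + cupsOfCoffee + " cups of coffee today.");

    // A later lesson might ask you to combine what you've learned so far.
    var keepGoing = cupsOfCoffee > 0;
    if (keepGoing) {
      console.log("On to the next exercise.");
    } else {
      console.log("Time for a refill.");
    }

Until the editor sees working code along these lines, the next lesson stays locked.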

There’s a lot of course material available, and it’s growing quickly thanks to the addition of crowdsourced exercises submitted by other developers. User forums are active, so you can get help when you get stuck or need something clarified. Right now, only JavaScript lessons are available, with Python and Ruby courses to come later. I reckon these lessons will keep me occupied and learning for a long time to come. The best part is that the people behind Codecademy say they’re committed to keeping this learning resource free.

More than the other online courses, videos, and books I’ve tried over the years, Codecademy fosters a clearer understanding of what I’m doing and why I’m doing it, because it is, quite literally, engaging. It’s not that the other courses I’ve taken are bad; it’s that the Codecademy model is particularly good.

Reminder: Delete Your Google History by March 1

Don't forget that Google's new privacy policy goes into effect on March 1. The changes will affect you if you use Google search while logged into a Google account.

Here are the instructions from the Electronic Frontier Foundation on how to clear your browsing history. If you use multiple Google accounts, you'll want to delete the browsing history for all of them. If you don't take these steps, all of your browsing history will be combined with and shared across all the other Google services you use. If you're not sure why this might be a concern, see this EFF post and this Slate article ... or search on it!

You might also consider trying out an alternative default search engine. Many people (me included) are now using DuckDuckGo. This search engine does not collect user data and emphasizes privacy. It's quite capable, although I do notice differences in rankings and results compared to Google. That's not a bad thing; it's just different.

If you're using Chrome, it's easy to change your default search engine. Look under 'Preferences' > 'Manage Search Engines.' It's relatively easy with Firefox, too. You'll find the option to manage search engines by choosing the dropdown arrow in the browser's built-in search box. With Safari, it's a bit more complicated because the browser only offers Google, Bing, and Yahoo as default search engines. You can make DuckDuckGo your default, though, if you install the free Glims add-on.
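
If you end up adding DuckDuckGo to Chrome manually, the search URL it expects looks like this (the %s is the placeholder Chrome fills in with your search terms):

    https://duckduckgo.com/?q=%s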

Captioning Web Video

I'm no video expert. Yet I often find myself encoding, editing, and otherwise manipulating video for the web. Recently, I completed a video project that involved converting a DVD of a 40-minute presentation into a movie that could be viewed on a web page, as a whole or in chapters. The final product had to be captioned.

Converting the DVD into video for the web was easy. I used Handbrake to rip the DVD into MP4 format. Editing was equally easy. I used iMovie to add title screens and transitions, and to break the movie up into chapters. Adding the captioning, however, was tricky.

Why bother with captioning? Here are some good reasons: so that people who are deaf or hard of hearing can enjoy the video, so that the text can be indexed by search engines, and to aid those for whom English is a second language. And here’s another: the Twenty-First Century Communications and Video Accessibility Act of 2010.

If captioning is important, then why isn't it a mainstream practice? I'm not qualified to answer that question, but my guess is that it's partly because captioning is time-consuming and difficult. With external captioning (where captions live in a separate file and sync with the video), there are multiple formats and a lack of clear standards. With embedded captioning (where captions are typed in an editor and then exported with the movie), it's just plain tedious work.

For my recent video project, I considered three captioning options:

  1. Embed the captions. The first option is to place the captions directly into the movie itself using a tool such as Final Cut Pro, iMovie, or Adobe Premiere. I have Final Cut Pro, but I tend to use iMovie since most of the video work I do is short and simple. It’s the easiest tool for the job and the results look good. Here’s the thing about iMovie: while there are dozens of title/text effect options, none are designed for captioning (which is surprising given Apple’s robust accessibility options for the OS). Despite this shortcoming, I’ve discovered that I can 'fake' captions by adding lower thirds to each segment of video. Turning iMovie’s default lower-third overlay into something that resembles a caption is mostly a matter of changing font sizes. You can see an example of this in a recent video podcast I produced. This works, but it isn’t a practical solution for a long movie. In truth, it’s not an ideal solution for a movie of any length, because the captions are permanently embedded in the video: screen readers and search engines can’t see the text, and people can’t choose to turn the captions on or off. So I didn’t choose this option for my project.
  2. Dump the text on the page. A second option is to dump the captioning for a video underneath the video as HTML text. This may technically meet accessibility requirements, but it’s a lousy solution. The text is disconnected from the video: you can read the text or watch the video, but it’s not feasible to do both at the same time. Nix.
  3. Create an external caption file. This last choice is the best solution: create an external caption file that appears in sync with the video. The captions stay matched to the video, they're readable by screen readers, and they're good for search engines. They can also be turned on or off at the user's discretion.

So how do you create and deploy an external caption file? If you simply wish to place a video on YouTube, it's easy. Once you upload your video to the free service, YouTube offers free auto-generated machine transcription. While you'll find that speech-to-text accuracy is hit-and-miss (more miss, in my experience), the important part is that Google generates time codes that precisely match the audio in the video. So once you download the caption file from YouTube, it's simply a matter of manually correcting the text so that what appears in the caption matches what is actually being said in the video.
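
To give you an idea of what you'll be working with, here's roughly what a downloaded .sbv (Subviewer) file looks like; the time codes and text below are invented for illustration. Each cue is a start/end time pair followed by the machine's best guess at the words, and those guesses are what you'll be correcting:

    0:00:00.599,0:00:04.160
    hello and welcome to today's presentation

    0:00:04.160,0:00:07.830
    were going to talk about captioning web video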

If you don't want to rely solely on YouTube to present your video (or can't because of workplace policy), it's still a very useful tool. How? If you are embedding captions in a video using an editor such as iMovie, YouTube will do half of the work for you by delivering a fair approximation of a transcript. And if you want to use an external caption file elsewhere with a different video player, you can still use this Google-generated file. You just need to convert it into the right format.

Here’s the process I used to generate a caption file for my video project:

  • I began by uploading the video to my YouTube channel.
  • I then requested that YouTube auto-generate a Subviewer caption file for this movie. (Be patient: it may take hours to get this file back from Google because you'll be in a queue with plenty of other people.)
  • I then downloaded this file and opened it in a text editor.
  • The next step is tedious, but necessary: cleaning up the machine-generated text. I opened my movie in a QuickTime player window and, as it played, edited my caption text to correct errors and typos. It's not too bad if you toggle between the text editor and QuickTime using Cmd-Tab.
  • Once I had my cleaned-up Subviewer text file, I copied and pasted it into a free online converter to generate a file in the appropriate format. In my case, I generated a DFXP file for use with a Flash player. (If you'd rather script this step yourself, see the sketch after this list.) Here are three conversion tool options:
    • 3PlayMedia Caption Format Converter. This converter lets you convert from SRT or SBV to DFXP, SMI or SAMI (Windows Media), CPT.XML (Flash Captionate XML), QT (QuickTime), and STL (Spruce Subtitle File).
    • Subtitle Horse. A free online caption editor. Exports DFXP, SRT, and Adobe Encore files.
    • Subviewer to DFXP. This free online tool from Ohio State University converts a YouTube .SBV file into DFXP, SubRip, or QT (QuickTime caption) files. I used this tool for my project.
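
Those converters did the job for me, but the conversion itself isn't magic: it's mostly reshuffling time codes and text. For the curious, here's a rough sketch of the SBV-to-SRT step as a small Node.js script. This is my own illustration (not one of the tools above), and it assumes a well-formed .sbv file with cues separated by blank lines; the idea is the same for DFXP, which just wraps the same times and text in XML.

    // sbv-to-srt.js -- a rough sketch, not a production tool.
    // Usage: node sbv-to-srt.js captions.sbv captions.srt
    const fs = require('fs');

    // '0:00:04.160' -> '00:00:04,160' (SRT wants zero-padded hours and a comma before the milliseconds).
    function sbvTimeToSrt(time) {
      const [h, m, rest] = time.split(':');
      const [s, ms] = rest.split('.');
      const pad = (n) => String(n).padStart(2, '0');
      return pad(h) + ':' + pad(m) + ':' + pad(s) + ',' + ms.padEnd(3, '0').slice(0, 3);
    }

    // Each SBV cue is a "start,end" time line followed by one or more lines of caption text.
    const cues = fs.readFileSync(process.argv[2], 'utf8').trim().split(/\r?\n\s*\r?\n/);

    const srt = cues.map((cue, index) => {
      const lines = cue.trim().split(/\r?\n/);
      const [start, end] = lines[0].split(',');
      // SRT numbers each cue, separates the times with '-->', and keeps the text lines as-is.
      return [index + 1, sbvTimeToSrt(start) + ' --> ' + sbvTimeToSrt(end), ...lines.slice(1)].join('\n');
    }).join('\n\n') + '\n';

    fs.writeFileSync(process.argv[3], srt);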

What’s the appropriate format?

  • YouTube: Subviewer (.SBV) 
  • iTunes, iOS: Scenarist Closed Caption (.SCC) 
  • Flash: DFXP, or Timed Text Markup Language, the W3C recommendation. These are plain ol’ XML files (see the sample after this list). You could also use the SubRip (.SRT) file format for Flash.
  • HTML5: See this post.
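
Since DFXP comes up a few times here, this is roughly what a minimal DFXP/TTML file looks like. Treat it as an illustration rather than a template: the namespace and attributes vary a bit between TTML versions and players, and the cues below are invented.

    <?xml version="1.0" encoding="UTF-8"?>
    <tt xmlns="http://www.w3.org/ns/ttml" xml:lang="en">
      <body>
        <div>
          <p begin="00:00:00.599" end="00:00:04.160">Hello and welcome to today's presentation.</p>
          <p begin="00:00:04.160" end="00:00:07.830">We're going to talk about captioning web video.</p>
        </div>
      </body>
    </tt>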

If you're not using a hosted service like YouTube or Vimeo (which, incidentally, does not support external captions), you'll of course have to decide how to present the video on your site. There are many options. You can roll your own player with external captions using Adobe Flash. You can use off-the-shelf players that support captioning, such as Flowplayer and JW Player; these two commercial products are easy to set up and offer HTML5 video players with Flash fallback. Another option: you might try HTML5 with experimental captioning support (note that Safari 5 now supports captioning with the HTML5 video tag). As I said, there are options. The video player discussion is beyond the scope of this post (and I don’t want to go down the HTML5 vs. Flash rabbit hole!).

My main goal here is to point out that Google's machine transcription is good for more than just hosting a captioned video on YouTube. It's trivial to convert this caption file into a variety of formats. The key point is that you don't have to manually add time codes for your video; this critical step is done for you.

Yet even with this handy Google tool, generating caption files (and getting them to work with video players) remains an unwieldy task. We clearly need better tools and standards to help bring video captioning into the mainstream.

P.S. While researching this post, I came across two low-cost tools that look like solid options to create iOS and iTunes movies with captions. Both are from a company called bitfield. The first is called Submerge. This tool makes it very easy to embed (hard-code) subtitles in a movie and will import all the popular external captioning formats. The second is called iSubtitle. This tool will ‘soft-code’ subtitle tracks so you can add multiple files (languages) and easily add metadata to your movie.