Viewing Big History with ChronoZoom

There are some terrific interactive scale visualizations on the web: the Cell Size and Scale visualization from the University of Utah, the Scale of the Universe visualization from Cary and Michael Huang, and the Universcale from Nikon.

A new entrant in the field called ChronoZoom ups the ante. It's a really impressive HTML5 visualization tool for exploring Big History, and you have to see it for yourself. The people behind the project have lofty ambitions for the future, and they're looking for users:


ChronoZoom Beta is ready for mass consumption and feedback, structured to scale up to petabytes of content, and architected for the future of personal computing.


Codecademy

I’m a hybrid content author and web designer with no formal training in computer science. Over the years, I've honed my HTML and CSS skills through trial and error, repetition, books, online courses, and by tapping the expertise of colleagues. 

But JavaScript? I'm not so good with that. Sure, I can deploy a jQuery plugin and fiddle with parameters. And I know a bit of PHP (enough to get me in trouble, as they say). In most cases, I can decipher code, copy what I need, and modify it to meet my needs … as long as I don’t have to change too much. But my depth of understanding is shallow, which is something I’ve long wanted to remedy. Now I feel like I'm really making some progress with Codecademy, a free online ‘academy’ aimed at teaching basic programming skills.

Codecademy gets it right. For starters, you aren’t required to sign up for an account prior to beginning lessons. Instead, you can dive right in by typing your name in the site’s integrated editor. Entering your name is your first lesson. Only later, after completing a few exercises, are you prompted to sign up for a free account (which you only need to do if you want to keep tabs on your progress). At this point, you’ll have a good idea if this is for you. While this is a relatively minor detail, it’s a thoughtful touch that underscores how this is a different kind of training tool.

Lessons are divided into topical sections that grow in complexity as you progress. At each step of the way, accompanying text explains what’s going on and why. Within a few days, you’re writing simple programs that tie together all that you’ve learned up to that point.

While there are badges for completing sections, progress meters, and a point-scoring system to help keep motivation up, the real driver – and the heart of Codecademy – is the integrated editor that accompanies each lesson. Or rather, the integrated editor really is the lesson. You read a short bit of natural-language text explaining a concept or new syntax, and then you’re asked to write some code to demonstrate comprehension. Everything you learn, in other words, you learn by doing. You can’t move on to the next lesson until you get the code right. This real-time feedback works.
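To give a flavor of the format, here's the kind of early JavaScript exercise the site walks you through (this is a representative example of my own, not an actual Codecademy lesson):

```javascript
// A representative beginner exercise (my own example, not Codecademy's):
// define a function that greets a user by name, then call it.
function greet(name) {
  return "Hello, " + name + "!";
}

var message = greet("Ada");
console.log(message); // prints "Hello, Ada!"
```

You type code like this directly into the in-browser editor, and the lesson won't advance until the result matches what the exercise expects.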

There’s a lot of course material available, and it’s growing quickly thanks to crowdsourced exercises submitted by other developers. User forums are active, so you can get help when you get stuck or need something clarified. Right now, only JavaScript lessons are available, with Python and Ruby courses to come later. I reckon these lessons will keep me occupied and learning for a long time to come. The best part is that the people behind Codecademy say they’re committed to keeping this learning resource free.

More than other online courses, videos and books that I’ve tried over the years, Codecademy fosters a clearer understanding of what it is that I’m doing and why I'm doing it because it is, quite literally, engaging. It’s not that other courses I’ve taken are not good, it’s that the Codecademy model is particularly good.

Reminder: Delete Your Google History by March 1

Don't forget that Google's new privacy policy goes into effect on March 1. Policy changes will affect you if you use Google search while logged into a Google user account.

Here are the instructions from the Electronic Frontier Foundation on how to clear your browsing history. If you use multiple Google accounts, you'll want to delete the browsing history for all of them. If you don't take these steps, all of your browsing history will be combined with and shared across all the other Google services you use. If you're not sure why this might be a concern, see this EFF post and this Slate article ... or search on it!

You might also consider trying out an alternative default search engine. Many people (me included) are now using DuckDuckGo. This search engine does not collect user data and emphasizes privacy. It's quite capable, although I do notice differences in rankings and results compared to Google. That's not a bad thing; it's just different.

If you're using Chrome, it's easy to change your default engine.  Look under 'Preferences' > 'Manage Search Engines.' It's relatively easy with Firefox, too. You'll find the option to manage search engines by choosing the dropdown arrow located in the browser's built-in search box. With Safari, it's a bit more complicated because the browser only offers Google, Bing, and Yahoo as default search engines. You can make DuckDuckGo your default, though, if you install the free Glims add-on. 

iPhone Doesn't Have a Mute Switch

It has a 'Ring/Silent' switch. If you're not familiar with last week's mute switch controversy, start with this post from John Gruber. Then read Andy Ihnatko's post. Finally, read Dan Benjamin's take.

I agree that you can't design around every edge case, and it's logical to assume that most people want alarms to make noise so important events are not missed (e.g., waking up in the morning). I am such a user. I typically leave silent mode engaged, but I rely on my phone to wake me up for work. That said, I'm sure that many users will naturally assume that the 'mute' button on the iPhone mutes. Everything. That's a logical assumption.

How can we satisfy the need to make our iPhones emit noise in some situations and to remain silent in others? Some have suggested introducing software controls so users can choose on a per-event basis. Others have envisioned an intelligent rules-based system based on GPS location (e.g., remain silent when at the coordinates of the Lincoln Center Plaza). I think both solutions are overly complicated.

Here's a simpler idea that would catch most user edge cases: leave the 'silent mode' functionality as is. When the phone is set to mute, the phone is silent except for events (alarms) that the user has explicitly set. Add, by default, one minute of vibration prior to sounding manually-set alarms when silent mode is engaged.
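Boiled down, the proposal is a simple rule. Here's a sketch of the logic (hypothetical, of course — not Apple's actual implementation):

```javascript
// Hypothetical sketch of the proposed alarm rule. Returns the actions the
// phone should take when an event fires.
function alarmActions(silentModeOn, userSetAlarm) {
  if (!userSetAlarm) {
    // Events the user didn't explicitly schedule respect the switch.
    return silentModeOn ? [] : ["sound"];
  }
  if (silentModeOn) {
    // User-set alarms still fire, but vibrate for a minute first so the
    // owner has a chance to cancel before any noise is made.
    return ["vibrate 60s", "sound"];
  }
  return ["sound"];
}
```

Two inputs, no GPS, no per-event settings screens.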

In most cases, users in concert halls and staff meetings will be physically alerted by their vibrating phone. They'll have time to pull the phone out and cancel the event before an audible alarm sounds. Sure, some users won't hear or feel the vibrating phone because it's buried in a jacket pocket hung behind a seat or stuffed in a purse. But most people will. They'll have time to react.

A Better iPad Stylus

Remember the Handspring Visor Edge? I had the metallic silver model (and still do). It sports a blazing-fast 33 MHz CPU and 8 MB of RAM. I've kept it over the years because it still works ... and because I think it's a great design. I especially loved the weight, shape, and feel of the little stylus. That stylus happens to be metallic.

You see where I'm going here. Since the stylus is metal, all that I needed was some sort of conductive tip.

Here's what I came up with. It works great as long as any part of my hand is touching the metal pen (which is hard not to do). It looks nice (I wouldn't say it's beautiful, but I think it looks better than most homemade styli). It's compact and easy to tote around. And here's the best part: the tip offers far more accuracy and draws a thinner line than commercial or homemade conductive styli that I've tried or seen demonstrated.

Here's how I made it:

Here are the primary ingredients: heavy-duty aluminum foil, tape (I used electrical tape, but you could use duct tape), and a rubber foot that I found in my shop.

About the rubber foot. This may be the hardest bit to find, but it's something you should be able to pick up at a hardware store (or, at least, you can find something similar). I cut off part of the foot as seen in the photo above, then drilled a hole into the rubber that would tightly fit the metal stylus. Other materials will also work. I made an earlier model with a cheap wood plug using the same method. It worked well, but isn't as flexible (meaning that you may have trouble with the wood cracking when you drill into it). Rubber works best. 

Next, I wrapped the foil-wrapped rubber foot and stylus with a short piece of strong tape. Once that's done, you're done. The blue shrink tubing you see here isn't really necessary; it's just for looks. I took a short segment of blue shrink tube and stretched it out with pliers so it would fit over the stylus, cover the tape, and partially cover the foil-wrapped foot. Then I applied heat to seal it all up. And here is the completed stylus, ready for action.

So that's all there is to it. It's a bit more involved than most of the DIY capacitive stylus tutorials you'll find on the web, but I think it's worth the effort. It works great. It looks nice. It's a great way to recycle a piece of old tech. I've been using it for a while and the aluminum is showing no signs of splitting. If it does split, it's a relatively simple matter to rip off the tip and make a new one. If you don't have an old Handspring Visor Edge in your closet and want to try this, would you believe that you can still buy a metal stylus?

New Life for a Broken Lamp

I started out by (carefully) destroying the lamp with a screwdriver and small pry bar. I threw out the plastic junk and kept all the internal parts.

This is the wall-facing side of the lamp, showing how I reassembled the 'guts' of the old plastic lamp in the new wood structure. Only the on/off switch required soldering; I had to completely unsolder the switch to fit it through the hole in the wood. I used heat-shrink tubing to cover up the solder work. For the other wires, I used plastic connector caps to join them back up. I attached the components to the wood with screws and staples. It's hard to tell here, but I mounted the metal reflective shield from the old lamp to the wood surface behind the bulb. One last note: I had to cut all the wires when extracting them from the old lamp's plastic housing. If you try something like this, be sure to mark the wires carefully so you can remember how to reattach them.

And here's a wider view so you can see the effect of the light reflecting off the wall behind my main monitor. So that's it. The entire project took about five hours on a Sunday. I'm waiting for the glue to completely dry before applying a coat of polyurethane to the front.

The most challenging part was figuring out the design: I wanted to create a very simple and functional lamp using only scrap wood left behind from other projects. Aside from my time, the project didn't cost a dime.

The tools I used to assemble the lamp included a miter saw (to cut all lengths and angles), a biscuit joiner (to join the two pine pieces and the feet to the base of the lamp), a drill (to create a hole for the on/off switch), a table saw (to cut a strip of oak for the top edge of the lamp), wood glue, and a sheet sander.  For the electrical work, I used a soldering gun and some heat-shrink tubing, wire connectors, a wire cutter/stripper, and a few screws and staples.

I think it looks better than the original. It certainly fits in better with my wooden desk than did the plastic lamp. I may have to go and break the other lamp now.

Spotified

When Spotify launched in the U.S., I signed up for a Premium account at $10 per month. Now that I’m nearing the two-month membership mark, I’m familiar enough with the service to share some thoughts. I should start by noting that I’m not the type of person who regularly signs up for paid services. I don’t even subscribe to a cable TV package.

So why do I think Spotify Premium is worth the price of admission?

First and foremost, access to millions upon millions of tracks. While my musical tastes tend toward the eclectic and obscure, I’ve been able to find most of what I was looking for.  Second, the Premium service allows me to stream all the content I can reasonably consume, without ads, on my Mac or on my iPhone. Third, Premium serves up higher-quality audio. Fourth, I can cache songs for offline listening,  useful for my daily train commute through farm country with spotty 3G service. And, finally, I can listen to most of my iTunes music on-the-go (provided I have a connection), as Spotify reads what I own and matches what it can with copies in the cloud.

Spotify is a different sort of service from Pandora or Last.fm. It’s better suited for people who know what they want, or at least are willing to take the time to explore. While there is an 'Artist Radio' function to stream similar artists, it’s not a well-promoted feature. To be honest, I didn't even notice this feature for the first month and have never had the urge to use it. Instead, I tend to seek out a specific artist, then choose from a list of Spotify-suggested related artists. This often leads to uncharted territory and new artist discoveries. I like it because I feel that I am in direct control of the discovery process.

Unfortunately, everything I just described is available only on the desktop. The iPhone app is geared toward playing tracks already lined up in a playlist; beyond that, you can seek out a specific artist, album, or track. In other words, I can search the Spotify database from the iPhone, but I have to know what I’m looking for. There is no ‘Artist Radio' streaming option and no ‘Related Artists’ category in the mobile app. That’s a shame.

As I mentioned earlier, Spotify allows syncing of tracks from iTunes. The promise is that this will mostly alleviate the need to fire up the other music platform. I’ve found this to be largely true. While the service only syncs DRM-free music from an iTunes library, that’s not a big deal. I can always search out the missing files in Spotify’s database, provided they’re available.

I can also listen to most of my iTunes library on my iPhone or iPad without worrying about managing playlists due to limited storage space (provided I don’t overdo it with offline caching). Spotify automatically matches the tunes in my iTunes library with online versions in Spotify’s massive database. It’s seamless.

Unfortunately, a fair number of my more obscure tracks and albums aren’t available in Spotify’s database. If I want these tracks to be available, I have to sync them locally for offline listening. I’ve also noticed that some of my iTunes tracks appear on my phone with little link symbols. I had to look up what this meant: it indicates that (for some reason) the version of the song that I own isn’t available to play in my country, so Spotify has substituted a playable version.

I admit I am mystified as to why some material isn’t in the Spotify catalog, and why some tracks or albums are not available to U.S. customers. I'm sure it’s based on agreements that Spotify has worked out with labels, but it can be frustrating because it can seem so ... random. For instance, when I first started with the service I downloaded ‘De Stijl’ by the White Stripes. A day later, this album vanished from my playlist; it is no longer available to stream in the U.S. However, all other White Stripes albums are available. In terms of explanation, all I get from Spotify is a notice that the tracks ‘are not currently available in the United States.’ I can only imagine the convoluted paperwork that Spotify legal is juggling to keep this service going, so this isn’t really a complaint. I'm impressed that they got it off the ground at all. I’m just a bit miffed that I can’t stream some albums and tracks that I’d like to hear. Oddly, I've even come across many cases where all but one or two songs on a given album are available to stream. What's so special about those songs? Arg!

Another example: The first disc of ‘Brewing Up With Billy Bragg,’ circa 1984, is available if you search for it via the Spotify desktop app. However, the second disc in this two-disc set is unavailable in the U.S. How odd. Worse, if I search for this album via the iPhone app, the album doesn't appear at all. And a minor annoyance: that Billy Bragg album shows up as published in 2006. I’m guessing that’s a re-release date. I’ve found this time and again with albums I’ve sought out. The years don't match up with actual release dates. I’ve also found that the same album often appears many times over in search results, but I can only listen to one of those albums in my country. I surmise that there are different licensed versions for different regions of the world.  It would be nice to have the option within Spotify’s preferences to hide the albums and tracks that I can’t stream. It’s the same thing to me as if those tracks and albums didn't exist at all, so I don't want to see them.

Functionally speaking, the desktop and mobile Spotify apps work quite well, with a few caveats regarding playlists. The main problem I’ve encountered is that the service doesn’t import smart playlists from iTunes, which is how nearly all of my 8,000 or so iTunes files are organized. The remedy, of course, is to make new playlists. It's a simple task to copy and paste the contents of a smart playlist into a 'dumb' playlist within iTunes, and then import that. But that's annoying. And speaking of smart playlists, Spotify absolutely needs some sort of intelligent playlist functionality to sort through and categorize Spotify music. Dumb playlists just don’t cut it.
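To illustrate what I mean by 'intelligent' playlists: a smart playlist is just a saved rule that is re-evaluated over the whole library. A sketch (purely illustrative — Spotify exposes no such scripting interface, and the tracks here are made up):

```javascript
// Illustrative only — Spotify offers no such scripting interface.
// A smart playlist is a saved rule applied to the whole library.
var library = [
  { title: "A New England", artist: "Billy Bragg", year: 1983, starred: true },
  { title: "Seven Nation Army", artist: "The White Stripes", year: 2003, starred: false },
  { title: "Hotel Yorba", artist: "The White Stripes", year: 2001, starred: true }
];

function smartPlaylist(tracks, rule) {
  return tracks.filter(rule);
}

// "Starred tracks released before 2002" — the playlist stays current as
// the library changes, with no manual copying and pasting.
var favorites = smartPlaylist(library, function (track) {
  return track.starred && track.year < 2002;
});
```

That's the convenience a dumb playlist can't provide: the rule keeps working as you add and star new music.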

Here’s a round-up of what I’d like to see in future Spotify app releases:

  • More social sharing options. Right now, it’s only Facebook. I have no urge to share anything with Facebook. Actually, I'm not sure I'm inclined to share my personal music library via any service, but I'm sure that many users would appreciate greater choice.
  • Tooltips. The meaning of some of Spotify's color-coding and iconography isn't always obvious. Simple tooltips would help.
  • It would be nice to have ‘Related Artists’ and ‘Artist Radio’ on the mobile app.
  • I would appreciate the option to hide music that is not available for my country. I only want to see it if I can stream it.
  • Smart playlists: the ability to import them from iTunes, and to create them within Spotify. Perhaps patent or legal issues prevent some of this functionality, but surely Spotify could devise some sort of ‘intelligent’ playlist capability. It’s an all-you-can-eat music service, so we need better organization options.
  • The user interface isn’t always intuitive. For instance, on the desktop app, you can’t get more information about an artist, or seek more albums/tracks from an artist, by selecting the artist name from within one of your playlists. You have to enter the name in the search box. When you do search for and select an artist, Spotify returns an interface with four tabs: an Overview, Biography, Related Artists, and Artist Radio. Maybe it's just me, but I didn’t even notice the tabs at first. Oddly, the main window (the artist ‘Overview’ tab) displays the beginning sentence or two of the artist biography and a short list of a few related artists. Since there's not much space here, only a fraction of the biography and related artists are visible, yet you can’t select one of these items to access the full bio or related artist entries. You just get to see a tiny fraction of the content. There isn't even an option to scroll through the rest of the content. The only way to access this content is to select one of the tabs. Check out the screenshot below to see what I mean. Why not link the short blurbs on the 'Overview' page to the sub-tabs for Biography and Related Artists?

The odd Spotify 'Overview' pane

My overall experience? I love it. Prior to Spotify, I had hundreds of dollars of albums in my ‘Wish List’ basket in iTunes. Now I’m listening to all of those albums. Yes, I’m paying $120 a year for the privilege, but I’m consuming far more music than I could ever afford to buy outright. My interest in discovering new artists is greater than it has been since I was in my 20s. Now when I learn of an interesting new artist or album, I don’t have to read second-hand reviews or settle for short previews. And I don’t have to add items to a ‘Wish List.’ I just cue it up and experience it for myself. If I don’t like it, I can just as easily remove it. It’s a liberating experience.

On the flip side, unlimited and instant access to millions of tracks means that it's easy to listen for one minute and then dump an album. Too easy. If I paid for an album, I would never do this. I'd listen to it over and over. I try to keep this habit with Spotify. Sure, I may still not like an album after a few listens. More often, though, I only begin to appreciate and enjoy an album after several weeks or months. Spotify's all-you-can-eat buffet can destroy this practiced patience if you let it.

At any rate, I'm enjoying the service. Still, I am trying to keep my tracks well organized should I someday wish to cancel my subscription. What if fees get too steep? What if label agreements break down and the catalog drastically shrinks in size? My strategy is to carefully cultivate what I really like through playlists and by ‘starring’ favorites. Should I need to leave and return to iTunes,  I’ll have a good idea of which artist albums and tracks I want to buy and which I can do without.

Of course, I hope that day won’t arrive anytime soon. I'd love to see Spotify-like models appear for other content. I would consider signing up for similar subscription services for audiobooks, digital magazines, and ebooks. Have you heard the rumor that Amazon.com may soon roll out ebook rentals?


Captioning Web Video

I'm no video expert. Yet I often find myself encoding, editing, and otherwise manipulating video for the web. Recently, I completed a video project that involved converting a DVD of a 40-minute presentation into a movie that could be viewed on a web page, as a whole or in chapters. The final product had to be captioned.

Converting the DVD into video for the web was easy. I used Handbrake to rip the DVD into MP4 format. Editing was equally easy. I used iMovie to add title screens and transitions, and to break the movie up into chapters. Adding the captioning, however, was tricky.

Why bother with captioning? Here are some good reasons: so that those who are deaf or hard of hearing can enjoy the video, so the text is indexed by search engines, and to aid those for whom English is a second language. And here’s another: the Twenty-First Century Communications and Video Accessibility Act of 2010.

If captioning is important, then why isn't it a mainstream practice? I'm not qualified to answer that question, but my guess is that it's in part due to the fact that captioning is time-consuming and difficult. For instance: with external captioning (where captions are contained in an external file and sync with the video), there are multiple formats and a lack of clear standards. And for embedded captioning (where captions are simply typed in an editor and then exported with the movie), it's just plain tedious work.

For my recent video project, I considered three captioning options:

  1. Embed the captions. The first option is to place the captions directly into the movie itself using a tool such as Final Cut Pro, iMovie, or Adobe Premiere. I have Final Cut Pro, but I tend to use iMovie since most of the video work I do is short and simple. It’s the easiest tool for the job and the results look good. Here’s the thing about iMovie: while there are dozens of title/text effect options, none are designed for captioning (which is surprising given Apple's robust accessibility options for the OS). Despite this shortcoming, I’ve discovered that I can 'fake' captions by adding lower thirds to each segment of video. Turning iMovie's default lower-third overlay into something that resembles a caption is just a matter of changing font sizes. You can see an example of this in a recent video podcast I produced. This works, but it isn't a practical solution for a long movie. In truth, it's not an ideal solution for a movie of any length because the captions are permanently embedded in the video: screen readers and search engines can't see the text, and people can’t choose to turn the captions on or off. So I didn't choose this option for my project.
  2. Dump the text on the page. A second option is to dump the captioning for a video underneath it on the page as HTML text. This may technically meet accessibility requirements, but it’s a lousy solution. The text is unassociated with the video: one can read the text or watch the video, but it's not feasible to do both at the same time. Nix.
  3. Create an external caption file. This last choice is the best solution: create an external caption file that will appear in sync with the video. Captioning is then matched up with the video, it's readable by screen readers, and it's good for search engines. It can also be turned on or off at the user's discretion.

So how do you create and deploy an external caption file? If you simply wish to place a video on YouTube, it's easy. Once you upload your video to the free service, YouTube offers free auto-generated machine transcription. While you'll find that speech-to-text accuracy is hit-and-miss (more miss, in my experience), the important part is that Google generates time codes that precisely match the audio in the video. So once you download the caption file from YouTube, it's simply a matter of manually correcting the text so that what appears in each caption matches what is actually being said in the video.

If you don't want to rely solely on YouTube to present your video (or can't because of workplace policy), it's still a very useful tool. How? If you are embedding captions in a video using an editor such as iMovie, YouTube will do half of the work for you by delivering a fair approximation of a transcript. And if you want to use an external caption file elsewhere with a different video player, you can still use this Google-generated file. You just need to convert it into the right format.

Here’s the process I used to generate a caption file for my video project:

  • I began by uploading the video to my YouTube channel.
  • I then requested that YouTube auto-generate a Subviewer caption file for this movie (Be patient. It may take hours to get this file back from Google because you'll be in a queue with tons of other people).
  • I then downloaded this file and opened it in a text editor.
  • The next step is tedious but necessary: cleaning up the machine-generated text. I opened my movie in a QuickTime player window and, as it played, edited my caption text to correct errors and typos. It's not too bad if you toggle between the text editor and QuickTime using Cmd-Tab.
  • Once I had my cleaned-up Subviewer text file, I copied and pasted it into a free online converter to generate the appropriate format. In my case, I generated a DFXP file for use with a Flash player. Here are three conversion tool options:
    • 3PlayMedia Caption Format Converter. This converter lets you convert from SRT or from SBV to  DFXP, SMI or SAMI (Windows Media), CPT.XML (Flash Captionate XML), QT (Quicktime), and STL (Spruce Subtitle File).
    • Subtitle Horse. A free online caption editor. Exports DFXP, SRT, and Adobe Encore files.
    • Subviewer to DFXP. This free online tool from Ohio State University converts a YouTube .SBV file into DFXP, Subrip, or QT (QuickTime caption) files. I used this tool for my project.
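The transformation these converters perform is mechanical: mostly reshaping time codes and numbering the cues. Here's a rough sketch of the SubViewer-to-SubRip case, assuming well-formed input (the real formats have more wrinkles than this handles):

```javascript
// Sketch: convert YouTube SubViewer (.sbv) cues to SubRip (.srt) form.
// Assumes well-formed input; real .sbv files are blank-line-delimited blocks
// whose first line holds the start and end times separated by a comma.
function sbvToSrt(sbv) {
  var blocks = sbv.trim().split(/\n\s*\n/);
  return blocks.map(function (block, i) {
    var lines = block.split("\n");
    var times = lines[0].split(",").map(function (t) {
      // Reshape "0:00:01.500" into SRT's "00:00:01,500".
      var parts = t.split(":");
      while (parts.length < 3) parts.unshift("0");
      var padded = parts.map(function (p) {
        return p.indexOf(".") === -1 && p.length < 2 ? "0" + p : p;
      });
      return padded.join(":").replace(".", ",");
    });
    // SRT cues are numbered, with times joined by an arrow.
    return (i + 1) + "\n" + times[0] + " --> " + times[1] + "\n" +
           lines.slice(1).join("\n");
  }).join("\n\n");
}
```

The point is that no timing work is involved — Google already did that part; the converter just rewrites the notation.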

What’s the appropriate format?

  • YouTube: Subviewer (.SBV) 
  • iTunes, iOS: Scenarist Closed Caption (.SCC) 
  • Flash: DFXP, Timed Text Markup Language, the W3C recommendation. These are plain ol’ XML files.  You could also use the SubRip (.SRT) file format for Flash.
  • HTML5:  See this post.

If you're not using a hosted service like YouTube or Vimeo (which, incidentally, does not support external captions), you'll of course have to decide how to present the video on your site. There are many options. You can roll your own player with external captions using Adobe Flash. You can use off-the-shelf players that support captioning, such as Flowplayer and JW Player — both commercial products are very easy to set up and provide HTML5 video players with Flash fallback. Another option: you might try HTML5 with experimental captioning support (note that Safari 5 now supports captioning with the HTML5 video tag). As I said, there are options. The video player discussion is beyond the scope of this post (and I don’t want to go down the HTML5 vs. Flash rabbit hole!).

My main goal here is to point out that Google's machine transcription is good for more than just hosting a captioned video on YouTube. It's trivial to convert this caption file into a variety of formats. The key point is that you don't have to manually add time codes for your video. That critical step is done for you.

Yet even with this handy Google tool, generating caption files (and getting them to work with video players) remains an unwieldy task. We clearly need better tools and standards to help bring video captioning into the mainstream.

P.S. While researching this post, I came across two low-cost tools that look like solid options to create iOS and iTunes movies with captions. Both are from a company called bitfield. The first is called Submerge. This tool makes it very easy to embed (hard-code) subtitles in a movie and will import all the popular external captioning formats. The second is called iSubtitle. This tool will ‘soft-code’ subtitle tracks so you can add multiple files (languages) and easily add metadata to your movie.

CSS Lint

Meet CSS Lint, an open-source online tool that checks for typos, bad practices, incorrect properties for rules, inefficiencies, and other potential problems in your code.

I pasted in the primary style sheet I use for my work website. CSS Lint returned one error and 173 warnings. The error was a missing colon in one selector. As for the warnings, they fell into three main problem areas: using IDs in selectors, broken box models, and qualified headings.

It's an instructional and helpful tool, especially for lengthy style sheets that have been used and abused for years. While you may not need or want to take action on every warning, CSS Lint will help you write better code moving forward. Users are welcome to contribute new rules to the tool.

British Library App for iPad

A new iPad app launched this week by the British Library provides access to scanned copies of original versions of 19th-century books. The app is free for now with 1,000 titles, but will soon become a paid app offering more than 60,000 titles.

The stand-out feature of the new app is that it offers full scans of original versions. While you can't search or highlight text, take notes, or get word definitions, you do get to enjoy the real deal: aged paper, author notes in margins, embossed covers, engraved illustrations, and colored plates. I can almost smell it (I admit it, I love the smell of old books). Paging through 'Woods and Lakes of Maine,' I was struck by how much context and texture is missing from straight-text digitized ebooks.

So this is an immersive way to explore old books on a modern device, but I have to admit that I've been spoiled by the interactivity of digital books à la Kindle and iBooks. The British Library app is almost like reading a real book, which is a great thing. But the lack of ability to draw on pages,  search text, highlight passages, or define words seems like a missed opportunity to harness the platform.

Since many of these texts have already been digitized, wouldn't it be fantastic to offer users the ability to switch (or overlay, or display side-by-side) a scanned original page in a book and its corresponding digitized text? Then we could have the best of both worlds. At a minimum, we need a way to take some notes and add multiple bookmarks. That said, this is a great app for the book junkie. It's free for now.