Why Google Gets It

I’ve stolen the title of this post from Shawn Rider’s article “Why Nintendo Gets It” because the title explains the whole point of this post and because of the parallels between Google and Nintendo. Nintendo gets it because they understand that games are about playability more than technological innovation, and that innovation can be evolutionary or sustaining as well as disruptive. Evolutionary or sustaining innovations build incrementally on existing structures, while disruptive innovation changes the whole landscape.
The move from the 8-bit NES to the Super Nintendo was an evolutionary or sustaining innovation, largely technological, but that technology enabled longer and deeper games. The console gaming market changed again in response to the Sony PlayStation 2, both because of the system itself and because so many players had grown up with games. In the latest console generation, however, Nintendo showed how they got it by releasing the Wii, inviting non-players and casual players into gaming and inviting existing players to learn to play in new ways. Nintendo used a disruptive technology to their advantage: instead of investing in the best graphics card on the market and pushing an ever-increasing polygon count, they invested in playability and leveraged it for an even greater market share and a community of Nintendo followers.
Google announced yesterday that they’re scanning microfilm to digitize historical newspapers, just the latest step in their work to get more content online. This could be seen as an evolutionary innovation: Google has digitized books, and now they’re working on newspapers. But Google gets it because they make content interoperable and open. Google is digitizing and indexing whatever it can to ensure access to the most data for its search engine and for paid services like advertisements. Google isn’t simply adding newspapers into this collective vat of information, though. Google has shown time and again that they’re adding and indexing content so that it can be faceted (searching only news, or only places with mapped locations) and so that those facets can be connected together in context.
Placing content in context is an enormous task, especially when context means historical, spatial, cultural, social, and personal. Some of the existing components in traditional library records (if complete) can be extended and mined to create a basic infrastructure that can then be further enhanced, mined, and adapted for other uses, and this is what Google has done. This enhancement, mining, and adaptation are also what UF’s Digital Library Center has been doing for several years, beginning in earnest with the Ephemeral Cities Project. The Ephemeral Cities Project began before I came to the Digital Library Center, and its goals are only now beginning to be fully realized with the Map It! feature for items in the UF Digital Collections, enabled by KML becoming an Open Standard in 2008 and our subsequent use of the Google Maps API, as sketched below.
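To make the KML-plus-Maps-API connection concrete, here is a minimal sketch of how a mapped item’s KML file can be overlaid on a Google Map. It assumes the Google Maps JavaScript API script is already loaded on the page; the container id, function name, and KML URL are hypothetical stand-ins, not the actual Map It! implementation.

```typescript
// Sketch: overlaying a digitized item's KML placemarks on a Google Map.
// Assumes the Google Maps JavaScript API has been loaded via its script tag;
// the KML URL below is a hypothetical stand-in for a real collection item.

declare const google: any; // provided globally by the Maps JavaScript API

function showMapIt(containerId: string, kmlUrl: string): void {
  // Create a map with an arbitrary starting view (roughly Gainesville, FL).
  const map = new google.maps.Map(document.getElementById(containerId), {
    center: { lat: 29.65, lng: -82.32 },
    zoom: 7,
  });

  // KmlLayer fetches the (publicly accessible) KML file and renders its
  // placemarks as clickable markers and overlays on the map.
  new google.maps.KmlLayer({ url: kmlUrl, map });
}

// Hypothetical usage: a KML file describing the places linked to one item.
showMapIt("map", "https://example.org/collections/item-123/places.kml");
```

Because KML is an open standard, the same file that drives this map view can be reused by Google Earth or any other KML-aware tool, which is exactly the kind of interoperability the post is describing.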
We’ve also been digitizing newspapers, for the Florida Digital Newspaper Library and the Caribbean Newspaper Imaging Project, for the same reasons Google is interested. Newspapers tell the stories of history in the making, connecting current social and personal concerns to larger cultural and historical movements and eras, and they tell the local stories of their areas alongside the larger national and international stories of their days.
What surprises me most is not that Google gets it in terms of seeing both the immediate need and the long tail of future goals for massive amounts of interoperable data, but that so many people got it and were working toward it much earlier than I’d have expected. In UF’s Digital Library Center alone, Director Erich Kesse first proposed the Ephemeral Cities Project in 2003, and Mark Sullivan (our wonderful programmer at the time, who’s still with us) began developing the digital library software for users to access such data and for digital library staff to create the necessary metadata as easily as possible within the digitization process. I can’t say that I got it in 2003, but I’m glad so many others did, so that the infrastructure is in place to help support the wonderful projects to come.
I’m also extremely happy that Google gets it, in particular because they have the business infrastructure to make the incredibly tedious and expensive work of digitizing materials in context affordable and sustainable through ads, which provide a return on investment. Universities return society’s investment in the form of knowledge, a more educated and capable workforce and community, and the infrastructure necessary for other advances, but in difficult economic times the investment itself becomes harder to make. Luckily for all, Google gets the full context of their investment and knows that digitized materials have more value when they can easily be used, thus ensuring greater usage. The smart business plan for Google requires keeping materials open and usable by as many others as possible, making it good business for Google to do what’s already in the public interest. Of course, Google is facing monopoly concerns, and smart business models can go bad with changes in leadership, so it’s smartest for public institutions like universities to continue getting it and ensuring that the digital revolution brings as many benefits as it can for accessing, using, and understanding information, while building the infrastructure for the next innovations, be they sustaining or disruptive.
