How Google Became Generic (Not Quite Like Coca-Cola)

Interbrand.com compiles an annual top 100 “Best Global Brands” list and it always makes for fascinating reading, especially so this year now that I have started studying Information Science at university. The site’s editors use various criteria to judge how these brands are performing, such as publicly available information on their financial performance, their prospects for growth and their visibility across major markets. Many familiar names appear near the top of the rankings, such as carmakers Toyota, Mercedes-Benz and BMW and technology giants Apple, IBM and Samsung; e-commerce titan Amazon is 2016’s fastest-rising entry. No matter where in the world you live, it is likely that these companies’ products and/or services are readily available and heavily advertised across various channels. All this is very interesting, but what I really want to focus on here are two particularly big yet very different corporations, namely Coca-Cola and Google.

Last year I read Mark Pendergrast’s hugely enjoyable history of the Coca-Cola Company, “For God, Country & Coca-Cola”. As one of their most loyal customers, I was both informed and entertained by the story of how Atlanta chemist John Pemberton’s eccentric health tonic (originally containing actual cocaine, though the Company has subsequently denied this) was transformed by Asa Candler into the market-leading soft drink, and of how the business became one of the pioneers of today’s global economy, with Coke available virtually anywhere in the world.

[Image: 1930s Coca-Cola neon sign]

This remains true today: even as the Coca-Cola Company has diversified into other areas, the original brown fizzy drink is still ubiquitous wherever you go. The very name “Coca-Cola” has become synonymous with its most notable product almost everywhere, much to rival Pepsi’s chagrin. The annual Christmas adverts featuring Santa’s sleigh delivering the famous beverage are even believed to have “standardised” the appearance of Claus’s outfit in people’s minds to the Company’s red and white colours! In 2016’s Interbrand rankings, Coca-Cola is third, behind a much newer company that has much more relevance to librarians and information professionals.

Everyone seems to use Google as their search engine when they search the web, to such an extent that its very name has become a generic term. I was therefore surprised to learn that Google’s global market share was actually only 64% in 2015, as nobody ever mentions “Binging” something. This suggests that they have not succeeded in creating an internet search monopoly, even though public perception may suggest otherwise.

Despite the recent launch of the Pixel smartphone and increasingly unavoidable advertising of their Chrome browser, Google has a strikingly different business model not only from Coca-Cola but also from the vast majority of other brands on the Interbrand list. Very few of Google’s customers have actually paid any money to use their services, let alone made a conscious decision to choose them over a competitor. Anyone who buys a Mac will have chosen it in preference to a PC, and anyone who drives a Ford will have preferred it to, say, a Toyota. Yet people seem to opt for Google without considering any alternatives. This creates a potential problem for information specialists (and indeed for anyone with a stake in markets operating fairly).

In last week’s class with David Bawden, we had a first look at how relational database management systems work and did a practical exercise that involved groups of us searching for journal references on MedlinePlus and Web of Science. While these platforms have sophisticated interfaces with advanced search and command-line functions, David suggested that most users in his experience prefer a much simpler “search box” system, which does have the advantages of convenience and familiarity (I have tried to sketch the difference below). These are obviously two of Google’s main selling points as a search engine. Our role, however, is to help pinpoint the most relevant and useful data we can for our users, and so we need to think critically about how we go about doing so, even in a world where Google and its approach are dominant. If one company can have too much influence on the way people search for information, we should all be worried.
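To make that contrast between a fielded, advanced search and a simple search box more concrete, here is a rough illustration of the same information need expressed both ways. The fielded query uses Web of Science-style field tags (TS for topic, PY for publication year), but the exact syntax varies between platforms, so treat both strings as approximations rather than exact commands.

```python
# Two ways of expressing the same information need - illustrative only.

# The "search box" approach: the engine guesses which words matter and how
# they relate to one another.
simple_search = "antibiotic resistance hospitals"

# A fielded, Boolean approach (Web of Science-style tags: TS = topic,
# PY = publication year): the searcher states explicitly which field to
# match, how the terms combine, and which years to include.
fielded_search = 'TS=("antibiotic resistance" AND hospital*) AND PY=(2010-2016)'

print(simple_search)
print(fielded_search)
```

The fielded version is more precise but demands more of the searcher, which is exactly the trade-off David was describing.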


How Fans “Saved” Sonic The Hedgehog

There will be few of you reading this who have never encountered Sonic The Hedgehog. The iconic blue hedgehog capable of running at high speeds has served as the mascot for the Japanese video game developer Sega since the early 1990s. Sonic is one of the most recognisable characters in video games, even amongst non-gamers, as the franchise has spawned many spin-offs including comic books and several animated series (and many, many toys). Created by Yuji Naka as a counterpoint to Nintendo’s Mario, Sonic has become a staple of popular culture, both in Japan and in the West, with over 80 million games sold over a 25-year period.

While the early games in the series for the Mega Drive/Genesis were and are widely celebrated, more recent entries have not fared as well, either critically or commercially. Two games in particular, “Sonic The Hedgehog” (the 2006 multi-platform release) and “Sonic Boom: Rise Of Lyric” (for the Nintendo Wii U console), were met with a highly negative reception. This backlash hasn’t derailed plans for new Sonic games, however, with two new titles expected in 2017, and this, I think, is largely due to the active fan culture surrounding the character online.

[Image: Sonic the Hedgehog at Comikaze Expo 2011]

Sega has made effective use of their social media accounts to showcase fans’ love of the character and keep public awareness of the brand high despite disappointing returns on the latter-day games. A recent discussion with my friend Miguel Olmedo Morell led me to consider the implications of these developments for people studying information science.

We spent our last “Foundation” session talking about the various ways documents may develop in the future and the consequences this will have for us as people providing access to information. It is thought that the public’s changing expectations of entertainment may lead to new kinds of documents evolving. One of the concepts we encountered was “participatory culture”, in which consumers of media take a more active role in its creation, dissemination and use. While this is not a new phenomenon, emerging technologies have enabled more people than ever before to take part in the production of media. Henry Jenkins is a leading scholar in the field of cultural studies, especially in areas relating to popular culture and fan involvement in it, and I would highly recommend his 2006 book “Fans, Bloggers & Gamers”. Jenkins’s official website defines a “participatory culture” as one with the following features:

1. With relatively low barriers to artistic expression and civic engagement

2. With strong support for creating and sharing one’s creations with others

3. With some type of informal mentorship whereby what is known by the most experienced is passed along to novices

4. Where members believe that their contributions matter

5. Where members feel some degree of social connection with one another (at the least they care what other people think about what they have created).

The official Sonic account on Twitter has nearly a million followers at the time of writing. The people working on the account, whatever their academic and professional backgrounds, could be described as information professionals in that they provide access to fan documents to promote understanding. I for one would certainly be interested in such a line of work! Sega has used this medium to share fans’ Sonic memes and showcase the community’s creativity. Its relatively relaxed approach to enforcing copyright has allowed fan participation to flourish and knowledge of the character to spread. I would imagine that fans enjoy Sega sharing their artwork with the wider public and feel that the large Japanese corporation appreciates the contribution they are making to the Sonic brand. A perfect example of participatory culture!

 

Introducing the Semantic Web & BIBFRAME

While I am new to the world of libraries, I work as a bookseller in my other life and am spending increasing amounts of my time buying second-hand books and cataloguing them so they can be put out to sell. Before that point, I need to allocate a category to each one and decide how much to charge for it. We generally aim to make between 40% and 50% margin on a book, but its condition is really important when pricing it. One of my great bugbears when doing this, however, is the cataloguing software we use and the ludicrously small text box it provides for the book’s title. This means that only a small proportion of the title appears on the label when we print it out, and also that only the part that can be printed is saved in the database. Sometimes a customer asks whether we have a particular second-hand book, we type the full title into our software, and the search comes back empty because only the truncated title was ever stored! This is pretty infuriating and a prime example of how inadequate bibliographic data can make our lives needlessly difficult.

As discussed in one of my previous posts, we have grown accustomed to hearing the term “Web 2.0” to describe the period in the history of the internet where user-generated content became prevalent. This has led of course to a vast increase in the total amount of data generated worldwide and made the job of information professionals considerably more difficult as they attempt to steer their audiences towards the useful and relevant. Naturally the average web user is even more overwhelmed in comparison.

[Image: bookshelves in the library of the Hong Kong University of Science and Technology (香港科技大學)]

Thankfully, there have been huge gains in the amount of computational power available. Moore’s Law states that we can expect the number of transistors on an integrated circuit to double roughly every two years. Although this rate of growth seems to be slowing down, we still have much more power to play with than we did, say, a decade ago. The whole concept of the “Semantic Web” or “Web 3.0” is to use this extra power in an intelligent way in order to make the data work for us.
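Before moving on, a quick back-of-the-envelope illustration of what that doubling rate implies (my own rough arithmetic, ignoring the slowdown just mentioned): a decade corresponds to about five doublings, or roughly a 32-fold increase.

```python
# Rough Moore's Law arithmetic: doubling every two years means that over a
# decade we would expect roughly 2 ** (10 / 2) = 32 times as many transistors.
years = 10
doubling_period_years = 2
growth_factor = 2 ** (years / doubling_period_years)
print(growth_factor)  # 32.0
```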

At the moment, most of the content published on the Web is in HTML format rather than as raw data. I have no coding experience myself, but my understanding is that HTML mainly describes how content should be displayed, so its elements are of limited intelligibility to computers. Given the upsurge in data I have already discussed, this is becoming very inefficient. If machines were given more access to the raw data itself, the need for human input to extract meaning would be reduced. Librarians and information professionals have been working for some time to make the plethora of data they hold more freely accessible to the wider public. I also get the impression that various libraries and their parent organisations are now working more collaboratively than they have done in the past.
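To illustrate what I mean by HTML being opaque to machines, here is a minimal sketch of my own (not drawn from any particular standard): the same facts expressed first as presentation markup and then as explicitly labelled data.

```python
# The HTML fragment only tells a machine "here is a paragraph containing some
# bold text"; the roles of the pieces (title? author? year?) are never stated.
html_version = "<p><b>The Mind in the Cave</b>, David Lewis-Williams, 2002</p>"

# The same information as labelled data: each value now says what it is, so
# software can answer questions like "who wrote this?" without guessing from
# the layout.
structured_version = {
    "title": "The Mind in the Cave",
    "author": "David Lewis-Williams",
    "year": 2002,
}

print(structured_version["author"])  # -> David Lewis-Williams
```

In practice the Semantic Web community uses standards such as RDF rather than ad hoc key-value pairs, but the underlying idea is the same: state what each piece of data means rather than leaving it implicit in the presentation.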

Hence the Library of Congress has been developing BIBFRAME, which it is hoped will become a new set of standards to replace the MARC ones that are currently widespread. It is at this point that I begin to struggle to keep up! The MARC standards were designed in the 1960s so that library records could be shared digitally, but commentators now say they are not up to the task of presenting bibliographic data in the user-friendly way information professionals hope will become the new normal. More simply put, people’s expectations have changed and they want to be able to make connections between various data sets.

The LoC website states in its FAQ section that “the BIBFRAME Model is the library community’s formal entry point for becoming part of a much wider web of data, where links between things are paramount”. Apparently, the goal is to present the data we hold about books and other media at what the LoC calls “three core levels of abstraction”: Work, Instance and Item. The Work level contains things like subjects, authors and languages. The Instance level covers the various manifestations a Work may take, such as a printed publication or a Web document, together with the data, such as date and publisher, needed to access them. The Item level then allows us to access a specific copy of a Work, either physically on a library shelf or virtually. I think the aim of opening access to this data is a laudable one, provided BIBFRAME is integrated with the search engines already widely used by the public. Of course, this then leads to another debate on the power companies such as Google hold over the consumer!
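For my own benefit, here is a very rough sketch of how those three levels nest inside one another. The class and field names are my own shorthand rather than the official BIBFRAME vocabulary, and the values are purely illustrative.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative only: my shorthand for BIBFRAME's Work / Instance / Item
# levels of abstraction, not the official vocabulary.

@dataclass
class Item:
    location: str                 # a specific copy: a shelfmark or a URL

@dataclass
class Instance:
    publisher: str                # one manifestation of a Work
    date: str
    carrier: str                  # e.g. "print" or "web document"
    items: List[Item] = field(default_factory=list)

@dataclass
class Work:
    title: str                    # the conceptual essence of the resource
    author: str
    language: str
    subjects: List[str] = field(default_factory=list)
    instances: List[Instance] = field(default_factory=list)

# A made-up record showing how one Work fans out into Instances and Items.
example = Work(
    title="The Mind in the Cave",
    author="David Lewis-Williams",
    language="English",
    subjects=["Cave paintings", "Prehistoric art"],
    instances=[
        Instance(
            publisher="Example Publisher",
            date="2002",
            carrier="print",
            items=[Item(location="Shelf 709.01 LEW")],
        )
    ],
)
print(example.instances[0].items[0].location)
```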

Cave Art & The Story Of Documents

During our “Foundation” module at City, we have been looking at the story of documents and the intellectual tools we have developed to deal with them. Lyn Robinson defines documents as “the containers into which we put our representations of ideas/information/knowledge to give them some tangible permanence”. This makes documents not just scrolls or codices containing writing or pictures, or even electronic files containing data, but tools for communicating information. So what, then, were the first documents? Some people would say that Palaeolithic cave paintings were the first documents, as one could argue that they convey information to others. We think the earliest of these paintings date from around 40,000 BCE, making them quite staggeringly old.

On a purely visual level, I must say I find these paintings stunning, and I haven’t even been fortunate enough to see any in person. The hue of the colours in the Altamira paintings, for example, is appealing even in reproduction. The depiction of animals has been a constant theme in human art, and this too makes the paintings feel less remote. The fact that the people who painted these images had to venture so far into the caves also takes me aback: the absence of other artefacts deep inside suggests that humans did not generally go that far in.

Cave paintings

But what to make of these mysterious paintings? Could we say that they are works of art in any kind of modern sense? When these paintings began to be discovered in the nineteenth century (coincidentally around the time Darwin’s “On The Origin Of Species” was beginning to change preconceptions about human prehistory), scholars initially believed that they were purely decoration, evidence of hunter-gatherers having free time on their hands! This seems like a huge generalisation to me, and David Lewis-Williams’ brilliant 2002 book “The Mind In The Cave” is of great help here. I can only summarise incredibly briefly, but he makes very compelling arguments about the significance of these paintings and I would highly recommend the book. One of the points Lewis-Williams makes is that we can’t understand art outside of its context. Just because we can’t see what these pictures of animals suggested to their intended audience doesn’t mean they weren’t attempts at communication. And perhaps we can then say that they are documents. It certainly seems to me that they must have been used for something, whether that was religious ritual or magical spells.

I have an admission to make. The art that really speaks to me at least gives the illusion of effortlessness, whatever the far more laboured reality. I also relate to something that feels like it could be part of the fabric of everyday life, a work that has an element of pragmatism to it. That really self-conscious kind of art that insists on its own transcendence and total separation from the mundane tends to alienate me. I prefer artists that don’t feel the need to force their craft into the foreground in order to impress. People who create things for others to use. I think these seeming qualities are what have drawn me to cave art. I like the boldness of the outlines and the vividness of the colours. The fact that these incredible images might have had a purpose, even though we can never be sure what it may have been.

Note: Photo of Altamira Bisons is by Thomas Quine, taken from Wikimedia Commons


History & Hyperhistory: Reading Luciano Floridi

Here at City, the first module of the MSc Library Science involves us looking at Digital Information Technologies and Architectures and the implications they have for us working in the time of so-called “Big Data”. This term refers to a period in history, really only beginning relatively recently, in which data sets have become so complicated that new intellectual tools are required in order to deal with them effectively. Philosophers such as Luciano Floridi are concerned with how these developments will impact us both as individuals and as societies. His 2014 book “The Fourth Revolution: How The Infosphere Is Reshaping Human Reality” discusses his concepts of history and hyperhistory, which are new to me at this point. A bit of a disclaimer is required here: I am only 40 pages or so in, so I can only offer my initial thoughts on these ideas. Floridi himself may well go on to explain them better than I can later in the book.

The book describes how a proliferation of new user-generated content on the internet and the increasing prevalence of smart devices have led to an exponential increase in the total amount of data worldwide. Floridi believes that this data, and the devices that are generating it, have led us into a new period in human history, one which he calls “hyperhistory”. This refers to a paradigm of human experience that is totally dependent on ICTs (information and communication technologies). It is distinct from “history”, which he describes as a paradigm in which societies made use of ICTs but were not yet fully reliant on them.

There is consensus amongst people working in our field, concerned as we are with documentation, that history began when knowledge began to be transmitted in the form of written documents. Throughout recorded history, it has been humans that have actively created documents in order to communicate with others. But from when could we date Floridi’s “hyperhistory”, assuming we accept his theory? My own feeling is that this new paradigm probably began at most around twenty years ago in developed nations as we began to see increasing use of the Internet in everyday life. Things began to accelerate around the turn of the millennium when broadband started to replace slower dial-up connections and many people’s dream of the “always on” web came to fruition.

What many people term the “Millennial” generation came of age around this time. Where this generation begins in terms of dates of birth has divided analysts, with some placing the first Millennials’ births in the late 1970s and others counting those born around 2000 as part of the same cohort. I tend to think that those who came of age in the mid-to-late ’90s will generally have had markedly different life experiences from those becoming adults in the next five years or so, as those born around 2000 will do. It is this later group who cannot remember a time before ICTs were ubiquitous. Many observers agree, however, that this generation’s life experiences have been largely defined by their relationships with new technologies, wherever they fall in that fairly large timespan.


It was in the mid-00s that we saw the mass adoption of social media, blog-hosting platforms and video-sharing sites. In fact, the term “Web 2.0” was popularised in 2004 by Tim O’Reilly to describe the way in which the World Wide Web was coming to be used in a much more collaborative way by its users. Instead of largely consuming static content, people were beginning to create their own with the aid of increasingly user-friendly software accessible to the non-specialist (such as WordPress, for example). This of course has led to a huge upsurge in data and the eventual existence of vast and complex data sets. From this point onwards, I think we can say, by Floridi’s definition, that we have been living in a state of “hyperhistory”. Will this paradigm see documents largely generated by machines without conscious human involvement? It is very difficult to give definitive answers, but I feel we need to give careful consideration to the potential ramifications this could have for all of us.