The SNES Classic Mini & LIS

Nintendo’s announcement of the Super Nintendo Classic Mini retro console today provided me with an opportunity to reflect on the changing nature of technology and the challenges we face in providing access to the videogames of yesteryear. At City, we have discussed how the role of information professionals is increasingly to organise and give access to a wide range of digital media and the Super Nintendo Classic Mini aims to bring the games of the 1990s to a contemporary audience. The long-rumoured device is a miniature reproduction of Nintendo’s second home console with various classic games built-in and is expected to sell out almost immediately worldwide given how well its predecessor the NES Classic Mini did at retail.

[Image: Nintendo Classic Mini: Super Nintendo Entertainment System]

As many of you will know, the Super Nintendo Entertainment System (often abbreviated as the SNES) originally launched in 1990 and competed with the Sega Mega Drive as one of the first 16-bit machines. It was hugely successful at the time, particularly in Japan, but also in the West. Millions of machines were sold during its lifespan and many of its best-selling titles such as Super Mario World, Super Mario Kart and Star Fox are now regarded as seminal by both fans and critics.

Many people are interested in playing SNES games via emulation on modern devices and Nintendo’s Virtual Console service (available on a number of their recent platforms but not yet the Switch) allows them to do just that. It is worth mentioning here that there are also a number of unofficial emulators available but their legality is somewhat dubious.

It is no surprise that the original physical SNES games are also highly sought after by collectors. Many original titles can be found on auction sites such as eBay and Amazon Marketplace, and there is a growing number of smaller mail order businesses seeking to serve the demand for retro games. However, the nature of the media presents technological obstacles. As was common in its era, SNES games were supplied on cartridges whose capacity was measured in megabits; even the largest titles held only a few megabytes of data (!). The ability to save your progress was provided by a small battery inside the cartridge, which kept the save memory alive. The gaming medium was developing quickly in the early 1990s and games like The Legend of Zelda: A Link to the Past were becoming far too long and complex to be completed in one sitting, so the save facility was absolutely essential. Unfortunately, though, these batteries only have a finite lifespan. My attempts to resurrect classic Nintendo games have ended in failure on a number of occasions as the batteries in the cartridges have died, erasing all my progress and removing the ability to save the game ever again. If anyone knows how to replace these batteries, please let me know… In all seriousness, however, this is a challenge that anyone attempting to archive these games would have to address.

Another noteworthy feature of SNES cartridges was the occasional inclusion of an additional “Super FX” chip, which allowed a primarily 2D-oriented machine to render primitive 3D graphics. The science fiction-themed Star Fox was the first game to make use of this chip, in 1993. Although the game was developed internally by Nintendo, it relied on the Super FX chip produced by a company called Argonaut Games. Licensing disputes with Argonaut have prevented Star Fox from appearing on Nintendo’s Virtual Console service to date. Copyright disputes are a wider problem for information professionals and are often an impediment to making important documents available. It is, then, exciting news that both the original and the previously unreleased sequel will be included on the Super Nintendo Classic Mini.

Disclaimer: I do not own rights to the above image and it is presented for informational purposes only

 

Digital Public Library of America: My First Impressions

Library Science students at City have been studying a module on Digital Libraries this semester. These emerging platforms, used to share library collections electronically, are playing an increasingly prominent role in the lives of both library users and librarians. I thought it would be useful to take a quick look at one very well-known example of a digital library, the Digital Public Library of America, to see what was on offer to both academic researchers and interested members of the public.

I first became aware of the project through John Palfrey’s excellent book “BiblioTech: Why Libraries Matter More Than Ever in the Age of Google”, which was published a couple of years ago. As the title would suggest, his central argument is that we need to stop thinking of libraries as collections of dusty old books, even if we have fond memories of the way they were in decades past, and instead embrace the possibilities they can offer for learning in our contemporary digital environment. John Palfrey writes as something of an outsider, being a lawyer by trade. Nevertheless, he has been closely involved with libraries in his working life, first managing the Harvard Law School Library and then as the principal of the prep school Phillips Academy. He is alarmed by the possibility of for-profit tech companies such as Google encroaching on the role public libraries have in disseminating information. The commercial success of the Google search box has arguably changed library users’ expectations in a way that does not always allow the most accurate or relevant match to a query to be easily found.

Given that Palfrey is one of the founding members of the DPLA initiative, we would expect to see some of his concerns addressed in the way it has been implemented. The home page gives us some indication of the vast scope of the collection: we are told we can “explore 15,384,819 items from libraries, archives, and museums”! This is both enticing and overwhelming! The Tagxedo word cloud below gives some idea of the contents of the DPLA homepage.

[Image: Tagxedo word cloud of the DPLA homepage]

What is interesting about the emphasis on libraries, archives and museums is that it points towards these formerly separate entities converging, as digitised images of documents and objects can increasingly be stored and accessed in the same way. The DPLA is emulating the likes of Google in bringing together documents from a whole variety of sources in a convenient way that is free at the point of use. The front page also emphasises how users can search for documents relating to places and events that have been in the news recently, something Google also does effectively. We are invited to search for documents relating to Clemson, South Carolina, for instance, as American users would likely be aware that its university’s football team won the College Football Playoff in January. Exploring by date gives us the option of looking at documents pertaining to 1949, when it looks like some famous horse race took place (the Kentucky Derby, possibly?). The Twitter feed widget in the bottom right-hand corner explains that we can search for material from the Harriet Tubman Museum, which again shows that the site is responding to current events (February being designated Black History Month in the U.S.A.).

The site seems very much to cater for a domestic audience, which is a little unusual in today’s climate. The overall look and feel of the site suggest to me that its target users are adults who are relatively technologically savvy and well-educated (the type of people we might stereotypically assume to be local history enthusiasts?). While there is a menu for educational resources, it does not really seem to be seeking to engage children and young people. This is perhaps surprising given John Palfrey’s position as a school principal. Equally visible is a menu giving information for prospective software developers. The whole intention of the DPLA is that it becomes a platform upon which new applications can be built, and perhaps it is thought that these will make the collection more accessible to a wider audience.
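That developer-facing side is easy to demonstrate: the DPLA publishes a public API that returns its metadata as JSON. The sketch below is only a rough illustration based on my reading of the developer documentation; it assumes the v2 “items” endpoint, a registered (free) API key, and response fields named count, docs and sourceResource, all of which should be verified against the current documentation before relying on it.

```python
# A rough sketch of querying the DPLA's public API for items about a place.
# Assumptions: the v2 "items" endpoint is current and you have registered
# for a free API key via the DPLA developer pages.
import requests

API_KEY = "YOUR_DPLA_API_KEY"  # placeholder: obtain your own key from the DPLA

def search_dpla(query, page_size=5):
    """Return the total hit count and a few item titles for a keyword query."""
    response = requests.get(
        "https://api.dp.la/v2/items",
        params={"q": query, "page_size": page_size, "api_key": API_KEY},
        timeout=30,
    )
    response.raise_for_status()
    data = response.json()
    titles = [doc.get("sourceResource", {}).get("title") for doc in data.get("docs", [])]
    return data.get("count"), titles

if __name__ == "__main__":
    count, titles = search_dpla("Clemson, South Carolina")
    print(f"{count} items found; first few titles: {titles}")
```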

Word Clouds & Illuminated Manuscripts

A few years back, my father and I went to see the exhibition “Royal Manuscripts: The Genius of Illumination” at the British Library. The books on display were those collected by English monarchs during the Middle Ages, the idea being that they were of both historical and artistic value. An illuminated manuscript is one where small illustrations were added to the text, often using gold leaf. The great expense of doing this, as well as the time required, meant that these books were precious artefacts even in their own time, and owning them became symbolic of power and authority. No wonder kings and queens were keen to acquire them! The exhibition was one of the most memorable I have ever seen, as the intricate detailing of the lettering was truly breathtaking. The contrast between the dark room in which the books were displayed and the shining gold of the illustrations was really striking. Most of the works in the collection were religious texts, being missals, psalters or editions of the gospels. This was obviously indicative of the centrality of Christianity to life in medieval England, and most of the books were intended for practical use. The themes of the images can also tell us something about the priorities of the time.

Paying what would have been huge amounts of money for the time to have one of these books produced was a show of piety on the part of the owner, as well as of power. Even so, it was not conventional to commission the artists to portray oneself in the illustrations. However, one book owned by Henry VIII did just that, and I remember the label on the cabinet explaining that it would have been usual for the Biblical King David to be depicted at that point of the text (I think it was one of the books of the Old Testament). This was a typical display of egotism by Henry and tells us a lot about how he wanted to be perceived by others.

In contemporary society, we can use visualisation tools such as word clouds to see which terms occur most frequently in a body of text. They are particularly widely used to look at metadata on the web, and one of their main advantages is that popular services such as Tagxedo update in real time to reflect the way sites’ content is constantly changing. Like illuminated manuscripts, they are intended to be aesthetically appealing as well as useful. I might be extrapolating slightly, but I do think there are some similarities! There are plenty of options to play around with colours and fonts here! Let’s go back to the exhibition I was talking about originally. I am going to take some text from the British Library’s medieval manuscripts blog about “Royal Manuscripts: The Genius of Illumination” and feed it into the Wordle application to create a word cloud. By seeing which words are most commonly used, we can see what the priorities of the exhibition may have been.
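Wordle and Tagxedo are point-and-click web tools, but the underlying idea (count how often each term appears and scale the words accordingly) can be sketched in a few lines of Python. This is only an illustrative sketch, not what Wordle itself does internally, and it assumes the third-party wordcloud package is installed (pip install wordcloud).

```python
# A rough Python equivalent of what a word cloud generator does.
# The text of the blog post would be pasted into the `text` variable.
from collections import Counter
from wordcloud import WordCloud, STOPWORDS

text = """(paste the text of the British Library blog post here)"""

# Count the most frequent terms ourselves, ignoring common stopwords...
words = [w.strip(".,;:!?\"'()").lower() for w in text.split()]
counts = Counter(w for w in words if w and w not in STOPWORDS)
print(counts.most_common(10))

# ...and let the library draw the cloud, sizing each word by frequency.
cloud = WordCloud(width=800, height=400, background_color="white").generate(text)
cloud.to_file("royal_manuscripts_cloud.png")
```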

[Image: Wordle word cloud generated from the British Library blog post]

We can see that the words “royal”, “manuscripts”, “collection” and “English” occur most often. This does give us an accurate impression of what the exhibition had to offer, even though this particular word cloud doesn’t really offer us any surprises. Perhaps applications such as Wordle are better suited to much larger bodies of text, where the most commonly used terms would be less obvious. Are there really many similarities with the illuminated manuscripts I discussed earlier? Well, not really. It is difficult to compare a 21st-century web application with a hand-painted book from the Middle Ages, even though both try to draw our eyes to the words and letters in a body of text, beyond the fact that both are nice to look at!

 

Our Visit to the National Archives


Photo of the National Archives, taken from Wikimedia Commons

On Wednesday 23rd November, MSc Library Science students from City were offered the chance to visit the National Archives in Kew (many thanks to David Haynes for organising everything for us). I just wanted to take this opportunity to share a few thoughts and observations on the day’s activities…

Once we had all arrived and been ushered into the Training Room, we were greeted by Val Johnson and the programme began in earnest. Firstly, we heard from Chris Day about the role riots and public disturbances had played in the history of the Archives. Frustration at the voting restrictions in 19th-century England (where only landowners had the vote and so-called rotten boroughs were disproportionately represented in the House of Commons) had led to a wave of popular protest, culminating in events such as the Peterloo Massacre in 1819. The reaction of the authorities was quite literally to read the Riot Act to the protesters (!), and this naturally led to a glut of paperwork which then needed a home. This certainly contributed to the founding of the Public Record Office in 1838, originally located in Chancery Lane. The talk was a good illustration of how political events can have long-lasting and often unforeseen consequences. Chris’s section was followed by Howard Davies, who gave us a brief overview of the history of the Archives that was both informative and entertaining. We were then split into two groups for the next set of activities.

My group was first given a tour of the public exhibition gallery which had some interesting sights to take in. There was a display on the Cambridge Spies which looked intriguing (unfortunately time was a bit limited) and also items connected with William Shakespeare (it being the 400th anniversary of his death in 2016). There were also some eye-catching visual installations that were linked to the Somme centenary commemorations. I thought the one of the machine-gunner was particularly memorable, though a bit frightening…

After an introduction to the cataloguing system used by the Archives (I remember David Haynes asking some pertinent questions about metadata) and a short tea break, my group was shown around the repositories by Document Services. Needless to say, there are a huge number of documents at the Kew site and a vast amount of space is needed to hold them (so much so that many of the lesser-requested items are now held in a former salt mine in Cheshire), although they do use mobile shelving to cram in as much as possible. It was interesting to see the employees at work and impressive to find out that they aim to supply a document requested by a reader in under an hour. Understandably, there is a climate control system in place at the repositories and, although there are alarms, it is usually the staff who report it if something seems wrong. It was also fascinating to see the variety of documents, such as an 18th-century map of the British invading one of the French colonies in the West Indies (our guide explained that you can tell it was a British map because it shows the French running away!).

Once we returned to the Training Room, Tom Storrar gave us a useful introduction to Discovery, the Archives’ online catalogue, which is free to use and contains documents such as service records from the First World War. After that, Diana Newton gave a brief talk about the challenges of archiving all the web pages produced by the various departments of HM Government (she estimated there were around 800 URLs currently in use!); it can take a web crawler weeks to go through them all. And then it was time to head home, so it just remains for me to thank everyone at the Archives for putting together such a diverse programme for us.

 

Colin Kaepernick, the National Anthem Protest & Twitter

In last week’s session, we looked at the various forms of web services and considered the roles they play in our lives. Increasingly, we rely on cloud computing to store and retrieve our documents, and manufacturers have done away with CD/DVD drives, instead including only USB ports on newer machines. Our data is held on remote servers, with the advantage that it can be accessed from any device. We then went on to talk about social media platforms such as Twitter, one such web service, which generates huge amounts of data about its users. The practical exercise that followed involved seeing what information could be gleaned from this data and the ethical questions that this may pose.

One important concept that was introduced was that of the API or Application Programming Interface. APIs allow third party applications to acquire data from these services for their own purposes. This may allow the makers of these programs to collect valuable information about the demographics of people using the platforms or their political preferences. Martin Hawksey has developed an app called TAGS which allows users to collect a whole series of tweets with the same hashtag on one Google Spreadsheet. This would allow us to see what type of people were talking about an issue and build up a picture of what they were saying about it. A controversial topic with a suitable hashtag could yield fascinating results, although Twitter users would probably be unaware they were contributing to this bit of research.

[Image: Colin Kaepernick and Kyle Williams warming up]

One potential case study here would be a high-profile recent controversy in American football (and one that is highly relevant with the presidential election in the background). San Francisco 49ers quarterback Colin Kaepernick’s decision to protest against what he sees as the ongoing oppression of African-Americans, by kneeling during the playing of the U.S. national anthem that precedes NFL games, caused a furore on Twitter and spawned numerous trending topics. While receiving death threats for his “unpatriotic” stance, he has also become a hero to many, having the highest-selling jersey on nflshop.com during the month of September. Although he was the starter during the 49ers’ run to Super Bowl XLVII in 2013, Kaepernick entered the 2016 season backing up the often-derided Blaine Gabbert on a team widely expected to struggle. He took a very real risk, in my view, by protesting, because it was uncertain at that point whether he would be able to continue his professional career, at least in the NFL.

By putting the term #ColinKaepernick into an app like TAGS, we could see some of the responses to the protest on Twitter and who they emanated from. By looking at the other hashtags those users employed, we could build up a fairly detailed picture of their cultural background and political beliefs. We would expect that many of those sympathetic would be other African-Americans, especially as, according to recent research, they are both more likely to use Twitter than other groups and more likely to tweet frequently. We could see whether these people had used terms like #BlackLivesMatter or #Ferguson. We might also anticipate that those most hostile to Kaepernick would be political conservatives who used hashtags such as #MakeAmericaGreatAgain that are associated with support for Donald Trump. The point is that none of these users has specifically given their consent to their data being used in this way (other than agreeing to Twitter’s Terms of Service), and they may not be comfortable with such assumptions being made about their beliefs. Making sure this data is used responsibly is something we information professionals can contribute to.
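For anyone curious how this might look outside TAGS (which is a Google Sheets template rather than something you code yourself), here is a rough Python sketch of the same idea: collect recent tweets carrying the hashtag and count the other hashtags used alongside it. It assumes the tweepy library, valid Twitter API credentials, and an older tweepy release that still exposes the v1.1 search endpoint as api.search (newer versions rename it), so treat it as an outline of the approach rather than working production code.

```python
# Sketch: gather tweets with a hashtag and tally the hashtags that appear alongside it.
from collections import Counter
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

co_occurring = Counter()
for tweet in tweepy.Cursor(api.search, q="#ColinKaepernick", count=100).items(500):
    # Each tweet carries an "entities" structure listing the hashtags it used;
    # counting the other tags gives a crude picture of the surrounding discourse.
    for tag in tweet.entities.get("hashtags", []):
        text = "#" + tag["text"].lower()
        if text != "#colinkaepernick":
            co_occurring[text] += 1

print(co_occurring.most_common(20))  # e.g. how often #BlackLivesMatter appears
```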

Margaret Egan, Jesse Shera & Social Epistemology

[Image: Adelbert Hall, Case Western Reserve University]

Last time in our “Foundation” class, David Bawden spoke to us about epistemology, the philosophical study of knowledge and its scope. The relevance of this to LIS is obvious, given that our focus is enabling understanding through the use of data. David followed this by introducing some of the various attempts over the years to apply a philosophical framework to our discipline. This has been difficult to achieve, as there has been no agreement within the profession as to whether such a theoretical structure is even needed at all. People involved in Library and Information Science have generally taken pre-existing philosophical ideas from other contexts and translated them into ours; the UCL academic Brian Campbell Vickery was particularly critical of this practice. This has started to change in recent years with the work of Luciano Floridi, now at the Oxford Internet Institute, who has created his own philosophical ecosystem for the study of information and sees librarians and information scientists as working at the “applied end” of it. While his ideas have become increasingly influential, there are certainly competing ones. Today I am going to look briefly at the thought of Margaret Egan and Jesse Shera, who coined the term “social epistemology” to make sense of what librarians did and were trying to achieve.

Margaret Egan and Jesse Shera worked together at Case Western Reserve University in Cleveland, Ohio during the 1950s, and this is when they developed the theory. While Shera was by far the more high-profile figure, in part due to Egan’s early death from a heart attack, he did credit her for her work on this. The basic premise of social epistemology is that knowledge is not an isolated phenomenon but is instead interwoven with a given societal context. This was in opposition to traditional views of knowledge as “justified belief” held by an individual person independently of others. Egan and Shera suggest that societies at different times and places create paradigms of knowledge, and that libraries and librarians operate within these, working to meet the needs of their users. This idea has similarities to the concept of domain analysis that Birger Hjørland and Hanne Albrechtsen later wrote about. It seems to me no coincidence that this more humanistic approach to library science began to develop at a time of great social change in the U.S.A., when the Civil Rights Movement saw African-Americans campaign against racial discrimination and the emergence of feminism was beginning to challenge conventional ideas about gender. There were, of course, great technological advances being made in this period which would impact on the world of libraries, and so it makes sense that people like Egan and Shera would see knowledge not as immutable but as something that would evolve along with society.

Photo is of Adelbert Hall at Case Western Reserve University where Margaret Egan and Jesse Shera worked. Taken from Wikimedia Commons.

How Google Became Generic (Not Quite Like Coca-Cola)

Interbrand.com compiles an annual list of the top 100 “Best Global Brands” and it always makes for fascinating reading, especially so this year, now that I have started studying Information Science at university. The site’s editors use various criteria to judge how these brands are performing, such as publicly available information on their financial performance, their prospects for growth and their visibility across major markets. Many familiar names are represented near the top of the rankings, such as carmakers Toyota, Mercedes-Benz and BMW and technology giants Apple, IBM and Samsung. E-commerce titans Amazon are 2016’s fastest-rising entry. No matter where in the world you live, it is likely that these companies’ products and/or services are readily available and heavily advertised across various channels. All this is very interesting, but what I really want to focus on here is two particularly big yet very different corporations, namely Coca-Cola and Google.

Last year I read Mark Pendergrast’s hugely enjoyable history of the Coca-Cola Company, “For God, Country & Coca-Cola”. As one of their most loyal customers, I was both informed and entertained by the story of how Atlanta chemist John Pemberton’s eccentric health tonic (originally containing actual cocaine, though the Company has subsequently denied this) was transformed into the market-leading soft drink by Asa Candler and became one of the pioneers of today’s global economy, with Coke becoming available virtually anywhere in the world.

[Image: 1930s Coca-Cola neon sign]

This remains true today: even as the Coca-Cola Company has diversified into other areas, the original brown fizzy drink remains ubiquitous wherever you go. The very name “Coca-Cola” has become synonymous with its most notable product almost everywhere, much to rival Pepsi’s chagrin. The annual Christmas adverts featuring Santa’s sleigh delivering the famous beverage are even believed to have “standardised” the appearance of Claus’ outfit in people’s minds to the Company’s red and white colours! In 2016’s Interbrand rankings, Coca-Cola is third, behind a much newer company that has far more relevance to librarians and information professionals.

Everyone seems to use Google as their search engine when they search the web, to such an extent that its very name has become a generic term. I was actually surprised to learn that Google’s global market share was only 64% in 2015, as nobody ever mentions “Binging” something. This suggests that they have not quite succeeded in creating an internet search monopoly, even though public perception may suggest otherwise.

Despite the recent launch of the Pixel smartphone and the increasingly unavoidable advertising for their Chrome browser, Google has a very different business model not only from Coca-Cola but also from the vast majority of other brands on the Interbrand list. Very few of Google’s customers have actually paid any money to use their services, let alone made a conscious decision to choose them over a competitor. Anyone who buys a Mac will have chosen it in preference to a PC, and anyone who drives a Ford will have preferred it to, say, a Toyota. Yet people seem to opt for Google without considering any alternatives. This creates a potential problem for information specialists (and indeed for anyone with a stake in markets operating fairly).

In last week’s class with David Bawden, we had a first look at how relational database management systems work and did a practical exercise that involved groups of us searching for journal references on MedlinePlus and Web of Science. While these platforms have sophisticated interfaces with advanced search and command line functions, David suggested that most users in his experience prefer a much simpler “search box” system. This does have the advantage of convenience and familiarity. These are obviously two of Google’s main selling points as a search engine. Our role, however, is to help pinpoint the most relevant and useful data we can for our users and so we need to think critically about how we go about doing so, even in a world where Google and its approach are dominant. If one company can have too much influence on the way people search for information, we should all be worried.
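As a postscript to that point about search boxes, the contrast David was drawing can be shown with a toy example using Python’s built-in sqlite3 module. The table and records are invented, and this is emphatically not how Medline or Web of Science are implemented; it simply contrasts a structured, field-by-field query with a single loose keyword match.

```python
# Toy comparison: a structured query against a relational table versus a
# Google-style single search box. All data here is invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE refs (
    title TEXT, journal TEXT, year INTEGER, subject TEXT)""")
conn.executemany("INSERT INTO refs VALUES (?, ?, ?, ?)", [
    ("Aspirin and stroke prevention", "The Lancet", 2014, "cardiology"),
    ("Search behaviour of undergraduates", "JDoc", 2015, "information science"),
    ("Aspirin use in primary care", "BMJ", 2016, "general practice"),
])

# Structured search: the user (or librarian) specifies fields explicitly.
structured = conn.execute(
    "SELECT title, journal FROM refs WHERE subject = ? AND year >= ?",
    ("information science", 2015)).fetchall()

# "Search box" search: one keyword matched loosely against everything.
box = conn.execute(
    "SELECT title, journal FROM refs WHERE title LIKE ? OR subject LIKE ?",
    ("%aspirin%", "%aspirin%")).fetchall()

print("Structured:", structured)
print("Search box:", box)
```

The structured query returns precisely what was asked for, while the search-box version is easier to type but depends entirely on the keyword happening to appear somewhere in the record.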

How Fans “Saved” Sonic The Hedgehog

There will be few of you reading this who have never encountered Sonic The Hedgehog. The iconic blue hedgehog capable of running at high speeds has served as the mascot for the Japanese video game developer Sega since the early 1990s. Sonic is one of the most recognisable characters in video games, even amongst non-gamers, as the franchise has spawned many spin-offs including comic books and several animated series (and many, many toys). Created by Yuji Naka as a counterpoint to Nintendo’s Mario, Sonic has become a staple of popular culture, both in Japan and in the West, with over 80 million games sold over a 25-year period.

While the early games in the series for the Mega Drive/Genesis were and are widely celebrated, more recent entries have not fared as well, either critically or commercially. Two games in particular, “Sonic The Hedgehog” (the 2006 multi-platform release) and “Sonic Boom: Rise of Lyric” (for the Nintendo Wii U console), were met with a highly negative reception. This backlash hasn’t derailed plans for new Sonic games, however, with two new titles expected in 2017, and this, I think, is largely due to the active fan culture surrounding the character online.

[Image: Sonic the Hedgehog cosplay at Comikaze Expo 2011]

Sega has made effective use of their social media accounts to showcase fans’ love of the character and keep public awareness of the brand high despite disappointing returns on the latter-day games. A recent discussion with my friend Miguel Olmedo Morell led me to consider the implications of these developments for people studying information science.

 We spent our last “Foundation” session talking about the various ways documents may develop in the future and the consequences this will have for us as people providing access to information. It is thought that the public’s changing expectations of entertainment may lead to new kinds of documents evolving. One of the concepts we encountered was that of “participatory culture” which is where consumers of media take a more active role in its creation, dissemination and use. While this is not a new phenomenon, emerging technologies have enabled more people than ever before to take part in the production of media. Henry Jenkins is a leading scholar in the field of cultural studies, especially in areas relating to popular culture and fan involvement in it, and I would highly recommend his 2006 book “Fans, Bloggers & Gamers”. Henry’s official website defines “participatory culture” as one with the following features:

1. With relatively low barriers to artistic expression and civic engagement

2. With strong support for creating and sharing one’s creations with others

3. With some type of informal mentorship whereby what is known by the most experienced is passed along to novices

4. Where members believe that their contributions matter

5. Where members feel some degree of social connection with one another (at the least they care what other people think about what they have created).

The official Sonic account on Twitter has nearly a million followers at the time of writing. The people working on the account, whatever their academic and professional backgrounds, could be described as information professionals in that they provide access to fan documents to promote understanding. I for one would certainly be interested in such a line of work! Sega has used this medium to share fans’ Sonic memes and showcase the community’s creativity. Their relatively lax interpretation of copyright laws has allowed fan participation to flourish and knowledge of the character to spread. I would imagine that fans enjoy Sega sharing their artwork with the wider public and feel that the large Japanese corporation appreciates the contribution they are making to the Sonic brand. A perfect example of participatory culture!

 

Introducing the Semantic Web & BIBFRAME

While I am new to the world of libraries, I work as a bookseller in my other life and am spending increasing amounts of my time buying second-hand books and cataloguing them so they can be put out to sell. Before that point, I need to allocate a category for each one and decide how much to charge for it. We generally aim to make between 40% and 50% margin on a book, but its condition is really important when pricing it. One of my great bugbears when doing this, however, is the cataloguing software we use and the ludicrously small text box it provides for the book’s title. This means that only a small proportion of the title appears on the label when we print it out, and also that only the part that can be printed is saved within the database. Sometimes we are asked whether we have a particular second-hand book, and when we enter the full title into our software it draws a blank, because only the truncated title was ever stored. This is pretty infuriating and a prime example of how inadequate bibliographic data can make our lives needlessly difficult.
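To make the failure concrete, here is a toy reconstruction of what appears to be going on; the length limit, the code and the book title are all invented, and I obviously have no idea how the real software is written. Because only the truncated title is ever stored, an exact match on the full title fails even though the book is sitting in the database.

```python
# Toy reconstruction of the truncated-title problem. Everything here
# (the length limit, the title, the price) is invented for illustration.
MAX_TITLE_LENGTH = 30  # hypothetical limit imposed by the label text box

catalogue = {}

def add_book(title, price):
    # The full title never reaches the database: only what fits on the label is kept.
    catalogue[title[:MAX_TITLE_LENGTH]] = price

add_book("A Complete History of the Illuminated Manuscript in England", 8.99)

# A customer asks for the book by its full title...
query = "A Complete History of the Illuminated Manuscript in England"
print(query in catalogue)   # False: the exact lookup fails
print([t for t in catalogue if query.startswith(t)])  # the truncated entry was there all along
```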

As discussed in one of my previous posts, we have grown accustomed to hearing the term “Web 2.0” to describe the period in the history of the internet where user-generated content became prevalent. This has led of course to a vast increase in the total amount of data generated worldwide and made the job of information professionals considerably more difficult as they attempt to steer their audiences towards the useful and relevant. Naturally the average web user is even more overwhelmed in comparison.

[Image: old books on the shelves of the HKUST Library]

Thankfully, there have been huge gains in the amount of computational power available. Moore’s Law states that we can expect the number of transistors in a circuit to double every two years. Although this rate of growth seems to be slowing down, we still have much more power to play with compared to, say, a decade ago. The whole concept of the “Semantic Web” or “Web 3.0” is to use this extra power in an intelligent way in order to make the data work for us.

At the moment, most of the content published on the Web is in the HTML format rather than as raw data. I have no coding experience myself, but my understanding is that HTML elements are of limited intelligibility to computers. Given the upsurge in data I have already discussed, this is becoming very inefficient. If machines were given more access to the raw data itself, the need for human input to extract meaning would be reduced. Librarians and information professionals have been working for some time to make the plethora of data available to them more freely accessible to the wider public. I also get the impression that various libraries and their parent organisations are now working more collaboratively than they have done in the past.
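To illustrate what “limited intelligibility” means in practice, here is a minimal sketch using Python’s built-in html.parser; the snippet of HTML is invented. The parser can report where the tags and strings of text are, but nothing in the markup tells a machine which words are a title, a date or a publisher; that meaning still has to be supplied by a human, or by additional structured data.

```python
# Minimal illustration: HTML gives a machine structure, not meaning.
from html.parser import HTMLParser

snippet = "<p><b>The Hobbit</b> was published in 1937 by George Allen &amp; Unwin.</p>"

class ShowStructure(HTMLParser):
    def handle_starttag(self, tag, attrs):
        print("start tag:", tag)
    def handle_data(self, data):
        print("text:", data)

ShowStructure().feed(snippet)
# Output is just tags and undifferentiated strings of text:
# start tag: p
# start tag: b
# text: The Hobbit
# text:  was published in 1937 by George Allen & Unwin.
```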

Hence the development of BIBFRAME by the Library of Congress, which it is hoped will become a new set of standards replacing the MARC ones currently widespread. It is at this point that I begin to struggle to keep up! The MARC standards were designed in the 1960s so that library records could be shared digitally, but commentators now say they are not up to the task of presenting bibliographic data in the user-friendly way information professionals hope will become the new normal. More simply put, people’s expectations have changed and they want to be able to make connections between various data sets.

The LoC website states in its FAQ section that “the BIBFRAME Model is the library community’s formal entry point for becoming part of a much wider web of data, where links between things are paramount”. Apparently, the goal is to present the data we hold about books and other media at what the LoC calls “three core levels of abstraction”: Work, Instance and Item. The Work level contains things like subjects, authors and languages. This leads us to the Instance level, which covers the various manifestations a Work may take, such as a printed publication or a Web document, together with the data needed to identify them, such as date and publisher. The Item level allows us to access a specific copy of a Work, either physically on a library shelf or virtually. I think the aim of opening up access to this data is a laudable one, provided BIBFRAME is integrated with the search engines already widely used by the public. Of course, that then leads to another debate about the power companies such as Google hold over the consumer!
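To make those three levels a little more concrete, here is a rough sketch of how a single book might be described as linked data, using the third-party rdflib package. The BIBFRAME namespace and the property names follow my reading of the Library of Congress documentation and should be checked against the published vocabulary; the identifiers and shelfmark are invented.

```python
# A rough sketch of the Work / Instance / Item idea as linked data.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

BF = Namespace("http://id.loc.gov/ontologies/bibframe/")
EX = Namespace("http://example.org/")  # a made-up namespace for our own identifiers

g = Graph()
g.bind("bf", BF)

# Work level: the abstract creation -- author, subject and language live here.
g.add((EX.work1, RDF.type, BF.Work))
g.add((EX.work1, RDFS.label, Literal("BiblioTech / John Palfrey")))

# Instance level: a particular published manifestation of the Work.
g.add((EX.instance1, RDF.type, BF.Instance))
g.add((EX.instance1, BF.instanceOf, EX.work1))
g.add((EX.instance1, RDFS.label, Literal("Hardback edition, Basic Books, 2015")))

# Item level: the actual copy you can fetch from a shelf (or a URL).
g.add((EX.item1, RDF.type, BF.Item))
g.add((EX.item1, BF.itemOf, EX.instance1))
g.add((EX.item1, RDFS.label, Literal("Copy held at shelfmark 020.285 PAL")))

# rdfs:label is used here for readability; BIBFRAME has richer title properties.
print(g.serialize(format="turtle"))
```

Serialised as Turtle, each of the three resources becomes a node with its own URI that other datasets can link to, which is presumably the “much wider web of data” the FAQ has in mind.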