“The Transformation of the World”: Part Two – Osterhammel on Libraries


As I mentioned in my previous post, I have been reading Jürgen Osterhammel's "The Transformation of the World", a huge book that attempts to document the many societal and political changes of the nineteenth century around the world. Looking through the index, I was intrigued to see several pages dealing with the subject of libraries and a couple more discussing archives and museums (the boundaries between these institutions not always being obvious). Most of the books on library history recommended for our courses at City are intended for a specialist audience and their authors are usually people working in the field of LIS. With this in mind, I thought it would be interesting to see what someone like Osterhammel, whose background lies outside the subject area and who was writing a much more general book on international history, had to say about them…

One of the early chapters of the book is entitled "Memory and Self-Observation" and Osterhammel uses it to remind readers in the West that the legacy of the nineteenth century surrounds us every day. He identifies the musical art form of opera as one of our key cultural inheritances from this period. Not only are works from the nineteenth century still the bedrock of the modern operatic repertoire, but we also have the permanent physical presence of the many opera houses built at this time. Leading on from that, Osterhammel discusses how much of the architecture of Europe's towns and cities dates from the nineteenth century, particularly many important civic buildings. Some of these, of course, were built as libraries and many of them are still being used for this purpose today. He could have mentioned the Carnegie libraries here, for instance, but he doesn't…

Osterhammel certainly seems enthusiastic about libraries, describing them as no less than "places of remembrance that crystallize the collective imagination of the past", although the preservation of documents is only one of the functions they perform. He notes that it was in the nineteenth century that libraries and archives first began to open their doors to the public and thus become recognisable in their modern form. Archives became increasingly important as sophisticated government bureaucracies evolved, a process that could be seen happening across Europe. We know this from our visit to the National Archives last November; it is, of course, the successor to the Public Record Office, founded in 1838 as the British Empire expanded and communication technologies improved.

Osterhammel identifies the library of the British Museum, founded in 1753 and the core of what is now the British Library, as the model for many other national libraries around the world. He credits Sir Anthony Panizzi's work in cataloguing as being especially influential and also cites the Museum's Reading Room as an important innovation. He argues that national libraries served as "the nation's memory for the respectable public and serious students" but acknowledges that they also existed to organise the more general collected knowledge of the time. Public libraries began to flourish around the end of the century with growing awareness of the benefits of a literate and educated population. In countries like Britain and the United States, funding these out of taxation became accepted practice. Osterhammel then argues that although there was a long tradition of academic and scholarly libraries in China, public libraries there did not begin to open until the early twentieth century, and the concept was one imported from Europe. He mentions the emergence of public libraries in Japan in the 1860s as being influenced by the journalist Fukuzawa Yukichi, who had travelled widely in Europe. Osterhammel ends this section by discussing how the later arrival of printing in Arabic-speaking countries meant there was no comparable proliferation of printed texts, and hence libraries did not become commonplace there until somewhat later.

Although this small section of the book gives readers only a brief introduction to the world of libraries, it is pleasing to see that Jürgen Osterhammel recognises their importance to our shared culture and their key place in the first era of globalisation.


"The Transformation of the World" by Jürgen Osterhammel: Part One


I have been meaning to read Jürgen Osterhammel's "The Transformation of the World: A Global History of the Nineteenth Century" since it was first published in an English version in 2014. At well over 900 pages (before you get to notes and references!) it is a bit of an intimidating prospect and was doubtless a substantial piece of work to translate. No wonder it did not appear here until five years after it did in Germany! However, I have been reading nineteenth-century French authors like Hugo, Balzac and Flaubert over the last couple of years and I thought I would benefit from a deeper understanding of the social, economic and political changes that form the backdrop to these novels. Balzac's La Comédie humaine in particular tries to provide a very broad overview of French society after the Revolution and attempts to investigate the lives of people in every social class (I have only read a few of its novels so far). A contemporary historian's perspective on this upheaval, I thought, would be valuable. Our visit to the National Archives with City last November also piqued my interest in the period: it was the glut of documents produced by the expansion of the British Empire that led to the foundation of the Public Record Office in 1838, and the need to organise and then provide access to these documents led to similar institutions being set up around the world. I also had a vague idea that national libraries must have largely evolved into their current form in this period, which I supposed was due to the growing output of the publishing industry and a more literate population, certainly in the more developed countries of the time. The rise of nationalism in parts of Europe and South America might have been a factor as well, with people becoming more aware of the importance of preserving their own languages even if they were not the official one of the state they resided in. It will be fascinating to see what Osterhammel makes of all this and I intend to write more as I work my way through the book…

The SNES Classic Mini & LIS

Nintendo's announcement of the Super Nintendo Classic Mini retro console today provided me with an opportunity to reflect on the changing nature of technology and the challenges we face in providing access to the videogames of yesteryear. At City, we have discussed how the role of information professionals is increasingly to organise and give access to a wide range of digital media, and the Super Nintendo Classic Mini aims to bring the games of the 1990s to a contemporary audience. The long-rumoured device is a miniature reproduction of Nintendo's second home console with various classic games built in, and it is expected to sell out almost immediately worldwide given how well its predecessor, the NES Classic Mini, did at retail.

[Image: Nintendo Classic Mini: Super Nintendo Entertainment System]

As many of you will know, the Super Nintendo Entertainment System (often abbreviated to the SNES) originally launched in Japan in 1990 and competed with the Sega Mega Drive as one of the first 16-bit machines. It was hugely successful at the time, particularly in Japan but also in the West. Millions of machines were sold during its lifespan and many of its best-selling titles, such as Super Mario World, Super Mario Kart and Star Fox, are now regarded as seminal by both fans and critics.

Many people are interested in playing SNES games via emulation on modern devices and Nintendo’s Virtual Console service (available on a number of their recent platforms but not yet the Switch) allows them to do just that. It is worth mentioning here that there are also a number of unofficial emulators available but their legality is somewhat dubious.

It is no surprise that the original physical SNES games are also highly sought after by collectors. Many original titles can be found on auction sites such as eBay and Amazon Marketplace, and there are a growing number of smaller mail order businesses that seek to serve the demand for retro games. However, the nature of the media presents technological obstacles. As was common in its era, SNES games were supplied on cartridges whose capacities were quoted in megabits rather than megabytes; even the largest retail releases held only around 6 MB (!). The ability to save your progress was provided by a small amount of memory inside the cartridge, kept alive by a battery. The gaming medium was developing quickly in the early 1990s and games like The Legend of Zelda: A Link to the Past were becoming far too long and complex to be completed in one sitting, so the save facility was absolutely essential. Unfortunately, these batteries only have a finite lifespan. My attempts to resurrect classic Nintendo games have ended in failure on a number of occasions as the batteries in the cartridges have died, erasing all my progress and removing the ability to save the game ever again. If anyone knows how to replace these batteries, please let me know… In all seriousness, however, this is a challenge that anyone attempting to archive these games would have to address.
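As an aside for anyone thinking about preservation: emulators typically store the contents of a cartridge's battery-backed memory as a raw binary file (commonly with an .srm extension). Below is a minimal sketch, in Python, of a heuristic an archivist might use to spot saves that a dead battery has already wiped; the file name is hypothetical and the all-0x00/all-0xFF check is only a rule of thumb, not a definitive test.

```python
from pathlib import Path

def check_save(path):
    """Heuristic check of a raw SRAM dump: memory wiped by a dead
    battery often reads back as all 0x00 or all 0xFF bytes."""
    data = Path(path).read_bytes()
    print(f"{path}: {len(data)} bytes")
    if not data or set(data) <= {0x00} or set(data) <= {0xFF}:
        print("  Looks blank - this save may already be lost.")
    else:
        print("  Contains varied data - probably an intact save.")

# Hypothetical file name, for illustration only:
check_save("zelda_a_link_to_the_past.srm")
```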

Another noteworthy feature of SNES cartridges was the occasional inclusion of an additional "Super FX" chip, which allowed a primarily 2D-oriented machine to render primitive 3D graphics. The science fiction themed Star Fox was the first game to make use of this chip in 1993. Although the game was developed internally by Nintendo, it relied on the Super FX chip produced by a company called Argonaut Games. Licensing disputes with Argonaut have prevented Star Fox from appearing on Nintendo's Virtual Console service to date. Copyright disputes are a wider problem for information professionals and are often an impediment to making important documents available. It is, then, exciting news that both the original and the previously unreleased sequel will be included on the Super Nintendo Classic Mini.

Disclaimer: I do not own rights to the above image and it is presented for informational purposes only

 

Digital Public Library of America: My First Impressions

Library Science students at City have been studying a module on Digital Libraries this semester. These emerging platforms, used to share library collections electronically, are playing an increasingly prominent role in the lives of both library users and librarians. I thought it would be useful to take a quick look at one very well-known example of a digital library, the Digital Public Library of America, to see what was on offer to both academic researchers and interested members of the public.

I first became aware of the project through John Palfrey's excellent book "BiblioTech: Why Libraries Matter More Than Ever in the Age of Google", which was published a couple of years ago. As the title would suggest, his central argument is that we need to stop thinking of libraries as collections of dusty old books, even if we have fond memories of the way they were in decades past, and instead embrace the possibilities they can offer for learning in our contemporary digital environment. John Palfrey is writing as something of an outsider, being a lawyer by trade. Nevertheless, he has been closely involved with libraries in his working life, managing the Harvard Law School Library and then serving as the principal of the prep school Phillips Academy. He is alarmed by the possibility of for-profit tech companies such as Google encroaching on the role public libraries have in disseminating information. The commercial success of the Google search box has arguably changed library users' expectations in a way that does not always allow the most accurate or relevant match to a query to be easily found.

Given that Palfrey is one of the founding members of the DPLA initiative, we would expect to see some of his concerns addressed in the way it has been implemented. The home page gives us some indication of the vast scope of the collection: we are told we can "explore 15,384,819 items from libraries, archives, and museums"! This is both enticing and overwhelming! The Tagxedo word cloud below gives some idea of the contents of the DPLA homepage.

[Image: Tagxedo word cloud of the DPLA homepage]

What is interesting about the emphasis on libraries, archives and museums is that it points towards these formerly separate entities converging, as digitised images of documents and objects can increasingly be stored and accessed in the same way. The DPLA is emulating the likes of Google in bringing together documents from a whole variety of sources in a convenient way that is free at the point of use. The front page also emphasises how users can search for documents relating to places and events that have been in the news recently, something Google also does effectively. We are invited to search for documents relating to Clemson, South Carolina, for instance, as American users would likely be aware that its university's football team won the College Football Playoff in January. Exploring by date gives us the option of looking at documents pertaining to 1949, when it looks like some famous horse race took place (the Kentucky Derby possibly?). The Twitter feed widget in the bottom right-hand corner explains that we can search for material from the Harriet Tubman Museum, which again shows that the site is responding to current events (February being designated Black History Month in the U.S.A.).

The site seems very much to cater to a domestic audience, which is a little unusual for a major web platform today. The overall look and feel of the site suggest to me that its target users are adults who are relatively technologically savvy and well educated (the type of people we might stereotypically assume to be local history enthusiasts?). While there is a menu for educational resources, it does not really seem to be seeking to engage children and young people. This is perhaps surprising given John Palfrey's position as a school principal. Equally visible is a menu giving information for prospective software developers. The whole intention of the DPLA is that it becomes a platform that new applications can be built upon, and perhaps it is thought that these will make the collection more accessible to a wider audience.
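That developer menu points to the DPLA's public API, which is worth a quick look. Below is a minimal sketch, assuming the v2 items endpoint and the response fields (count, docs, sourceResource) as I understand them; you would need to register for your own API key, and the exact field names should be verified against the current developer documentation.

```python
import requests

API_KEY = "YOUR_DPLA_API_KEY"  # obtainable via the DPLA developer pages

def search_dpla(query, page_size=5):
    """Query the DPLA items endpoint and print basic details."""
    response = requests.get(
        "https://api.dp.la/v2/items",
        params={"q": query, "page_size": page_size, "api_key": API_KEY},
        timeout=30,
    )
    response.raise_for_status()
    results = response.json()
    print(f"{results.get('count', 0)} items match '{query}'")
    for doc in results.get("docs", []):
        # Field names assumed from the v2 documentation
        print("-", doc.get("sourceResource", {}).get("title"))

search_dpla("Clemson South Carolina")
```

This is exactly the kind of building block the DPLA presumably hopes developers will use to bring the collection to wider audiences.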

Word Clouds & Illuminated Manuscripts

A few years back, my father and I went to see the exhibition "Royal Manuscripts: The Genius of Illumination" at the British Library. The books on display were those collected by English monarchs during the Middle Ages and the idea was that they were of both historical and artistic value. An illuminated manuscript is one in which small illustrations were added to the text, often using gold leaf. The great expense of doing this, as well as the time required, meant that these books were precious artefacts even in their own time and owning them became symbolic of power and authority. No wonder kings and queens were keen to acquire them! The exhibition was one of the most memorable I have ever seen, as the intricate detailing of the lettering was truly breathtaking. The contrast between the dark room in which the books were displayed and the shining gold of the illustrations was really striking. Most of the works in the collection were religious texts: missals, psalters or editions of the gospels. This was obviously indicative of the centrality of Christianity to life in medieval England, and most of the books were intended for practical use. The themes of the images can also tell us something about the priorities of the time.

Paying what would have been huge amounts of money for the time to have one of these books produced was a show of piety on the part of the owner as well as power. Even so, it was not conventional to commission the artists to portray oneself in the illustrations. However, one book owned by Henry VIII did just that, and I remember the label on the cabinet explaining that it would have been usual for the Biblical King David to be depicted at that point of the text (I think it was one of the books of the Old Testament). This was a typical display of egotism by Henry and tells us a lot about how he wanted to be perceived by others.

In contemporary society, we can use visualisation tools such as word clouds to see which terms occur most frequently in a body of text. They are particularly widely used to look at metadata on the web, and one of their main advantages is that popular services such as Tagxedo update in real time to reflect the way a site's content is constantly changing. Like illuminated manuscripts, they are intended to be aesthetically appealing as well as useful. I might be extrapolating slightly but I do think there are some similarities! There are plenty of options to play around with colours and fonts here! Let's go back to the exhibition I was talking about originally. I am going to use the Wordle application to create a word cloud from some text taken from the British Library's medieval manuscripts blog about "Royal Manuscripts: The Genius of Illumination". By seeing which words are most commonly used, we can see what the priorities of the exhibition may have been.

[Image: Wordle word cloud of text from the British Library blog]

We can see that the words "royal", "manuscripts", "collection" and "English" occur most often. This does give us an accurate impression of what the exhibition had to offer, even though this particular word cloud doesn't really offer us any surprises. Perhaps applications such as Wordle are better suited to much larger bodies of text, where the most commonly used terms would be less obvious. Are there really many similarities with the illuminated manuscripts I discussed earlier? Well, not really. Beyond them both being nice to look at, it is difficult to compare a 21st-century web application with a hand-painted book from the Middle Ages, even though both are trying to draw our eyes to words and letters in a body of text.
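Under the hood, tools like Wordle and Tagxedo are essentially doing word-frequency counting before they get to the pretty colours and fonts. Here is a minimal sketch of that counting step in Python; the stop-word list is my own ad hoc selection rather than whatever Wordle actually uses.

```python
import re
from collections import Counter

# Ad hoc stop words; real tools use much longer curated lists
STOP_WORDS = {"the", "of", "and", "a", "an", "in", "to", "is", "was",
              "for", "on", "by", "with", "at", "this", "that", "it", "as"}

def word_frequencies(text, top_n=10):
    """Tokenise the text, drop common stop words, and count the rest -
    these counts are the sizing data a word cloud is drawn from."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return counts.most_common(top_n)

sample = ("Royal manuscripts from the English royal collection: "
          "illuminated manuscripts collected by English monarchs.")
for word, count in word_frequencies(sample):
    print(f"{word}: {count}")
```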

 

Our Visit to the National Archives


Photo of the National Archives, taken from Wikimedia Commons

On Wednesday 23rd November, MSc Library Science students from City were offered the chance to visit the National Archives in Kew (many thanks to David Haynes for organising everything for us). I just wanted to take this opportunity to share a few thoughts and observations on the day’s activities…

Once we had all arrived and been ushered into the Training Room, we were greeted by Val Johnson and the programme began in earnest. Firstly, we heard from Chris Day about the role riots and public disturbances had played in the history of the Archives. Frustration at the voting restrictions in 19th century England (where only landowners had the vote and so-called rotten boroughs were disproportionately represented in the House of Commons) had led to a wave of popular protest, culminating in events such as the Peterloo Massacre in 1819. The reaction of the authorities was to quite literally read the Riot Act to the protesters (!) and this naturally led to a glut of paperwork which then needed a home. This certainly contributed to the founding of the Public Record Office in 1838, which was originally in Chancery Lane. The talk was a good illustration of how political events can have long-lasting and often unforeseen consequences. Chris's section was followed by Howard Davies, who gave us a brief overview of the history of the Archives that was both informative and entertaining. We were then split into two groups for the next lot of activities.

My group was first given a tour of the public exhibition gallery which had some interesting sights to take in. There was a display on the Cambridge Spies which looked intriguing (unfortunately time was a bit limited) and also items connected with William Shakespeare (it being the 400th anniversary of his death in 2016). There were also some eye-catching visual installations that were linked to the Somme centenary commemorations. I thought the one of the machine-gunner was particularly memorable, though a bit frightening…

After an introduction to the cataloguing system used by the Archives (I remember David Haynes asking some pertinent questions about metadata) and a short tea break, my group was shown around the repositories by Document Services. Needless to say, there are a huge number of documents at the Kew site and a vast amount of space is needed to hold them (so much so that many of the lesser-requested items are now held in a former salt mine in Cheshire), although they do use mobile shelving to cram in as much as possible. It was interesting to see the employees at work and impressive to find out that they aim to supply a document requested by a reader in under an hour. Understandably, there is a climate-control system in place at the repositories and, although there are alarms, it is usually the staff who report it if something seems wrong. It was also fascinating to see the variety of documents held, such as an 18th-century map of the British invading one of the French colonies in the West Indies (our guide explained that you can tell it was a British map because it shows the French running away!).

Once we returned to the Training Room, we were given a useful introduction by Tom Storrar to Discovery, the Archives' online database, which is free to use and contains documents such as service records from the First World War. After that, Diana Newton gave a brief talk about the challenges of archiving all the web pages produced by the various departments of HM Government (she estimated there were around 800 URLs currently in use!). It can take a web crawler weeks to go through them all; the sketch below gives a flavour of why. And then it was time to head home, so it just remains for me to thank everyone at the Archives for putting together such a diverse programme for us.
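To return briefly to Diana Newton's point about web crawling, here is a minimal sketch of a polite breadth-first crawler. The seed URL is purely illustrative, and a real archival crawler would also honour robots.txt, handle many more edge cases and parse HTML properly rather than with a regular expression.

```python
import re
import time
from collections import deque
from urllib.parse import urljoin
from urllib.request import urlopen

def crawl(seed, max_pages=50, delay=1.0):
    """Breadth-first fetch of pages reachable from a seed URL. The
    politeness delay between requests, multiplied over hundreds of
    sites, is a large part of why big crawls take weeks."""
    seen, queue = {seed}, deque([seed])
    while queue and len(seen) <= max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "ignore")
        except Exception:
            continue  # skip pages that fail to load
        for href in re.findall(r'href="([^"#]+)"', html):
            link = urljoin(url, href)
            if link.startswith(seed) and link not in seen:
                seen.add(link)
                queue.append(link)
        time.sleep(delay)  # be polite to the server
    return seen

# Illustrative seed URL only:
print(f"Discovered {len(crawl('https://www.example.org/'))} pages")
```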

 

Colin Kaepernick, the National Anthem Protest & Twitter

In last week's session, we looked at the various forms of web services and considered the roles they play in our lives. Increasingly, we rely on cloud computing to store and retrieve our documents; manufacturers have increasingly done away with CD/DVD drives and instead only include USB ports on newer machines. Our data is held on remote servers, with the advantage that it can be accessed from any device. We then went on to talk about social media platforms such as Twitter, one such web service, which generates huge amounts of data about its users. The practical exercise that followed involved us seeing what information could be gleaned from this data and the ethical questions that this may pose.

One important concept that was introduced was that of the API or Application Programming Interface. APIs allow third party applications to acquire data from these services for their own purposes. This may allow the makers of these programs to collect valuable information about the demographics of people using the platforms or their political preferences. Martin Hawksey has developed an app called TAGS which allows users to collect a whole series of tweets with the same hashtag on one Google Spreadsheet. This would allow us to see what type of people were talking about an issue and build up a picture of what they were saying about it. A controversial topic with a suitable hashtag could yield fascinating results, although Twitter users would probably be unaware they were contributing to this bit of research.

[Image: Colin Kaepernick and Kyle Williams warming up]

One potential case study here would be a high-profile recent controversy in American Football (and one that is highly relevant with the presidential election in the background). San Francisco 49ers quarterback Colin Kaepernick's decision to protest against what he sees as the ongoing oppression of African-Americans, by kneeling during the playing of the U.S. national anthem that precedes NFL games, caused a furore on Twitter and spawned numerous trending topics. While receiving death threats for his "unpatriotic" stance, he has also become a hero to many, having the highest-selling jersey on nflshop.com during the month of September. Although he was the starter during the 49ers' run to Super Bowl XLVII in 2013, Kaepernick entered the 2016 season backing up the often-derided Blaine Gabbert on a team widely expected to struggle. He took a very real risk, in my view, by protesting because it was uncertain at that point whether he would be able to continue his professional career, at least in the NFL.

By putting the term #ColinKaepernick into an app like TAGS, we could see some of the responses to the protest on Twitter and who they emanated from. By looking at the other hashtags these users employed, we could build up a fairly detailed picture of their cultural background and political beliefs. We would expect many of those sympathetic to be other African-Americans, especially as recent research suggests they are both more likely to use Twitter than other groups and more likely to tweet frequently. We could see whether these people had used terms like #BlackLivesMatter or #Ferguson. We might also anticipate that those most hostile to Kaepernick would be political conservatives using hashtags such as #MakeAmericaGreatAgain that are associated with support for Donald Trump. The point is that none of these users has specifically given their consent to their data being used in this way (other than agreeing to Twitter's Terms of Service) and they may not be comfortable with such assumptions being made about their beliefs. Making sure this data is used responsibly is something we information professionals can contribute to.
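To make this concrete: a TAGS archive lives in a Google Sheet that can be exported as a CSV file. Below is a minimal sketch of counting the hashtags that co-occur with the one we searched for; the file name is hypothetical and I am assuming a column named "text" holding each tweet, so check the headers of your own export before relying on it.

```python
import csv
import re
from collections import Counter

def cooccurring_hashtags(csv_path, searched="#colinkaepernick", top_n=10):
    """Count hashtags appearing alongside the one we collected on,
    as a rough proxy for the interests and politics of the tweeters."""
    counts = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            tags = re.findall(r"#\w+", row.get("text", "").lower())
            counts.update(tag for tag in tags if tag != searched)
    return counts.most_common(top_n)

# Hypothetical file exported from a TAGS Google Sheet:
for tag, n in cooccurring_hashtags("kaepernick_tags_export.csv"):
    print(f"{tag}: {n}")
```

Seeing #BlackLivesMatter or #MakeAmericaGreatAgain rise to the top of such a count is precisely the kind of inference these users never explicitly consented to, which is rather the point.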

Margaret Egan, Jesse Shera & Social Epistemology

[Image: Adelbert Hall, Case Western Reserve University]

Last time in our "Foundation" class, David Bawden spoke to us about epistemology, the philosophical study of knowledge and its scope. The relevance of this to LIS is obvious, given that our focus is on enabling understanding through the use of data and information. David followed this by introducing some of the various attempts over the years to apply a philosophical framework to our discipline. This has been a difficult thing to achieve, as there has been no agreement within the profession on whether such a theoretical structure is even needed at all. People involved in Library and Information Science have generally taken pre-existing philosophical ideas from other contexts and translated them into ours, a practice UCL academic Brian Campbell Vickery was particularly critical of. This has started to change in recent years with the work of Luciano Floridi, now at the Oxford Internet Institute, who has created his own philosophical ecosystem for the study of information and sees librarians and information scientists as working at the "applied end" of it. While his ideas have become increasingly influential, there are certainly competing ones. Today I am going to look briefly at the thought of Margaret Egan and Jesse Shera, who coined the term "social epistemology" to make sense of what librarians did and were trying to achieve.

Margaret Egan and Jesse Shera worked together at Case Western Reserve University in Cleveland, Ohio during the 1950s and this is when they developed the theory. While Shera was by far the more high-profile figure, in part because of Egan's early death from a heart attack, he did credit her for her work on this. The basic premise of social epistemology is that knowledge is not an isolated phenomenon but is interwoven with a given societal context. This was in opposition to the traditional view of knowledge as "justified true belief" held by an individual person independently of others. Egan and Shera suggest that societies at different times and places create paradigms of knowledge, and libraries and librarians operate within these, working to meet the needs of their users. This idea has similarities to the concept of domain analysis that Birger Hjørland and Hanne Albrechtsen wrote about. It seems to me no coincidence that this more humanistic approach to library science began to develop at a time of great social change in the U.S.A., when the Civil Rights Movement saw African-Americans campaign against racial discrimination and the emergence of feminism was beginning to challenge conventional ideas about gender. There were of course great technological advances being made in this period which would impact on the world of libraries, so it makes sense that people like Egan and Shera would see knowledge not as immutable but as something that would evolve along with society.

Photo is of Adelbert Hall at Case Western Reserve University where Margaret Egan and Jesse Shera worked. Taken from Wikimedia Commons.

How Google Became Generic (Not Quite Like Coca-Cola)

Interbrand.com compiles an annual "Best Global Brands" top 100 and it always makes for fascinating reading, especially this year now that I have started studying Information Science at university. The site's editors use various criteria to judge how these brands are performing, such as publicly available information on their financial performance, their prospects for growth and their visibility across major markets. Many familiar names are represented near the top of the rankings, such as carmakers Toyota, Mercedes-Benz and BMW, and technology giants Apple, IBM and Samsung. E-commerce titans Amazon are 2016's fastest-rising entry. No matter where in the world you live, it is likely that these companies' products and/or services are readily available and heavily advertised across various channels. All this is very interesting, but what I really want to focus on here are two particularly big yet very different corporations, namely Coca-Cola and Google.

Last year I read Mark Pendergrast's hugely enjoyable history of the Coca-Cola Company, "For God, Country & Coca-Cola". As one of their most loyal customers, I was both informed and entertained by the story of how Atlanta chemist John Pemberton's eccentric health tonic (originally containing actual cocaine, though the Company has subsequently denied this) was transformed into the market-leading soft drink by Asa Candler and became one of the pioneers of today's global economy, with Coke becoming available virtually anywhere in the world.

[Image: 1930s Coca-Cola neon sign]

This remains true today: even as the Coca-Cola Company has diversified into other areas, the original brown fizzy drink is still ubiquitous anywhere you go. The very name "Coca-Cola" has become synonymous with its most notable product almost everywhere, much to rival Pepsi's chagrin. The annual Christmas adverts featuring Santa's sleigh delivering the famous beverage are even believed to have "standardised" the appearance of Santa's outfit in people's minds to the Company's red and white colours! In 2016's Interbrand rankings, Coca-Cola is third, behind a much newer company that has much more relevance to librarians and information professionals.

Everyone seems to use Google as their search engine when they search the web, to such an extent that its very name has become a generic term. I was actually surprised to learn that Google's global market share was only 64% in 2015, as nobody ever mentions "Binging" something. This suggests that they have not actually succeeded in creating an internet search monopoly, even though public perception may suggest otherwise.

Despite the recent launch of the Pixel smartphone and increasingly unavoidable advertising of their Chrome browser, Google has a radically different business model not only from Coca-Cola but also from the vast majority of other brands on the Interbrand list. Very few of Google's users have actually paid any money to use its services, let alone made a conscious decision to choose them over a competitor. Anyone who buys a Mac will have chosen it in preference to a PC and anyone who drives a Ford will have preferred it to, say, a Toyota. Yet people seem to opt for Google without considering any alternatives. This creates a potential problem for information specialists (and indeed for anyone with a stake in markets operating fairly).

In last week’s class with David Bawden, we had a first look at how relational database management systems work and did a practical exercise that involved groups of us searching for journal references on MedlinePlus and Web of Science. While these platforms have sophisticated interfaces with advanced search and command line functions, David suggested that most users in his experience prefer a much simpler “search box” system. This does have the advantage of convenience and familiarity. These are obviously two of Google’s main selling points as a search engine. Our role, however, is to help pinpoint the most relevant and useful data we can for our users and so we need to think critically about how we go about doing so, even in a world where Google and its approach are dominant. If one company can have too much influence on the way people search for information, we should all be worried.
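To make the contrast concrete, here is a minimal sketch using Python's built-in sqlite3 module, with an invented schema and records (this is not the actual class exercise). The fielded query is precise but demands that the user understand the record structure; the "search box" style query is convenient but matches loosely.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE refs
                (title TEXT, author TEXT, journal TEXT, year INTEGER)""")
conn.executemany("INSERT INTO refs VALUES (?, ?, ?, ?)", [
    ("Aspirin and heart disease", "Smith, J.", "The Lancet", 2014),
    ("Heart disease in smokers", "Jones, A.", "BMJ", 2016),
    ("A history of aspirin", "Smith, P.", "Medical History", 2010),
])

# Fielded query: precise, but the user must know the record structure
fielded = conn.execute(
    "SELECT title FROM refs WHERE author LIKE 'Smith%' AND year >= 2012"
).fetchall()

# "Search box" style: one term matched loosely against several fields
term = "%heart%"
loose = conn.execute(
    "SELECT title FROM refs WHERE title LIKE ? OR journal LIKE ?",
    (term, term),
).fetchall()

print("Fielded:", fielded)  # only the 2014 Smith paper
print("Loose:  ", loose)    # both 'heart' papers, whoever wrote them
```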