Licensing – Why it is so important

This blog post was originally written for FutureEverything as part of their Open Data Cities programme.

I’m no expert but I really need to be – Licensing

Licensing is a subject that comes up a lot with Open Data. The licence is a key component of a dataset: it defines permitted use and liability, and it shapes what innovation will come from the data's release.

As mentioned in the title, I am no expert in this area and I would appreciate any corrections or amendments to my understanding.

Traditionally, public data has been closed, so the only way to get access to data to build products was to buy a licence to use it. In many cases these licences were expensive and restrictive. To mitigate this cost, the licence would often have some level of service agreement built in: you paid for the licence to the data, and the data provider would give you a level of continuity and support. This helped to limit risk and encourage investment in a product.

The closed 'paid licence' system generally has a high barrier to entry – the price of the licence – limiting the number of innovative products developed. If innovation ecosystems thrive on many ideas being tried, with most failing, then a price of failure that is too high could have a chilling effect on the whole system.

One of the first licences used for the release of Open Data was Creative Commons CC-BY-SA. This licence allowed people to create services and products off the back of the data as long as they attributed where the data came from and shared back any data created from the originally released dataset (value-added data). The original Creative Commons licences were devised as an answer to restrictive copyright laws relating to 'works' – articles, text, images, music etc. – as these were deemed increasingly anachronistic in the digital age. It is up for discussion whether data can be deemed a 'work' in the context of this licence.

The Open Database Licence (ODbL), developed by Open Data Commons, was created to address the doubt over whether data can be seen as a 'work'. It carries the same attribution and share-alike clauses and is used by many datastores, including the newly opened Paris Datastore.
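For these clauses to be actionable, the licence needs to travel with the data. As a minimal sketch of one way a publisher might do this – the dataset name, descriptor fields and file paths here are illustrative assumptions, not a prescribed standard – a machine-readable descriptor can record the licence, the attribution text consumers should display, and whether share-alike applies:

```python
import json

# Minimal sketch: declare a dataset's licence in machine-readable form.
# The dataset name, fields and file paths are hypothetical examples.
descriptor = {
    "name": "bus-stop-locations",  # hypothetical dataset
    "licence": {
        "id": "ODbL-1.0",
        "title": "Open Database Licence",
        "url": "https://opendatacommons.org/licenses/odbl/1-0/",
    },
    # Attribution clause: text that downstream products should display.
    "attribution": "Contains data from Example City Council (ODbL 1.0)",
    # Share-alike clause: derived (value-added) datasets must be
    # released under the same terms.
    "share_alike": True,
    "resources": [{"name": "stops", "path": "data/stops.csv"}],
}

# Publishing this file alongside the data lets tools discover the
# licence and its obligations automatically.
with open("dataset-licence.json", "w") as f:
    json.dump(descriptor, f, indent=2)
```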

Anyone can develop products and services that use datasets with these licences, but intellectual property doesn't extend to the value-added datasets created in the process of developing those products. Releasing value-added datasets back to the community allows further innovative products to be built off the back of them, so the pace of innovation could potentially increase – it is analogous to the 'standing on the shoulders of giants' idea.

Conversely, requiring that value-added data be shared back for use by other organisations might chill the development of the very products that create it.

With the above licences there is generally no liability or guarantee of service from data providers, which creates greater risk. If you were investing in product development this is a potential source of concern and may be an inhibiting factor.

In the UK we have the recently released Open Government Licence, which was developed specifically for government data. It borrows aspects of the CC-BY-SA licence and the ODbL. Unlike those licences, there is no requirement to share back value-added data.

Would this have any impact on the products and services developed from Open Data? Again, under this licence there is no liability or guarantee of service from the data provider, but the developing organisation keeps all the rights to the products and services it develops – including value-added datasets.
The advantage could be that allowing people to keep the rights to the products they develop mitigates the exposed risk posed by the lack of liability and guarantee. The main disadvantage could be that the pace of innovation is curtailed, because people have to replicate processes and value-added datasets.

Why Open Data?

Back in May 2009, after the final presentations at Futuresonic 09, I sat down with Adam Greenfield and we talked about how cities evolve and grow, and how they develop inequalities between those who have access to information and those who don't. This, coupled with an individual's ability to act on that information in a meaningful way, begged the question: if all information and data were open and available, how would a city evolve? Would it grow with the same asymmetries? As Adam suggested in his Futuresonic presentation, is this inequality a preconfigured state?

At the time there were few cities that had embarked down the route of fully opening up their datasets, although some cities in North America had started a process that would eventually, as in the case of Vancouver, lead to an adoption of open source, open standards and open data principles.

It was through seeing this emergence of open systems that the Open Data City project began to evolve. Data is the lifeblood of our modern technologised society: it tracks, evidences and creates mechanisms for decisions. Much of this data doesn't exist outside the confines of City Hall, yet we see evidence of its impact every day. Speed humps suddenly appear on your road, or your bus doesn't turn up when you thought it would. Bins get emptied only every two weeks, or your local school closes down. These are the physical manifestations of publicly held data that few have access to.

The inability to connect actions taken by a public body with the evidence on which its decisions are based can have an insidious and corrosive effect on the relationship between the citizenry and government. Just as Louis Brandeis said 'Sunlight is the best disinfectant' with regard to transparency and corruption, the opposite is also true. In a closed system, even though decisions might be taken with the most honourable of intentions, the lack of evidence for a decision creates doubt, rumour and misrepresentation. In a closed system the power of the media increases as distrust of the political sphere grows. The media becomes the interlocutor, which can interfere with the relationship between citizen and government. This all presumes that those who govern have nothing to hide. The lack of transparency in government creates the opportunity for the media to expose the bad apples using a system of clandestine briefings and investigative reporting. This process of exposé undermines the trust the public has in the system of government, because there is no evidence to the contrary, or because the evidence people can see appears to derive from an arbitrary decision-making process.

The opportunity has arisen for public bodies to create a new relationship with the people they serve. A more transparent and open system can lead to a more equitable environment, where the citizen is not a customer or passive consumer of services and information, but an engaged citizen able to make decisions based upon facts, not rumour, and to hold to account public servants with less than honest intentions.

The Sunlight Foundation (www.sunlightfoundation.com), named after the Louis Brandeis quote, is an American lobby group advocating transparency in government. It has produced a graphic it calls the Cycle of Transparency, which aptly illustrates the benefits of transparency in government. Each element of the cycle moves forward concurrently, bringing about the changes needed to create a more transparent government whilst identifying new needs.

The Cycle of Transparency highlights the use of technology to make information open and accessible. It can be argued that transparency and openness have been enabled by digital technology. People are now able to access, interpret and distribute information easily. Until quite recently, the channels for making information open and accessible were limited and, to a certain extent, controlled.

The landscape is changing. The opening up of data will have a seismic effect on the way we access and share information. New services will be created as citizens and institutions demand the ability to interpret and navigate data in the way they want. It will create a more efficient data environment, where information is shared rather than duplicated, and it will highlight errors in the system, with anomalies being addressed rather than hidden.