Provisional programme for 2017

From last night’s planning meeting we now have a provisional programme for 2017, and it is quite an ambitious one. What is great from our perspective is that a number of themes we have been looking at over the last year continue, and some perennial ones resurface. Highlights include the ‘making and doing’ workshops developed as part of the Echo Chambers and ‘Post-Fact’ Politics programme, and the Visualising Data workshops. There are a number of sector-specific and technical events, but one to watch out for is ‘alternative ways of looking at the world’, which will be a day of walks, talks and explorations. As always there is a large dose of how data and technology impact on society, and much more.

This is a provisional programme and we are looking for as much input as possible (dates and sessions are subject to change). Please click on the Google Doc and add comments. We are looking for people who can contribute, as well as sponsors, venues and partners.

Link to Google Doc

Making data useful and other stories – How GM authorities are using data to help their citizens

6.30pm – 8.30pm, Tuesday 27th September 2016
Greenheys Business Centre
Manchester Science Park
Pencroft Way
Manchester M15 6JJ

Map here

Sign up on Eventbrite here

This month’s Open Data Manchester looks at how two local authorities are using data to deliver services.

Alison McKenzie-Folan and Alison Hughes from Wigan Council will show how they are using data and open data to help them engage the community, target resources and enhance services. The Wigan Deal has been seen as an exemplar of engagement between the public sector, local businesses and the community.

Jamie Whyte leads the Trafford Innovation Lab, which has been developing new and innovative ways to make open data understandable. The insight created has enabled community groups to use data to help them apply for funding, created resources for councillors and shone a spotlight onto the complex world of school admissions.

Open Data Manchester events are spaces for learning, discussion and collaboration. The events are open and free.

Open Data Cooperation – Building a data cooperative

Last year Open Data Manchester held two workshops, one in Berlin and the other in Manchester, to explore whether cooperative structures could enable the creation of open data and personal data stores for mutual benefit. The idea of the mutual came out of an ongoing conversation between people within the cooperative movement and the open data world about the role of cooperatives, and the possibility that they could rebalance what many perceive as an asymmetric relationship between data subjects (people with personal data) and data users (people who use data to develop services and products).

[slideshare id=47070639&doc=opendatacooperationv2-150416081428-conversion-gate02]

Background

Our modern technologised societies exist on data. Mostly these data are invisible and unknown to us. The services that we interact with, the daily transactions that we make and the way we navigate through our everyday lives all generate data, building a picture of who we are and what we do. In the age of the Quantified Self there is a growing trend for self-monitoring, allowing us to track what we do and how we feel when we do it. These data are valuable. Aggregated, they enable organisations to predict, personalise and intervene seamlessly and sometimes invisibly. Even for the most technically literate, keeping track of what we do and don’t give away is daunting. Personal Information Management Services (PIMS) are starting to emerge, offering people the chance to stem the unbridled exploitation of personal data by both public and private organisations whilst also creating monetary rewards for their users. Many of these commercial organisations seek to act as a brokerage service for personal data. Data cooperatives that act as PIMS have the potential to empower individuals to have more control over their data, creating value for themselves and their communities, and to give people more of a say in the services that are built.

The sensational revelations by Edward Snowden shone a spotlight on the personal data that is collected through the IT software and hardware infrastructure that we rely on today. Although they highlighted that we unintentionally give away a lot, they have perhaps not built a wider popular discussion around the protection and usage of personal data. It is inevitable that as awareness of the data we produce rises, there will be a demand for services that give people more control. PIMS offer to deliver monetary value to users, but how much value is up for debate, as there are differing methodologies to quantify it [OECD 2013]. Value can also be context dependent: data about someone exhibiting behaviours that might indicate a large purchase might be deemed more valuable by companies that manufacture or sell that item.

Data cooperatives are starting to emerge that have a broader social and ethical outlook than the simple monetary transaction. The Good Data, which allows people to control data flow at browser level with the benefits going to social causes, and the Swiss-based Health Bank, where personal health data is aggregated for the advancement of medicine, are examples of this. As the principles of data custodianship for social good become understood, there is an opportunity for more to emerge.

Data cooperatives can represent the interests of data subjects and data users

Cooperatives come in many flavours, traditionally growing out of the needs of the membership who subscribe to them. These cooperatives have generally been organised around a single class of member: workers, producers, consumers, etc. The single-class structure, although creating an equitable environment for members, can tend towards self-interest, and even though such cooperatives may be bound by the notion of common good, the mechanism for the creation of that common good, or commons, is seldom explicit.

Internationally, new forms of cooperative that explicitly express the development of common good across multiple classes of stakeholder are becoming more abundant. Social co-ops in Italy and solidarity co-ops in Canada often provide services such as health and social care and education, as well as community infrastructure projects.

The ability to have multiple classes of stakeholder within a data cooperative has the potential to create a more equitable environment for both data users and data subjects to exchange data. The influence of the different classes within the organisation could be managed by a fair distribution of voting rights, with a data user such as a research organisation having the same voting rights as a data subject.
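As a sketch of how such a distribution might work (the class names and vote shares below are hypothetical illustrations, not a scheme proposed at the workshops), one option is to give each stakeholder class a fixed share of the total vote, split equally among that class’s members:

```python
from collections import Counter

def per_member_weight(members, class_shares):
    """Given (member_id, class) pairs and a target share of the total vote
    for each class, return the voting weight of a single member of each
    class: the class's share divided equally among its members."""
    counts = Counter(cls for _, cls in members)
    return {cls: class_shares[cls] / counts[cls] for cls in counts}

members = [("alice", "data_subject"), ("bob", "data_subject"),
           ("carol", "data_subject"), ("uni_lab", "data_user")]
# Data subjects collectively hold 75% of the vote, data users 25%.
weights = per_member_weight(members, {"data_subject": 0.75, "data_user": 0.25})
```

Under these assumed shares a lone research organisation ends up with the same individual weight as any one data subject, while data subjects as a class retain the majority.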

Michel Bauwens, founder of the P2P Foundation, talks about the creation of these new forms of cooperative, and how they can build a commons, both material and immaterial. This commons would be subscribed to by other commons-creating entities and licensed to non-commons-creating organisations. This suggests that a federated relationship could exist between such organisations, with the commons shared between them. The challenge would be how to define the exchange within this system, and, if a cooperative contained both producers and users, how this would affect the production of the commons.

Would a data cooperative necessarily adopt these newer forms of distributed and commons-creating structure? There appears to be a consensus that commons-creating, multi-stakeholder cooperatives are positive, but they come with increased complexity. Can individual circumstances, especially when dealing with communities based around sensitive issues, create an environment for sharing beyond a single class of stakeholder? A single-class cooperative may be a simpler, more immediate solution for a community of people who have specific needs and issues, and where strong trust relationships need to be maintained.

The scale of data cooperatives

Data cooperatives have the potential to work at scale, generating and trading in bulk, high-worth data, as well as forming around smaller communities of interest, such as a particular health issue, to draw down or negotiate for better services.

Creating the critical mass of data subjects that would allow a data cooperative to operate at scale would be challenging. Marcos Menendez from The Good Data estimates that a PIMS such as theirs would need a minimum base of around 500,000 data subjects to be viable. There is potential for data cooperatives to partner with organisations or charities with a similar ethical outlook to build their data subject base.

It may be easier to form cooperatives around single issues, as the community the cooperative seeks to represent will already be in existence. The value of such an organisation might be that it can help create a more informed decision-making process, with the views of the data subjects being represented. Within a multi-stakeholder model the service provider, such as a local authority or other public sector organisation, might also be part of the data cooperative.

Making the purpose of the data cooperative understandable is key. Although single-issue cooperatives are relatively simple to understand, the representation of data at scale may be challenging. Data cooperatives could act as a platform that builds consent and allows the representation of personal data across a broader portfolio of interests.

Building trust and consent within the data cooperative

Trust and consent should be the foundations on which PIMS are built, and data cooperatives have the potential to create both. Mutuality offers an opportunity, especially within a multi-stakeholder model, to represent the interests of all stakeholders, from individual data subjects to data users, creating an environment of mutual understanding and trust. Enhanced trust between individual data subjects and data users could enable better data and context to be created by data subjects. Understanding the ways that the data is being used, and trusting that the data user understands the needs and concerns of the data subjects, could create a more enlightened and responsive relationship. Even without data users being part of the organisation, the data cooperative would be able to take on the role of trusted representative, which in turn could create consent.

Informed consent across all data subjects in a cooperative could be challenging. It would be easy for a data organisation to empower those that already have knowledge and agency to maximise their data, but the data cooperative should have an interest in empowering everyone.

Increasing data literacy amongst members

Raising the level of data awareness amongst cooperative members would create more informed decision making, but this task would need to be delivered in a sympathetic and nuanced way. Ultimately some people may not engage, because of service dependency, lack of choice, or a perception that it isn’t relevant or useful to engage.

For a data cooperative to represent its membership and control the flow of data it needs to have legitimacy, know and understand the data assets of the membership, and have the authority to negotiate with those data assets on the members’ behalf.

Decisions around data sharing, and understanding their potential consequences, are difficult and complex. As an intermediary the cooperative would need to ensure that individual members were able to give informed consent. Data literacy goes some way to achieving this, but mechanisms also need to be created that allow people to have agency over the way that their data is used.

Creating consent

Can one organisation be representative of the broad range of ethical positions held within a membership? For practical reasons the data cooperative might have a high-level ethical policy, but individuals within the cooperative may make data sharing choices based on their personal ethical standpoint. This could be enabled by proxy or preset data sharing preferences. The alternative could be to have smaller, federated or distributed niche organisations that have specific restrictions on data reuse.

There exist many mechanisms for the creation of consent. These by and large create the environment for proxy voting in decision-making processes. One such mechanism is Liquid Feedback, popularised by the Pirate Party, where an individual bestows voting rights on a proxy who aligns with their position, the ‘liquid’ element allowing those proxy rights to be revoked at any point. Other mechanisms might follow along the lines of the Platform for Privacy Preferences (P3P) initiative developed by the W3C, which sought to create privacy policies that could be understood by browsers but was ultimately considered too difficult to implement. A potentially easier solution might work on the basis of preset preferences based on trusted individuals, or the creation of archetype- or persona-based preferences that people can select.
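A minimal sketch of the ‘liquid’ delegation idea behind systems like Liquid Feedback (an illustration of the general mechanism, not Liquid Feedback’s actual implementation):

```python
def resolve_vote(member, delegations, ballots):
    """Follow a chain of revocable delegations until reaching someone who
    cast a ballot directly. `delegations` maps member -> proxy (an entry
    disappears once revoked); `ballots` maps member -> their direct vote."""
    seen = set()
    while member not in ballots:
        if member in seen or member not in delegations:
            return None  # cycle, or no proxy: the vote goes uncast
        seen.add(member)
        member = delegations[member]
    return ballots[member]

delegations = {"ann": "ben", "ben": "cat"}  # ann -> ben -> cat
ballots = {"cat": "yes"}
resolve_vote("ann", delegations, ballots)   # follows the chain to cat's ballot
del delegations["ann"]                      # the 'liquid' part: ann revokes her proxy
```

The revocation is what distinguishes this from a fixed representative model: a member can reclaim their vote at any point simply by removing the delegation.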

Creating a more equitable data relationship

How would the argument for greater individual data rights be made when service providers regard the personal data mediated through their products as their intellectual property? Work has been done through the midata initiative and the development of personal data passports, where individuals grant rights to organisations to use their data for the delivery of a service. The UK Government has supported this initiative, but has backed away from underpinning the programme with changes in legislation. The lack of regulatory enforcement may limit the efficacy of any initiative that seeks to grant individuals rights and agency over their data.

At present there is a certain level of cynicism around voluntary codes of practice where power imbalances exist between stakeholders. The lack of legislation might also create a chilling effect on the ability of data cooperatives to gain the trust of their membership due to their inability to totally control the flow of data.

Existing UK data legislation does give data subjects the right to access personal data held by external organisations through Subject Access Requests. A data cooperative could act as a proxy for individual members, automating regular Subject Access Requests. This model is being explored by Our Data Mutual in Leeds, UK. There are challenges with using Subject Access Requests at present: organisations can charge up to £10 for each request, and although provision of the data in digital format may be specified, responses usually take the form of reams of paper print-outs and can take up to 40 days to arrive.

It has been mooted by the UK Government that the cost of Subject Access Requests will be reduced, potentially to zero, and that organisations will be compelled to supply the data in digital format. This would go a long way towards making the process of automated Subject Access Requests viable, but in an ideal world data should be pushed rather than pulled.
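As an illustration of the kind of automation a cooperative could run on members’ behalf, here is a sketch of drafting a request and tracking the 40-day statutory deadline that applied at the time (the organisation and member names are placeholders, and the letter text is illustrative, not legal boilerplate):

```python
from datetime import date, timedelta

SAR_TEMPLATE = """Dear {org},

Under the Data Protection Act 1998 I request a copy of all personal data
you hold about me ({name}). Please supply the data in digital format.
"""

def prepare_sar(org, name, sent_on, response_days=40):
    """Draft a Subject Access Request letter and compute the date by which
    the organisation must respond (40 days under the then-current rules)."""
    return {
        "letter": SAR_TEMPLATE.format(org=org, name=name),
        "deadline": sent_on + timedelta(days=response_days),
    }

req = prepare_sar("Example Energy Ltd", "A. Member", date(2015, 6, 1))
```

A cooperative could run this for each member on a schedule, chasing organisations whose deadline has passed.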

Data supply

A challenge that all data cooperatives would face is how to maintain a relationship with their membership such that services based upon, or value extracted from, the data are not subject to unforeseen supply-side problems. If a data cooperative entered into licensing relationships with data users on behalf of its membership, what would it be reasonable for a data user to expect, especially if data subjects had the right to revoke access to their data at any time? With larger-scale data cooperatives this may not be too much of a problem, as scale has the potential to dampen unforeseen effects. The Good Data proposes to get around these issues by only holding data for a limited amount of time, essentially minimising disruptions in data supply by creating a buffer. It may be necessary for the data cooperative to create terms and conditions for data subjects to minimise sudden supply-side issues.
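The buffering idea can be sketched as a retention window: records age out after a fixed period, and a member revoking consent simply drops their records from the window (this is an illustration of the general approach, not The Good Data’s actual system):

```python
from collections import deque
from datetime import datetime, timedelta

class DataBuffer:
    """Hold members' data only for a fixed retention window. A short-lived
    buffer like this smooths over members joining, leaving or revoking
    consent, at the cost of never holding a long-term store."""
    def __init__(self, retention_days=30):
        self.retention = timedelta(days=retention_days)
        self.records = deque()  # (timestamp, member_id, payload), oldest first

    def add(self, member_id, payload, now):
        self.records.append((now, member_id, payload))

    def revoke(self, member_id):
        # A member withdraws consent: drop their records immediately.
        self.records = deque(r for r in self.records if r[1] != member_id)

    def expire(self, now):
        # Age out anything older than the retention window.
        while self.records and now - self.records[0][0] > self.retention:
            self.records.popleft()
```

The trade-off is explicit: a short window limits the damage any one revocation can do to supply, but also limits the longitudinal value of the data held.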

Smaller data cooperatives, especially those created around single issues, may have difficulty engaging in activity that requires service guarantees. Developing a mechanism for federation, cumulatively creating data at scale, might be a potential solution, but creating a federated system of consent may be more difficult to achieve. As suggested previously, economic activity might be a low priority for such organisations, where the main purpose might be to represent members and create the environment for informed service provision.

The challenge facing federated data cooperatives, and how they would interact, is undefined. It has been noted that building distributed and federated systems is difficult, and that centralised systems persist due to operational efficiencies. The advent of alternative forms of ‘blockchain’ transaction could enable distributed organisations to coexist using ‘rules-based’ or algorithmic democracy. But alternative transaction systems and currencies often face challenges when they interface with dominant and established forms of currency and value. How data cooperatives could practically use these new mechanisms for exchange needs to be explored.

The data cooperative and open data

Although much of the discussion in the Berlin and Manchester meetings was based on the rights and uses of personal data, data cooperatives do offer an interesting model for organisations that create open data, or those that seek to enhance open data with personal data.

An open data cooperative might be a good model for stakeholders who create and use data for public access. It may be a single-class model where data suppliers such as public bodies work together or, more interestingly, a multi-stakeholder model where public data providers work with organisations that manage personal data – these could themselves be data cooperatives.

In summary, data cooperatives:

  1. are owned by their membership and therefore should be more accountable;
  2. have the potential to put a halt to the over-collection of personal data by representing data subjects and advocating on their behalf;
  3. can create value for their membership;
  4. can form around single issues or scale with many data subjects;
  5. can become representative and be used to create change;
  6. could help their membership to understand how data is used – data literacy;
  7. can liberate personal data on members’ behalf through Subject Access Requests;
  8. can encourage better data and context to be produced by data subjects;
  9. build trust and consent within the organisation; and
  10. can be a blend of open data and personal data organisations.

Democracy Projects – Open Election Special

It was appropriate that March’s Open Data Manchester meeting should focus on projects related to the forthcoming elections, not only because the country goes to the polls next month, but also because election data was the first area of interest when Open Data Manchester started five years ago. The release of election data by Trafford Council in 2010 started them on their journey to becoming open data champions, and it is through forums such as Open Data Manchester and Social Media Cafe Manchester that people became connected.

In 2015 there are a number of fantastic initiatives that try and unpick the proposed policies of the political parties, and filter through the fluff and bluster of our incumbent and prospective parliamentary candidates.

Digital and networked technology, as well as access to data that is either open, scraped, scanned, manually transcribed or crawled, creates the opportunity to understand and analyse what is proposed and how parliamentarians have delivered on the promises of the past. Not only do these technologies allow us to be more informed, but they might also offer a way of building the policies of the future through novel forms of engagement and participation. Advocates for direct democracy see that creating opportunities for people to have a say in policy decisions makes for a more engaged society. Estonia’s Charter 12 and Iceland’s crowd-sourced constitution bill are examples of these approaches in action, both coming directly out of crises where faith had been lost in the democratic process.

The need to re-enfranchise people into democratic participation is critical. In Manchester Central, where Labour candidate Lucy Powell was elected in a 2012 by-election, there was an 18% voter turnout. Without a democratic mandate the legitimacy of government is vastly reduced, which in turn has an impact on the way the country is run and how people engage and align with the decisions of government.

There are many examples of projects in the UK that are seeking to make the sometimes arcane processes of government and its representatives more understandable. Notable in this space are the many projects that have been supported and developed by mySociety, with the stated aim of inventing and popularising digital tools that enable citizens to exert power over institutions and decision makers. Democracy Club, Full Fact and Unlocking Democracy are active in this space, as well as a raft of people who volunteer their time and see the importance of making the election process more open.

  • YourNextMP – built by Democracy Club, an open database and API of candidate data.
  • Meet Your Next MP – created by JMB Technology, lists independent events and hustings in your constituency.
  • The Big Monitoring Project – being developed by Full Fact, seeks to record what politicians and the media say and hold them to account.
  • ElectionLeaflets.org – by Election Leaflets, Unlocking Democracy and Democracy Club, crowdsourcing a database of the leaflets that candidates shove through your door and what they say.

Many of these initiatives are looking for people to volunteer their time and expertise.

The subsequent discussions focussed on why people don’t engage and possible ways that technology can help. Many of the group had direct experience of trying to get social housing tenants to vote on matters that affected their tenancy during a large housing stock transfer. Although the subject affected tenants in an immediate and tangible way, there was difficulty engaging people who were not otherwise engaged. In the end staff from the housing association had to knock on doors and explain to people what they were voting for in order to get them to vote. This highlights the difficulty faced by those working on engagement with the democratic process. Ways of making the process easier were discussed, but this led to a deeper exploration of the nature of engagement. If we make voting easier, does it change the nature of the relationship between the voter and the subject being voted upon? Perhaps we are looking at the symptoms rather than the cause, and a democracy based upon weak or passive interaction is not as strong as one where effort is needed to register an opinion. One of the group highlighted the difference between the situation in the UK and that in countries where engaged public discussion is part of life.

Making the democratic process more understandable is vitally important to engagement. Voters need to feel as though they have agency and that their decision has importance. A challenge faced when trying to decide who to vote for is cutting through the rhetoric masking policy. There is also difficulty in creating key comparators and metrics: how do we create an environment where we can compare what one person says against another, and how can we understand the impact those positions would make on our communities? It was suggested that if we could standardise certain aspects of a manifesto we would be able to compare across positions. This could then be overlaid onto data from local communities that has been modelled in a standardised way, allowing direct comparison of potential impact. There are a number of challenges associated with this – for example, a candidate’s local position might differ from that of their party.

There is a wealth of data that evidences the voting behaviour of incumbent MPs, which could be used as a metric to judge the attractiveness of a candidate. This data is only available for incumbents, not for candidates who have not held office. Party politics can override the voting preferences of individual MPs, and politicians often have to make difficult decisions that may be seen as undesirable. If an MP stated a position that you voted for and then evidenced a pattern of voting behaviour in office that didn’t correlate with it, that information would be useful in helping you choose who to vote for.

Creating a service where you can map your own preferences against those of candidates and then follow the voting patterns of your parliamentary representative over time was deemed useful, allowing the user to understand the reasons why they voted for that candidate and whether, in light of those historic preferences, the candidate was a good representative.

Creating standardisation so that you can map candidates directly onto a locality assumes that the individual would act independently and not be whipped by the party.

Voting data also enables you to see how rebellious a candidate who doesn’t necessarily toe the party line is. A number of the group suggested that ‘Rebellion Ratings’ could be a good indicator of principled behaviour over a representative’s desire to further their own political career.
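A ‘Rebellion Rating’ of the kind suggested could be as simple as the fraction of divisions in which an MP voted against their party’s majority position. A sketch, on made-up voting data:

```python
def rebellion_rating(votes):
    """votes: list of (mp_vote, party_majority_vote) pairs for divisions
    the MP attended. Returns the fraction of divisions in which the MP
    voted against the party line - a simple 'Rebellion Rating'."""
    if not votes:
        return 0.0
    rebellions = sum(1 for mp, party in votes if mp != party)
    return rebellions / len(votes)

# Hypothetical record: one rebellion in four divisions.
history = [("aye", "aye"), ("no", "aye"), ("aye", "aye"), ("no", "no")]
rating = rebellion_rating(history)
```

Real implementations would need to handle abstentions, free votes and teller duties, which is where sites built on parliamentary voting data earn their keep.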

Democracy Club is crowdsourcing the CVs of prospective candidates so that people get a better idea of who they are voting for. It was mentioned that it would be interesting to compare these with the LinkedIn profiles of candidates, comparing a professional, business-facing persona with one that has been created to garner public support.

There are a lot of excellent projects trying to make the process of government and the effectiveness of MPs more understandable. It would be interesting to see if some of these could be implemented at local government election level. If people are more connected to their locality, it would make sense to develop projects that help people engage with local decision making. Perhaps this could be another front in the fight against disenfranchisement within the democratic process.


Open Data Manchester March Edition – GMDSP Data Dive

6.30 – 8.30pm, Monday 24th March 2014

Tech Hub Manchester
3rd Floor
Carver’s Warehouse
77 Dale Street
Manchester
M1 2HG
Map here

This March’s Open Data Manchester is a special event held in partnership with our friends at FutureEverything. Since September they have been working on the Greater Manchester Data Synchronisation Programme (GMDSP), a groundbreaking open data initiative that forges links between the code community and local authorities. GMDSP works with the 10 Greater Manchester local authorities to release corresponding datasets as linked open data. This has been done by placing coders (Code Fellows), many from the Open Data Manchester community, in the local authorities to help identify and transform data.

The Data Dive is a chance to see and understand the work being done, talk to the teams releasing the data and, more importantly, get a heads-up on the challenges that will be set for the GMDSP Coding Challenge taking place as part of FutureEverything over the weekend of 29th and 30th March.

Sign up for the Data Dive here

Refreshments will be provided

GMDSP is a partnership between FutureEverything and the Connected Digital Economy and Future Cities Catapults, working with Manchester City, Salford City and Trafford Metropolitan Borough Councils.

Open Data Manchester – Transport Special

6.30pm – 9pm Tuesday 25th February 2014
Greenheys Business Centre
Manchester Science Park
Pencroft Way
Manchester M15 6JJ
Map here

An efficient transportation system that allows people to move around easily for work and leisure is an essential part of today’s modern city. Being able to make decisions on how we travel based on convenience, cost and accessibility, especially with information updated in real time, should make for a more enlightened travelling public.

Open data offers a chance to create a more intelligent environment for making travel decisions. Traditionally the data needed to create applications and services to enable these decisions has been closed due to restrictive licensing, technological difficulties or confused policy decisions, but this is starting to change.

This month’s edition of Open Data Manchester brings together a host of transport based open data initiatives.

Transport for Greater Manchester (TfGM) will deliver an update on the progress they are making with open data following the Local Sustainable Transport Fund and DRNETIS (Dynamic Road Network Efficiency and Travel Information System) deployments.

Rockshore, who created the Network Rail APIs, will talk about the progress and uptake of these and answer questions regarding their use.

FutureEverything will talk about a new set of APIs from their CitySDK programme that allow people to access up-to-date scheduling, arrivals and departures information and other related data.

The people from ThoughtWorks will talk about Tramchester, their journey planner for Manchester’s trams; an insight into how they created it using graph databases can be found here.

There will also be an opportunity to look at the new Highways Agency NTIS real time services.

So if you are a transportista developing the next generation of transport services, a data wrangler trying to understand the complexities of our transit system, or just someone who would like to get from A to B that little bit more easily, this edition is for you.

As always there will be opportunity to present projects, ask questions and network.

We advise people to book through the Eventbrite link here

This event is partially funded under the ICT Policy Support Programme (ICT PSP) as part of the Competitiveness and Innovation Framework Programme by the European Community.

Open Data Manchester – By Jove it’s June.

6.30pm – 8.30pm Tuesday 25th June 2013
MadLab – 36 – 40 Edge Street Manchester M4 1HN

Manchester’s premier open data meetup will be taking place once more this month.

Open Data Manchester has been meeting regularly since the beginning of 2010 and is a free and open forum for discussion and practice around open data.

It is a chance to catch up on all things ‘open data’ taking place both locally and beyond. There are a number of initiatives around and it would be good to get a handle on them.

If you have any projects or ideas you want to discuss or have an open data itch that you need to scratch, feel free to bring them along.

Open Data Manchester – November 2012 Edition

6.30pm – 8.30pm Tuesday 27th November 2012
MadLab – 36 – 40 Edge Street Manchester M4 1HN

Sign up using Eventbrite

The next Open Data Manchester should be a good one. Hot on the heels of the Manchester Hackathon, we will be showcasing some of the things developed by the participants and having a bit of a debrief. Overall the feedback has been really positive, but it would be good to see what could be improved.

As part of the Hackathon a number of datasets were released by Salford, Trafford and, most impressively, Manchester City Council. Some of the data released is a first for a local authority, and some of it is quite contentious, so it’s worth a look.

Open Data Manchester will be hosting a delegation from Brazil who are on a technical visit to the UK to find out more about open data, transparency and accountability, and Freedom of Information.

Finally, if you are interested in how applications develop during a hackathon, John Rees took a screenshot every 30 seconds whilst building his multi-prize-winning SATLAV application:

[youtube=http://www.youtube.com/watch?v=JZcUEXJ0MhY]


Open Data Manchester is partially funded under the ICT Policy Support Programme (ICT PSP) as part of the Competitiveness and Innovation Framework Programme by the European Community.

The Manchester Hackathon – not bad for No. 1

For those of you who missed it, the first Manchester Hackathon occurred last weekend. Manchester City Council, FutureEverything and ourselves came together to create 24 hours of coding deliciousness.

The hackathon was part of Manchester City Council’s commitment to open data and was the motivation for the release of datasets, APIs and documentation for the event. Data can be found here on the MDDA website. The variety of data available ranges from trees (all the more pertinent as ash dieback spreads through the country) to contact centre data and contaminated land, a hugely contentious dataset. A lot of the data was released in consultation with the Open Data Manchester community.

The format of the Hackathon created an intense atmosphere in MadLab as 45 coders and designers strove to create something demonstrable by the 5pm deadline. In the end 16 teams presented their creations in two minute quick fire presentations.

The winners were:

Best Under-21’s Creation (£600) – Bus Tracker, by 19-year-old MMU student Bilawal Hameed. The Bus Tracker app will let you find the nearest bus stop, direct you to it and give you the times and destinations of the next buses due.

Best Visualisation and the Developers’ Prize (voted for by everyone taking part in the Hackathon), £600 each, were won by John Rees for his app Sat Lav. If you are caught short in the city, you just open the app and it will direct you to the nearest public toilet, including those in shops and bars which allow the public to use them.

Best Locative Application (£600) was won by Matt Schofield for his Taxi Rank Finder app, which shows the nearest taxi rank and directs you to it. It also shows whether it is a marshalled rank and its opening times.

Best Solution for an Identified Problem (£600) was won by Slawomir Wdowka and Imran Younis for Manchester Voice, which would allow the public to submit ideas to the council and then check records to see if other people have made the same suggestion. When an idea is developed it would allow the public to vote on it.

The grand prize of £1,000 plus £3,600 in development funding was won by Data Crossfader, created by James Rutherford and Ashley Herriott: a visualisation tool that plots information on a map of Manchester to allow people to compare important sets of data. For example, using postcode details it shows the locations of road traffic incidents on a map, then adds where speed cameras are, so if there is a particular area where accidents happen that is not covered by a camera, the map shows it clearly.

By the end of the event a number of developers had been approached to develop their ideas further and we’ll try and keep track of where that gets to.

For a much more in depth post by James Rutherford click here

The Economics of Open Data

Data doesn’t make for a very good tradable commodity. Its benefits spread well beyond the people who trade in it, it’s almost impossible to stop people from copying and sharing it, and it can be enjoyed by multiple people at the same time.

In a post written for Open Data Manchester on the Economics of Open Data, regular member Robin Gower explains how these characteristics mean that data will have a much greater economic and social impact if it is made available as open data. He also discusses the implications for established “closed-data” business models and for government.