Provisional programme for 2017

From last night's planning meeting we now have a provisional programme for 2017, and it is quite an ambitious one. What is great from our perspective is that a number of themes we have been looking at over the last year continue, and some perennial ones resurface. Highlights include the 'making and doing' workshops developed as part of the Echo Chambers and 'Post-Fact' Politics programme, and the Visualising Data workshops. There are a number of sector-specific and technical events, but one to watch out for is 'alternative ways of looking at the world', which will be a day of walks, talks and explorations. As always there is a large dose of how data and technology affect society, and much more.

This is a provisional programme and we are looking for as much input as possible (dates and sessions are subject to change). Please click on the Google Doc and add comments. We are looking for people who can contribute, as well as sponsors, venues and partners.

Link to Google Doc

[Screenshots of the provisional programme]

Open Data Manchester May meeting

6.30pm – 8.30pm Tuesday 26th May 2015
Greenheys Business Centre
Manchester Science Park
Pencroft Way
Manchester M15 6JJ

Map here

Sign up on Eventbrite here

This month's Open Data Manchester will be an informal get-together and a chance to see what projects and opportunities are out there. If you have something that you would like to discuss, present or get feedback on, come along and share it.

If you would like to do a presentation contact julian [at] thegarden [.]io

Open Data Cooperation – Building a data cooperative

Last year Open Data Manchester held two workshops, one in Berlin and the other in Manchester, to explore whether cooperative structures could enable the creation of open data and personal data stores for mutual benefit. The idea of the mutual came out of an ongoing conversation between people within the cooperative movement and the open data world about the role of cooperatives, and the possibility that they could rebalance what many perceive as an asymmetric relationship between data subjects (people with personal data) and data users (people who use data to develop services and products).

[slideshare id=47070639&doc=opendatacooperationv2-150416081428-conversion-gate02]

Background

Our modern technologised societies run on data. Mostly these data are invisible and unknown to us. The services we interact with, the daily transactions we make and the ways we navigate our everyday lives all generate data, building a picture of who we are and what we do. In the age of the Quantified Self there is a growing trend for self-monitoring, allowing us to track what we do and how we feel when we do it. These data are valuable. Aggregated, they enable organisations to predict, personalise and intervene seamlessly and sometimes invisibly. Even for the most technically literate, keeping track of what we do and don't give away is daunting. Personal Information Management Services (PIMS) are starting to emerge, offering people the chance to stem the unbridled exploitation of personal data by both public and private organisations whilst also creating monetary rewards for their users. Many of these commercial organisations seek to act as brokerage services for personal data. Data cooperatives that act as PIMS have the potential to empower individuals to have more control over their data, creating value for themselves and their communities, and to give people more of a say in the services that are built.

The sensational revelations by Edward Snowden shone a spotlight on the personal data that is collected through the IT software and hardware infrastructure we rely on today. Although they highlighted that we unintentionally give away a lot, they have perhaps not built a wider popular discussion around the protection and usage of personal data. It is inevitable that, as awareness of the data we produce rises, there will be a demand for services that give people more control. PIMS offer to deliver monetary value to users, but how much value is up for debate, as there are differing methodologies to quantify it [OECD 2013]. Value can also be context dependent: data about someone exhibiting behaviours that might indicate a large purchase might be deemed more valuable by companies that manufacture or sell that item.

Data cooperatives are starting to emerge that have a broader social and ethical outlook than the simple monetary transaction. The Good Data, which allows people to control data flow at the browser level with benefits going to social causes, and the Swiss-based Health Bank, where personal health data is aggregated for the advancement of medicine, are examples of this. As the principles of data custodianship for social good become better understood, there is an opportunity for more to emerge.

Data cooperatives can represent the interests of data subjects and data users

Cooperatives come in many flavours, traditionally growing out of the needs of the membership who subscribe to them. The structures of these cooperatives have generally been organised around a single class of member: workers, producers, consumers, etc. The single-class structure, although creating an equitable environment for members, can tend towards self-interest, and even though members may be bound by the notion of common good, the mechanism for the creation of that common good, or commons, is seldom explicit.

Internationally, new forms of cooperative that explicitly express the development of common good across multiple classes of stakeholder are becoming more abundant. Social co-ops in Italy and solidarity co-ops in Canada often provide services such as health and social care and education, as well as community infrastructure projects.

The ability to have multiple classes of stakeholder within a data cooperative has the potential to create a more equitable environment for both data users and data subjects to exchange data. The influence of different classes within the organisation could be managed by a fair distribution of voting rights, with a data user such as a research organisation having the same voting rights as a data subject.

Michel Bauwens, founder of the P2P Foundation, talks about the creation of these new forms of cooperative and how they can build a commons, both material and immaterial. This commons would be subscribed to by other commons-creating entities and licensed to non-commons-creating organisations. This suggests that a federated relationship between such organisations, where the commons is shared, could exist. But the challenge would be how to define the exchange within this system, and if a cooperative contained both producers and users, how would this affect the production of the commons?

Would a data cooperative necessarily adopt these newer, distributed and commons-creating structures? There appears to be a consensus that commons-creating, multi-stakeholder cooperatives are positive, but they come with increased complexity. Can individual circumstances, especially when dealing with communities based around sensitive issues, create an environment for sharing beyond a single class of stakeholder? A single-class cooperative may be a simpler, more immediate solution for a community of people who have specific needs and issues and where strong trust relationships need to be maintained.

The scale of data cooperatives

Data cooperatives have the potential to work at scale, generating and trading in bulk, high-worth data, as well as forming around smaller communities of interest, such as a particular health issue, to draw down or negotiate for better services.

Creating a critical mass of data subjects that would allow a data cooperative to operate at scale would be challenging. Marcos Menendez from The Good Data believes that PIMS such as theirs need a base of around 500,000 data subjects to be viable. There is potential for data cooperatives to partner with organisations or charities with a similar ethical outlook to build that base.

It may be easier to form cooperatives around single issues, as the community the cooperative seeks to represent will already exist. The value of such an organisation might be that it can help create a more informed decision-making process, with the views of the data subject being represented. Within a multi-stakeholder model the service provider, such as a local authority or other public sector organisation, might also be part of the data cooperative.

Making the purpose of the data cooperative understandable is key. Although single-issue cooperatives are relatively simple to understand, the representation of data at scale may be challenging. Data cooperatives could act as a platform that builds consent and allows the representation of personal data across a broader portfolio of interests.

Building trust and consent within the data cooperative

Trust and consent should be the foundations on which PIMS are built, and data cooperatives have the potential to create both. Mutuality offers an opportunity, especially with a multi-stakeholder model, to represent the interests of all stakeholders, from individual data subjects to data users, creating an environment of mutual understanding and trust. Enhanced trust between data subjects and data users could enable better data and context to be created by data subjects: understanding the ways the data is being used, and trusting that the data user understands the needs and concerns of data subjects, could create a more enlightened and responsive relationship. Even without data users being part of the organisation, the data cooperative could take on the role of trusted representative, which in turn could create consent.

Informed consent across all data subjects in a cooperative could be challenging. It would be easy for a data organisation to empower those that already have knowledge and agency to maximise their data, but the data cooperative should have an interest in empowering everyone.

Increasing data literacy amongst members

Raising the level of data awareness amongst cooperative members would create more informed decision making, but this task would need to be delivered in a sympathetic and nuanced way. Ultimately some people may not engage because of service dependency, lack of choice, or a perception that it isn't relevant or useful to engage.

For a data cooperative to represent its membership and control the flow of data it needs to have legitimacy, know and understand the data assets of the membership, and have the authority to negotiate with those data assets on the members' behalf.

Decisions around data sharing and understanding the potential consequences are difficult and complex. As an intermediary the cooperative would need to ensure that individual members were able to give informed consent. Data literacy goes some way to achieving this, but mechanisms also need to be created that allow people to have agency over the way their data is used.

Creating consent

Can one organisation represent the broad range of ethical positions held within a membership? For practical reasons the data cooperative might have a high-level ethical policy, while individuals within the cooperative make data-sharing choices based on their personal ethical standpoints. This could be enabled by proxy or preset data-sharing preferences. The alternative could be smaller, federated or distributed niche organisations that have specific restrictions on data reuse.

There exist many mechanisms for the creation of consent. By and large these create the environment for proxy voting in decision-making processes. One such mechanism is Liquid Feedback, popularised by the Pirate Party, where an individual bestows voting rights on a proxy who aligns with their position, with the 'liquid' element allowing proxy rights to be revoked at any point. Other mechanisms might follow the lines of the Platform for Privacy Preferences (P3P) initiative developed by the W3C, which sought to create privacy policies that could be understood by browsers but was ultimately considered too difficult to implement. A potentially easier solution might work on the basis of preset preferences based on trusted individuals, or the creation of archetype- or persona-based preferences that people can select.
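To make the delegable ('liquid') element concrete, here is a minimal sketch of how a cooperative might resolve a member's data-sharing consent through a chain of revocable proxies. The data model and all names are illustrative assumptions, not a real Liquid Feedback implementation:

```python
# Hypothetical sketch of delegable ("liquid") consent resolution.
# Not a real Liquid Feedback API; the data model is invented for illustration.

class ConsentRegistry:
    def __init__(self):
        self.preferences = {}   # member -> "share" or "withhold"
        self.delegations = {}   # member -> proxy member

    def set_preference(self, member, choice):
        self.preferences[member] = choice

    def delegate(self, member, proxy):
        self.delegations[member] = proxy

    def revoke(self, member):
        # The 'liquid' element: a delegation can be withdrawn at any point.
        self.delegations.pop(member, None)

    def resolve(self, member, default="withhold"):
        # Follow the delegation chain to whoever actually holds a
        # preference, guarding against cycles; default to withholding.
        seen = set()
        while member not in self.preferences:
            if member in seen or member not in self.delegations:
                return default
            seen.add(member)
            member = self.delegations[member]
        return self.preferences[member]

registry = ConsentRegistry()
registry.set_preference("alice", "share")
registry.delegate("bob", "alice")    # bob trusts alice's position
print(registry.resolve("bob"))       # -> share
registry.revoke("bob")
print(registry.resolve("bob"))       # -> withhold (falls back to default)
```

Note the design choice that the safe default is to withhold: a member who has neither expressed a preference nor delegated one shares nothing.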

Creating a more equitable data relationship

How would the argument for greater individual data rights be made when service providers see personal data mediated through their products as their intellectual property? Work has been done through the midata initiative and the development of personal data passports, where individuals grant rights to organisations to use data for the delivery of a service. The UK Government has supported this initiative, but has backed away from underpinning the programme with changes in legislation. The lack of regulatory enforcement may limit the efficacy of any initiative that seeks to grant individuals rights and agency over their data.

At present there is a certain level of cynicism around voluntary codes of practice where power imbalances exist between stakeholders. The lack of legislation might also create a chilling effect on the ability of data cooperatives to gain the trust of their membership due to their inability to totally control the flow of data.

Existing UK data legislation does give data subjects the right to access personal data held by external organisations through Subject Access Requests. A data cooperative could act as a proxy for individual members, automating regular Subject Access Requests. This model is being explored by Our Data Mutual in Leeds, UK. There are challenges with using Subject Access Requests at present: organisations can charge up to £10 for each request, and although provision of the data in digital format may be specified, responses usually take the form of reams of paper printouts and can take up to 40 days.
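A cooperative acting as such a proxy would, at its simplest, batch-generate request letters for each organisation a member nominates. The sketch below illustrates that idea only; the organisation names, letter template and helper are invented for the example, and a real service would need signatures, identity verification and postal or electronic delivery:

```python
# Illustrative sketch: generating Subject Access Request letters on a
# member's behalf. The template and organisation names are hypothetical.
from datetime import date
from string import Template

SAR_TEMPLATE = Template("""Dear $organisation,

Under the Data Protection Act 1998 I request a copy of all personal
data you hold about $member. Please supply the data in digital format.
I enclose the statutory fee of \u00a3$fee.

Date: $today
""")

def generate_sars(member, organisations, fee=10):
    """Produce one SAR letter per organisation the member nominates."""
    today = date.today().isoformat()
    return [
        SAR_TEMPLATE.substitute(
            organisation=org, member=member, fee=fee, today=today)
        for org in organisations
    ]

letters = generate_sars("A. Member", ["Example Energy Ltd", "Example Bank plc"])
print(len(letters))  # one letter per nominated organisation
```

Scheduling this to run, say, quarterly is what turns a one-off legal right into the regular "pull" of personal data the article describes.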

It has been mooted by the UK Government that the cost of Subject Access Requests will be reduced, potentially to zero, and that organisations will be compelled to supply the data in digital format. This would go a long way to making automated Subject Access Requests viable, but in an ideal world data should be pushed rather than pulled.

Data supply

A challenge all data cooperatives would face is how to maintain a relationship with their membership so that services based upon, or value extracted from, the data are not subject to unforeseen supply-side problems. If a data cooperative entered into licensing relationships with data users on behalf of its membership, what would be reasonable for a data user to expect, especially if data subjects had the right to revoke access to data at any time? With larger-scale data cooperatives this may not be too much of a problem, as scale has the potential to damp down unforeseen effects. The Good Data proposes to get around these issues by only holding data for a limited amount of time, essentially minimising disruptions in data supply by creating a buffer. It may be necessary for the data cooperative to create terms and conditions for data subjects to minimise sudden supply-side issues.

Smaller data cooperatives, especially ones created around single issues, may have difficulty engaging in activity that requires service guarantees. Developing a mechanism for federation, cumulatively creating data at scale, might be a potential solution, but creating a federated system of consent may be more difficult to achieve. As suggested previously, economic activity might be a low priority for such organisations, where the main purpose might be to represent members and create the environment for informed service provision.

How federated data cooperatives would interact remains undefined. It has been noted that building distributed and federated systems is difficult, and that centralised systems persist due to operational efficiencies. The advent of alternative forms of blockchain transaction could enable distributed organisations to coexist using 'rules-based' or algorithmic democracy. But alternative transaction systems and currencies often face challenges when they interface with dominant and established forms of currency and value. How data cooperatives could practically use these new mechanisms for exchange needs to be explored.

The data cooperative and open data

Although much of the discussion in the Berlin and Manchester meetings was based on the rights and uses of personal data, data cooperatives also offer an interesting model for organisations that create open data, or that seek to enhance open data with personal data.

An open data cooperative might be a good model for stakeholders who create and use data for public access. It may be a single-class model where data suppliers such as public bodies work together or, more interestingly, a multi-stakeholder model where public data providers work with organisations that manage personal data, which could themselves be data cooperatives.

In summary, data cooperatives:

  1. are owned by their membership and therefore should be more accountable;
  2. have the potential to put a halt to the over-collection of personal data by representing data subjects and advocating on their behalf;
  3. can create value for their membership;
  4. can form around single issues or scale with many data subjects;
  5. can become representative and be used to create change;
  6. could help their membership to understand how data is used (data literacy);
  7. can liberate personal data on members' behalf through Subject Access Requests;
  8. can encourage better data and context to be produced by data subjects;
  9. can build trust and consent within the organisation; and
  10. can be a blend of open data and personal data organisations.

Open Data Manchester – November 2014

6.30pm – 8.30pm Wednesday 26th November 2014
Greenheys Business Centre
Manchester Science Park
Pencroft Way
Manchester M15 6JJ

Map here

Sign up on Eventbrite here

There is a more general theme to this month's Open Data Manchester, although there is a lot to cover.

Open Data Manchester will have been going for 5 years in April and, as a voluntary, unincorporated group that doesn't even have a bank account, it hasn't done too badly. Does ODM need to become more formalised? How can we become more representative of the membership? And what do we need to do to be relevant for the coming years? Turn up and play a part in the future of ODM.

We will be feeding back on the Open Data Cooperation work that we've been involved with over the past few months, and there will be a chance to add your thoughts.

As always, it will be a chance to share ideas, discuss projects and find out what's happening with open data in Manchester and further afield.

Open Data Manchester March meeting

March's meeting was an opportunity to help shape Manchester City Council's forthcoming open data hackathon. Stuart Baldwin, an ODM regular, spoke about Manchester's plans for an event in October to coincide with the Manchester Science Festival.

The drivers behind this are the recently announced Manchester Digital Strategy and a recent trip that the Chief Executive of MCC, Sir Howard Bernstein, made to New York. Whilst a guest of Mayor Bloomberg, Sir Howard was apparently impressed with New York's open data initiatives, such as 311 and its app challenges.

Open Data Manchester and MDDA advised MCC that, for a hackathon to succeed, it needed to engage with the developer community to make the event relevant and developer friendly.

The conversation mainly focussed on the types of data that developers wanted released; there is a list from Duncan Hull (@dullhunk) here.

What was notable was the willingness to listen to what the community wanted, and the suggestions from MCC itself, such as contaminated land data, which has traditionally been contentious.

[vimeo http://www.vimeo.com/36540620 w=400&h=300]
Visualisation by Jonathan Fisher; more details here.

After the hackathon discussion, attention turned to Road Traffic Collision (RTC) data and the work of Steven Flower, Jonathan Fisher and Jonathan S. There has been discussion about forming a sub-group around RTC data and its use, so if you want to get involved contact Steven Flower on the Google Group. Jonathan Fisher's visualisations were discussed, as was the variation in data quality that exists. It was noted that although data is provided to TfGM, who collate it for the Department for Transport, different flavours of the data exist in different places. TfGM upload monthly data to DataGM which lacks detail on casualties and vehicles involved. The complete RTC data is forwarded to the DfT, who make it available via the DfT website and data.gov.uk with more detail, but in two different versions. We are trying to find out why DataGM only holds the less detailed version.
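The "different flavours" problem can be pictured as the same collision record published in two places with different levels of detail. The field names below are invented for illustration (real STATS19 releases use different column names); the point is simply that diffing the two versions reveals what the less detailed release drops:

```python
# Hypothetical illustration of the same RTC record in two releases.
# Field names are invented; real STATS19 data uses different columns.

datagm_record = {"ref": "2011-001", "date": "2011-03-02", "severity": "Slight"}
dft_record = {"ref": "2011-001", "date": "2011-03-02", "severity": "Slight",
              "casualties": 2, "vehicles": [{"type": "Car"}, {"type": "Cycle"}]}

def missing_fields(partial, full):
    """Fields present in the detailed release but absent from the other."""
    return sorted(set(full) - set(partial))

print(missing_fields(datagm_record, dft_record))  # -> ['casualties', 'vehicles']
```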

January meeting with TfGM

January's Open Data Manchester was a transport special, with Craig Berry and Dave Busby from TfGM giving an update on the types of data that TfGM hold and what they are trying to release. Open Data Manchester regulars may already know Craig Berry as the Information Manager who has been tasked with identifying and releasing open data. Dave Busby's brief is integrated ticketing and real-time information.

TfGM reinforced its position with regard to open data at the meeting. There have been a number of rumours over the past twelve months as to what the organisation was trying to release to DataGM, Greater Manchester's open data portal. TfGM are currently releasing data on bus schedules, NaPTAN stop locations, fixed and mobile speed camera locations, and monthly Road Traffic Collision updates. It had also been mooted that some real-time data would be released.

Greater Manchester has been crying out for an intelligent integrated ticketing system; to many, the lack of such a system has made travel by public transport around Greater Manchester more difficult than it should be. To this end TfGM are developing a specification that will go to tender in the first half of 2012. The system will initially cover Metrolink and then encompass Greater Manchester buses. It will use contactless technologies in a similar vein to TfL's Oyster card, but with the added functionality of being able to use contactless bank cards and NFC phones. It was interesting to note the certainty that NFC will be adopted by most handset manufacturers within the next year. Paying by Google Wallet was also mentioned as a possibility. The ticketing system will also have fare rules that calculate the best price for the journeys undertaken.

Although getting integrated ticketing to work on Metrolink would be a relatively easy task and a useful test bed to prove the utility of the system, getting Greater Manchester's 40+ independent commercial bus operators to adopt it may be more challenging and may need a certain amount of political will. Anonymised journey data from the system, and personal access to journey history, weren't discussed in detail. Although the latter seems to be fairly standard in smart ticketing systems, access to anonymised data could offer huge potential for applications and services that look at loading on routes, passenger density and so on.

The advent of the oft-mooted real-time data from TfGM looks closer, although no specific timescale was mentioned. There will be access to the Metrolink Passenger Information Display data, although how this will manifest itself is uncertain. Developers present at the meeting suggested that JSON would be preferable. The main challenge with accessing real-time Metrolink location data is that the Tram Management System being implemented isn't yet functioning throughout the network. The initial release of data will cover the South Manchester and Eccles lines.
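The format of the Passenger Information Display data was undecided at the time. The JSON shape below is purely a guess at what a departures feed might look like, sketched to illustrate why developers preferred structured JSON over screen-scraped display text:

```python
# Hypothetical departures feed; the field names and structure are an
# assumption, not TfGM's actual (then-unreleased) data format.
import json

sample_feed = json.dumps({
    "stop": "St Peter's Square",
    "departures": [
        {"destination": "Eccles", "wait_minutes": 2, "status": "Due"},
        {"destination": "East Didsbury", "wait_minutes": 7, "status": "Due"},
    ],
})

def next_departure(feed_json, destination):
    """Return the shortest wait (minutes) for trams to a destination."""
    feed = json.loads(feed_json)
    waits = [d["wait_minutes"] for d in feed["departures"]
             if d["destination"] == destination]
    return min(waits) if waits else None

print(next_departure(sample_feed, "Eccles"))  # -> 2
```

With a feed like this, a countdown app or journey planner becomes a few lines of parsing rather than a fragile scraper, which is the substance of the developers' request.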

Although it doesn't look like there will be any real-time bus data soon, TfGM would like to release the location information of the free Centreline buses operated on TfGM's behalf. This will be location data that won't identify the actual service the bus is running. It was suggested that, as there are only three distinct Centreline routes, the services wouldn't be that complicated to identify, even where the routes overlap. There is also an Informed Personal Traveller pilot being run in Bury by Logica, ACIS and First Bus. It uses a number of technologies, including an AVL system fitted to approximately 100 of their buses. The IPT application hasn't been released yet and there are indications that the system is closed.

TfGM recently submitted a bid to the Local Sustainable Transport Fund, and written into it is the provision of open data and the development of an intelligent multi-modal journey planner pulling in all relevant data that TfGM has at its disposal. How developers could access the journey planner was discussed, as was whether it would exclude the provision of other types of journey data.

There is a move to make other data available through the LSTF, including car park updates, real-time disruption data, data on journeys down roads, and feeds from TfGM's SCOOT adaptive traffic control system. SCOOT controls half of the approximately 2,000 traffic control signals in Greater Manchester.

The lack of transparency around bus fare structures within Greater Manchester has come up many times, especially regarding anecdotal evidence that dependent communities are charged more per mile than those with viable transport alternatives. TfGM stated that Greater Manchester is one of the few places where bus travel is generally more expensive than rail. To this end TfGM are interested in developing a project similar to one that Open Data Manchester was developing over a year ago, which encouraged travellers to submit the details of their journey and its price.

At the close of the discussion TfGM were encouraged to use the Open Data Manchester Google Group as a resource to ask questions and to highlight initiatives and challenges.

Open Data Manchester – November Meeting

November’s Open Data Manchester.

Paul Gallagher, Head of Online for the Manchester Evening News, gave a presentation on the role of the MEN during the Manchester riots. He described how the Manchester Evening News had used social media during the riots, and how his team had started to collect data about the riots and the subsequent court cases to give insight into some of their possible causes.

Most interesting were the resources that the MEN had put into reporting on the court cases following the riots. By having court reporters sit in on each of the trials, they created a schema and dataset showing the areas people lived in, mitigating circumstances, age, type of offence, sentence and so on. This is data that can only be created by attending the trial. It allowed them to map offences against deprivation indices and to track changes in the way sentencing was delivered over the course of the trials.

The discussion also touched on news organisations becoming huge archives of sentencing data and how this can affect people's lives even after their convictions are spent. The MEN does have a policy under which certain details are redacted from the historical archive, but this is done on a case-by-case basis.

There was also an update on the preparations for International Open Data Hackday and on the responses to the Government's Open Data and Public Data Corporation consultations.