Welcome to Open Data Manchester

Open Data Manchester has been leading the way in open data practice in Manchester since 2010. It was set up as an advocacy group to encourage and share open data practice between public bodies, businesses and citizens alike. It has been instrumental in the setting up of DataGM and various other open data initiatives in Greater Manchester and beyond. It is best known for bringing together communities of practice to help disseminate and share knowledge, participate in policy making and support organisations using data. Now, having taken up residence at Federation in the centre of Manchester, we are looking at building a centre for state-of-the-art data practice by pulling in the knowledge and skills of the data community both in Greater Manchester and further afield. Get in touch and join us on our journey.

Fare’s Fair – Why we need open fares data for public transport

Being able to understand how much your journey is going to cost is essential for encouraging mobility by public transport in our modern age. Not knowing how much a journey is going to cost before you make it hinders forward planning and creates a barrier to use. How many people have stepped on to a bus only to find that the journey was more expensive than they first thought? Or that the fare charged yesterday was different from the fare charged today?

To this end, transport campaigners have been vocal in their efforts to get public transport agencies and operators of public bus services to release fares data, so that people can make intelligent choices about the way that they get around. Transport Hack, organised by the fantastic people at ODILeeds, is one such example of this happening. Open Data Manchester was itself involved with opening up the bus fares data for all of Greater Manchester in 2010, only for TfGM to discontinue it.

Yesterday we learnt that TfGM had knocked back an FOI request for Manchester Metrolink fares data, citing issues of commercial interest.

We think this is wrong on a number of points.

  • Manchester Metrolink is the only tram operator in Greater Manchester – not counting the fantastic tramway at Heaton Park, which we don’t think is a competitor
  • The data is already in the public domain – therefore it wouldn’t take that much effort to aggregate it or get a picture of the fares structure
  • It is in the public interest to get as many people as possible to understand the cost of mobility in Greater Manchester
  • Closed systems hinder the development of seamless ticketing and multi-modal travel by putting opaque commercial interests in front of public service delivery

To this end, Open Data Manchester set about compiling the fares data for the Metrolink network. It didn’t take that long – about a day – and we used programmatic as well as manual methods. The data is in tabular Excel form as well as a parsed text document. It is provided as is and we can’t be held liable for any mistakes or inconsistencies – although we have checked it as much as we can. The data is available under a Creative Commons CC BY 4.0 licence. Please let us know if you find any errors or create something interesting.

The data can be found here

Minor edits – the addition of a link and an additional bullet point – were made at 14.00, 20.10.17

Open Data Manchester – Next Steps

Thursday 21st September 18.30-20.30
1st Floor
Federation House
Federation Street
Manchester
M4 2AH

Register here

After seven years, Open Data Manchester has become a registered company and will soon become a Community Interest Company (CIC). This will allow us to deliver better programmes and also work with others more effectively.

If you would like to hear more and suggest ideas for future activity, join us.

Internet of Things and Open Data Publishing

Tuesday October 3rd 10.30 – 13.30

FACT
88 Wood Street
Liverpool
L1 4DQ

Register for free here

If you have an interest in the internet of things and how the data it produces can contribute to the broader data economy, this is your chance to have a say.

The internet of things offers unparalleled means to create data from sensors, devices and the platforms behind them. This explosion of connectedness is creating huge opportunities for building new products and services, and for enhancing existing ones. With these opportunities come some gnarly challenges, around data and protocol standards, security, discoverability, openness, ethics and governance. None of these is trivial, but all of them need to be understood.

This workshop is for people involved in open data, Smart Cities and the internet of things who are starting to come up against and answer some of these challenges.

It is being run by Open Data Manchester and ODI Leeds for the Open Data Institute to look at the future of open data publishing and IoT.

The Open Data Institute (ODI) is always working towards improvements in open data – from making it easier to find and use right through to refining and implementing standards. They are very keen to work with people who use open data to see what they can do to help improve open data for everyone.

The workshops are open to everyone who wants to join in, contribute, or work with us. The output from the workshops will be put forward to the ODI and the UK government with recommendations on how open data should be published.

Refreshments and lunch will be provided.

If you can’t make it but would still like to contribute, we have an ‘open document’ available here. We encourage people to add their questions, comments, suggestions, etc.

After the workshop there is the launch of LCR Activate, a £5m project led by Liverpool John Moores University with the Foundation for Art and Creative Technology (FACT) and the LCR Local Enterprise Partnership. It is a three-year European Regional Development Fund (ERDF) initiative using AI, Big Data/High Performance Computing, Merging Data and Cloud technologies for the benefit of SMEs in the Liverpool City Region. Register here.

OpenCorporates – Exploring the corporate world through data

Evening workshop looking at the data and tools for exploring the global corporate world.

18.30 – 21.00 Tuesday 27th June 2017
Federation House
Federation Street
Manchester
Register Here

If there is one thing that the Panama Papers proved, it is that shell companies and opaque jurisdictions allow money and assets to be kept secret, making it difficult for investigators to detect corruption, money laundering and organised crime.

In 2010 OpenCorporates was founded as an effort to identify where companies are based and how they are linked across the world. It is now the largest open database of companies and company data, with in excess of 100 million companies across a large number of jurisdictions. Their primary goal is to make information on companies more usable and more widely available for the public benefit, particularly to tackle the use of companies for criminal or anti-social purposes, for example corruption, money laundering and organised crime.
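
To give a flavour of what exploring that database can look like in practice, here is a minimal R sketch of querying the OpenCorporates company search API. The v0.4 endpoint, its parameters and the response structure shown here are assumptions based on the public documentation rather than anything from the workshop itself, and unauthenticated use is rate-limited, so treat it as a starting point only.

# Sketch: search OpenCorporates for companies matching a name.
# The endpoint and response fields are assumptions - check the current
# API documentation before relying on them.
library(jsonlite)   # fromJSON can read JSON straight from a URL

search_companies <- function(query) {
  url <- paste0("https://api.opencorporates.com/v0.4/companies/search?q=",
                URLencode(query))
  resp <- fromJSON(url)
  # Expected to give one row per matching company, with fields such as
  # name, company_number and jurisdiction_code.
  resp$results$companies$company
}

head(search_companies("open data"))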

This is a workshop that will enable people and organisations to harness the power of this huge pool of data. Whether you are an activist, organisation or just plain interested, this workshop will help give you the tools to explore the complex, connected world of corporate organisations.

Exploring Grant Awards in the UK – Open Data Manchester March 17

GrantNav brings together information about grants awarded by a variety of funders in the UK. Because the data is published with a common standard, it’s easy to create analyses and visualisations that a) work for any of the funders’ data and b) can compare grant portfolios across funders.

You can download the whole dataset as a csv file. It’s also available to browse in GrantNav, a 360Giving application released under the terms of the Creative Commons Attribution Sharealike License (CC-BY-SA). Please see GrantNav’s copyright and attribution list for details on the original data sources.

The grants.csv table has a row per award with columns describing a variety of attributes such as date of award, amount awarded, recipient, funder and beneficiary.
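
As a rough sketch of getting started, the table can be loaded and inspected with R’s data.table library (the same library the analysis below uses). The file name is an assumption – point fread at wherever you saved your download and check the header row for the exact column titles.

# Minimal sketch: load the GrantNav export and take a first look.
library(data.table)

grants <- fread("grants.csv")   # assumes the csv is saved locally

names(grants)   # which attributes are available?
str(grants)     # column types and a preview of the values
grants[1:5]     # the first few awards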

We created some exploratory visualisations of this data at last month’s Open Data Manchester workshop – “Getting to Grips with Data”. The idea with exploratory analysis is that you start with some data and you simply want to know what is there – to uncover the shape and scope. You can get to grips with a dataset by understanding what dimensions or variables it includes and what values those variables take. You would typically use summaries like frequency tables, cross tabulations, and distributional analysis. These statistical descriptions provide views into the data which quickly provoke questions about the patterns within.
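
For instance, a first exploratory pass might count awards per funder and summarise the award amounts. This is a sketch only – the column names (“Funding Org:Name” and “Amount Awarded”) are assumptions based on the 360Giving standard, so adjust them to match your download.

# How many awards has each funder made?
grants[, .N, by = "Funding Org:Name"][order(-N)]

# What does the spread of award sizes look like overall?
summary(grants[["Amount Awarded"]])

# And per funder: a simple summary table of count and median award
grants[, .(awards = .N,
           median_award = median(`Amount Awarded`, na.rm = TRUE)),
       by = "Funding Org:Name"]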

One aspect we chose to explore was the size of grants awarded (in the Amount Awarded column). Two things soon became apparent: first, that each funder has a very different award portfolio, and second, that the amounts tended to cluster around certain values. This seemed intuitive – since funding is often offered with specific thresholds, we might expect applicants to design their projects with these in mind, asking for more or less money than they might otherwise have done.

We settled on an analysis called a “cumulative frequency distribution” as a way of visualising these aspects. We’ll explain this in detail below, but, since a picture speaks a thousand words, we invite you to take a look at the chart first. Feel free to skip the technical description and jump to read about the conclusions we can draw from this data graphic.

Distribution of Grants by Value


What is a cumulative frequency distribution?

A frequency distribution tells us how common certain values are across a range. Whereas an average provides a summary of a set of values by telling us about the middle, a frequency distribution tells us about the middle, the ends and all the values in between. The same average value can arise from many different distributions (e.g. a few very small values and many large ones, or many small values and a few very large ones). The distribution is calculated by taking a range (e.g. £0 – £1,000,000), then dividing it into bins (e.g. £0-99, £100-999, £1,000-4,999 …), then counting the number of values that fall into each bin – aka the frequency (e.g. £0-99: 100 grants, £100-999: 363 grants, £1,000-4,999: 789 grants etc). These can be a bit tricky to interpret, however, as the frequency depends upon the bin size.
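
In R, the binning step described above can be sketched with cut(). The bin edges below are arbitrary, purely for illustration, and the “Amount Awarded” column name is again an assumption.

# Frequency distribution: count the awards falling into each bin.
amounts <- grants[["Amount Awarded"]]

bins <- cut(amounts,
            breaks = c(0, 100, 1000, 5000, 10000, 100000, 1000000, Inf),
            right = FALSE)   # bins of the form [0,100), [100,1000), ...

table(bins)                  # the frequency in each bin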

A cumulative frequency distribution takes a cumulative tally of frequencies. Whereas a frequency distribution might say “there were x awards between £10,000 and £15,000”, a cumulative distribution would say “there were x awards up to £15,000”. This makes the interpretation slightly easier, as we can say “x awards were less than £y”. In order to compare funders – who each make different numbers of awards – we’ve transformed the frequencies into proportions by dividing by the total number of awards made by each – i.e. “x% of awards were less than £y”.
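
Sketched in data.table terms, the per-funder cumulative proportion might be computed like this (column names are, once more, assumptions):

# For each funder, sort its awards by size and compute the running
# proportion of awards made at or below each amount.
cumdist <- grants[order(`Amount Awarded`),
                  .(amount   = `Amount Awarded`,
                    cum_prop = seq_len(.N) / .N),
                  by = "Funding Org:Name"]

cumdist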

How should I interpret the chart?

We can then plot the cumulative frequency distribution – here we’ve mapped the amount awarded onto the horizontal x-axis and the cumulative proportion of awards by number onto the vertical y-axis.

Given that the distribution is highly skewed (there are many small grants, and few large ones) we’ve transformed the x-axis using a logarithmic scale (with base 10). That means that each step along the scale represents a 10-fold increase in the £ amount (a typical linear scale would map a constant £ amount for each step). Practically speaking, this helps to spread the curves out across the graphic so that they’re easier to distinguish and not bunched up on the left, making better use of the space available.
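
A rough ggplot2 sketch of a chart along these lines – one cumulative distribution curve per funder, with a log10 x-axis – is shown below. It leans on stat_ecdf() to compute the cumulative proportions rather than the pre-computed table above; the column names are assumptions, and the published graphic will differ in styling and in which funders are included.

library(ggplot2)

# One cumulative frequency distribution curve per funder, log10 x-axis.
ggplot(grants, aes(x = `Amount Awarded`, colour = `Funding Org:Name`)) +
  stat_ecdf(geom = "step") +
  scale_x_log10(labels = scales::comma) +
  labs(x = "Amount awarded (£, log scale)",
       y = "Cumulative proportion of awards (by number)",
       colour = "Funder")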

The curves show the proportion of each funder’s awards that were made up to a given size of award. Steeper, more vertical segments indicate many awards being made at that amount; flatter, more horizontal segments indicate fewer awards being made over that range of amounts. Where one curve is above another, a greater proportion of that funder’s awards (by number, relative to the total in its portfolio) are at or below that level.

What does this analysis tell us about grant funding in the UK?

Let’s return to the chart again. What can you see?

There are clear vertical segments around funding thresholds. This is most obvious in the case of the Big Lottery Fund, around the £5,000 and £10,000 marks.

We can also see that some funders focus on a narrow range – the Lloyds Bank Foundation, for example, makes around 90% of its awards between £10,000 and £50,000 – whereas the Dulverton Trust and the Northern Rock Foundation have a much broader spread.

The BBC’s Children in Need fund does have an obvious threshold at £10,000, like the Big Lottery Fund, but actually makes most of its awards at a higher level (up to around £100,000).

The Esmée Fairbairn Foundation – the right-most curve across most of the range – focusses on larger awards, with around a third over £100,000.

How do I make one of these?

You will no doubt be able to make other comparisons, and draw other conclusions from the graphic. Indeed it probably provokes more questions. What would this look like in terms of proportion of funding by value (instead of by number of awards)? How does this compare in absolute terms (i.e. overall number of awards, not proportion)? What about the smaller funders we removed to make the chart easier to read?!

If you’d like to find answers to those questions, or explore other parts of the dataset, then you can find the R code used to generate the analysis and graphics on GitHub. We introduce the data.table library used to make summary tables and the ggplot2 library used to design and create the visualisations. You can also follow along with the exploration process and find links to learning resources in the comments placed throughout the source code.

Follow Robin Gower on Twitter @robsteranium

Data visualisation – getting to grips with data

Tuesday 28th March, 18.30 – 21.00
Federation
Federation Street
Manchester
M4 2AH

Tickets here

This is a rerun of Open Data Manchester’s popular data visualisation training.
This will be a hands-on event looking at visualisation and data cleaning tools such as Google Refine and Fusion Tables, R and leaflet.js. If you have experience of other platforms, feel free to rock up and share with others.
The emphasis will be on working in small groups and sharing practice.
So if you have data that you are interested in visualising, a visualisation that you want to explore, or just want to get an idea of available tools – bring your laptop.
If you are a data newbie or Processing pro – join us.

Many thanks to Federation for hosting this month’s event.

Echo Chambers and ‘Post-fact’ Politics – developing ideas

Half day workshop to build tools for a ‘post-fact’ world

Apparently we’ve ‘had enough of experts’. Increasingly, online platforms quietly tailor what we encounter to fit our existing views – creating echo chambers out of our prejudices. We are worried that the role of evidence in politics is slipping – and we want to do something about it.

A preliminary workshop was held in November, attracting a broad range of people from far and wide. Together, attendees created a list of initiatives responding to these challenges. Click this link to read the list of initiatives and add your own thoughts.

Now we are running a follow-on event to allow people to develop these ideas. If you’re an activist, policy wonk, artist, or simply someone interested in this topic, we’d love for you to join us. It doesn’t matter if you didn’t make the first event, as we will get you up to speed and give you a chance to add new ideas on the day.

For more information regarding the Echo Chambers and ‘Post-fact’ Politics workshops go to www.postfactpolitics.com

2017 programme update

It has taken longer than expected, but our 2017 programme is finally getting off the ground. The programme, highlighted in the last post, was a provisional one and we hope to track it as well as we can over the coming months.

February kicks off our programme with two events, both related to last November’s Echo Chambers and ‘Post-fact’ Politics event. On Saturday 18th February, in partnership with The Democratic Society, we will be running a workshop that develops the ideas from November’s event and turns them into action. The event is free, and if you couldn’t make it to the first event and would like to attend, we will quickly get you up to speed. The evening of Tuesday 28th February will be a regular Open Data Manchester meeting where initiatives developed from the workshop will be showcased. As always, if you want to add to the event in any way, contact us or just turn up.

The evening meeting will be a chance to look at national and international open data events taking place in the coming months.

Provisional programme for 2017

From last night’s planning meeting we now have a provisional programme for 2017, and it is quite an ambitious one. What is great from our perspective is that there is a continuation of a number of themes that we have been looking at over the last year and a resurfacing of perennial ones. Highlights include the ‘making and doing’ workshops that have been developed as part of the Echo Chambers and ‘Post-Fact’ Politics programme and the Visualising Data workshops. There are a number of sector-specific and technically specific events, but one to watch out for is ‘alternative ways of looking at the world’, which will be a day of walks, talks and explorations. As always there is a large dose of how data and technology impact on society, and much more.

This is a provisional programme and we are looking for as much input as possible (dates and sessions are subject to change). Please click on the Google Doc and add comments. We are looking for people who can contribute, sponsors, venues and partners.

Link to Google Doc
