Below is a whole bunch of information to help you prepare for GovHack!
Basically, your GovHack project is rated on your team page (where you register your team and project information, link to come) and on the presentation you give at the end of GovHack (Sunday 3pm). We don’t mind what developer, data or visualisation tools you use, so long as the outcomes are awesome.
We provide some development tools and access to virtual servers (if you want to roll your own). Otherwise below are links to other tools that you might want to use.
Check out the Data page for information about data sets.
Just remember, the competition judges will be focused on the outcomes of your project. Updating your team page with information and screenshots/screencasts of your project is really important, along with having an awesome presentation on Sunday afternoon, so make sure you schedule some time to work on these before the deadline on Sunday.
Welcome to the GovHack toolkit. This page provides all the information you need to prepare hackfest entries.
These tools can be used to make entries like mobile apps, web apps and data visualisations/infographics.
The text of this toolkit is open for reuse under a Creative Commons Attribution licence and improvements are encouraged via Git http://github.com/maxious/govhack-tools or via email patches to firstname.lastname@example.org
Registering your team
Coming Soon: how to use the website “Hacker Space” to register and find teams.
Preparing your submission
You should record a 3 minute speech, mixed with images and text to accompany it.
[Screenr](http://www.screenr.com/), ActivePresenter Free Edition and other screencasting tools allow you to demo apps.
To mix together clips, you can use youtube video editor http://www.youtube.com/editor or local software like http://www.videolan.org/vlmc/ or http://www.lwks.com/
You also need to submit your “source material”. For an application this may be source code, for another work it might be your notes or prototypes.
The key thing here is that your source material demonstrates to the judges that some of the end result was your own work and that it is possible for another person to replicate that work.
The basics of being a data scientist
- Have a hypothesis – even if you’re making a tool/api that helps people with their questions too, remember what the objective of that is.
- Find the people and tools you need to prove/show/find it. The rest of this page will help with the latter.
- Analyse and present results – were they what you expected? Do they help explain to others what you have found out?
You can present your results as an interactive data visualisation, a web/mobile application, or just an infographic/motion graphics video that tells a story.
Illustration from Data Journalism Handbook, CC BY-SA 3.0
The best high-level reference is the ‘Understanding Data’ and ‘Delivering Data’ chapters of the Data Journalism Handbook, which is available online for free.
You can learn the technical skills from scratch with Visualize This: The FlowingData Guide to Design, Visualization, and Statistics by Nathan Yau, or for more advanced
practical advice check out Data Analysis with Open Source Tools by Philipp K. Janert.
For further reading in this space
A great guide to statistics is
Programming is a valuable skill for manipulating and displaying data.
Following accessibility guidelines not only makes an application accessible but makes it a better experience for all users! Even if you’re not making an app, it’s good to consider these dos and don’ts when designing for humans: http://www.w3.org/TR/WCAG/
No matter what kind of application you have for the data, there are many tools you can use to better collaborate and manage your project.
Using a version control system like Git or Subversion allows you to keep many different versions of what you have been working on so you can collaborate with others or simply back up your files so you don’t lose them!
There are tutorials on git, and GUIs to help you like TortoiseGit for Windows and Atlassian SourceTree for Windows and OSX (or, if you prefer the console, tig).
There is also a manual for Subversion, and similar GUIs exist for it.
Issue/task trackers allow you to outline the tasks required for your project and assign them to people to do.
There are many free services for trying out virtual/cloud servers before scaling up: https://www.chunkhost.com/ or heroku or https://www.appfog.com/pricing/
Hosted Developer Tools
You can get many tools (source control, issue tracking) combined into one cloud-hosted service, so there’s no setup required.
Github / BitBucket
Github provides Git hosting, but Subversion (svn) and Mercurial (hg) interfaces are also available. Github provides its own GUI for Windows/OSX, or you can use a variety of Git-capable tools https://github.com/
Similarly Atlassian provide BitBucket accessible via Git and Mercurial (hg) https://bitbucket.org/
SourceForge
Subversion, Git, Mercurial, Bazaar, CVS, issue tracker, wiki, release file downloads. Unlimited free use for open source projects.
You can create your own Sourceforge project at http://sourceforge.net/
Google Code Project Hosting
Git, Mercurial, and Subversion code. Issue tracker, wiki, release file downloads. Unlimited free use for open source projects.
You can host your Google Code project and get access to developer tools, APIs and documentation at http://code.google.com/
So an API isn’t just an XML file!
A good web based data API:
- Is logically organised
- Can filter returned data
- Can return results in different open formats (CSV/JSON etc.)
- Is efficient and responsive by using caching and databases appropriately
- Handles errors gracefully
- Monitors and controls access (to show the benefit realised by the API and prevent abuse)
- Provides appropriate documentation with examples
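As a rough illustration of two of those qualities (filtering and returning multiple open formats), here is a minimal sketch using only the Python standard library. The dataset and function names are invented for the example:

```python
import csv
import io
import json

# Made-up dataset standing in for whatever your API serves.
RECORDS = [
    {"suburb": "Acton", "state": "ACT", "population": 2457},
    {"suburb": "Ainslie", "state": "ACT", "population": 5192},
    {"suburb": "Albury", "state": "NSW", "population": 45627},
]

def query(state=None, fmt="json"):
    """Filter records by state and serialise them as JSON or CSV."""
    rows = [r for r in RECORDS if state is None or r["state"] == state]
    if fmt == "json":
        return json.dumps(rows)
    if fmt == "csv":
        out = io.StringIO()
        writer = csv.DictWriter(out, fieldnames=["suburb", "state", "population"])
        writer.writeheader()
        writer.writerows(rows)
        return out.getvalue()
    # Handle errors gracefully rather than crashing on unknown formats.
    raise ValueError("unsupported format: %s" % fmt)
```

In a real API the same function would sit behind a URL route (e.g. `/suburbs?state=ACT&format=csv`) in whatever web framework you choose.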
Some people like the Sensis API: http://developers.sensis.com.
Atlassian have a great page on what makes a good API: https://developer.atlassian.com/display/REST/Atlassian+REST+API+Design+Guidelines+version+1
HowTo.gov has a bunch of api resources about choosing SOAP vs. REST etc. http://www.howto.gov/mobile/apis-in-government
API documentation is important too! Traditionally for SOAP APIs you use WSDL, but for REST try Swagger or iodocs.
Many web app frameworks can generate the documentation for you. For example Symfony for PHP http://symfony.com/ https://github.com/FriendsOfSymfony/FOSRestBundle http://williamdurand.fr/2012/08/02/rest-apis-with-symfony2-the-right-way/ https://github.com/nelmio/NelmioApiDocBundle https://github.com/liip/LiipHelloBundle
Or for Ruby on Rails there is https://github.com/elc/rapi_doc or https://github.com/Pajk/apipie-rails
Infographics and Data Visualisation
Infographics contextualise charts and graphs to tell a story. Data visualisation builds on this to find new ways to present insight.
Most of the categories to follow have visualisation tools specific to their purpose.
You can find some data visualisation tools below:
Drawing By Numbers Tools and Resources
– http://selection.datavisualization.ch/ data viz tools catalog
Also check out http://thejit.org & http://www.senchalabs.org/
A good infographic should use visual art concepts and good color schemes. See the data visualisation guidelines from the international journalism festival
For more information on the theory of data visualisation check out the Stanford CS448B notes or The Ultimate Collection of Data Storytelling Resources
With the rise of HTML5 technologies it is easier than ever to make a web application for engaging use of data.
It’s easy to quickly make a good looking and accessible webpage if you use a CSS framework like Bootstrap or Zurb Foundation.
There are a variety of bootstrap themes like Flat-UI
Check out the visualisation tools listed in the data sections for web application tools like these CSS Dashboard gauges
Programming Language: Ruby
Source Control: Git
Issue Tracking: Atlassian JIRA
Description: Displays connections between government contracts, business details, politician responsibilities, lobbyists, clients of lobbyists, political donors and the location of these entities.
Programming Language: PHP
Source Control: SVN (Subversion)
Issue Tracking: A whiteboard
Description: Online Canberra Bus Timetables and Trip Planner.
Programming Language: PHP/Ruby
Source Control: Git
Issue Tracking: Github
If you want to get started quickly with mobile application development, it’s worth considering cross-platform frameworks like http://www.sencha.com/products/touch http://phonegap.com/ http://cordova.apache.org/
For a simple mobile app, a web application with a framework like jQuery Mobile can work quite well (as used on directory.gov.au)
For data visualisation, there are a variety of graph widgets http://code.google.com/p/afreechart/ http://code.google.com/p/snowdon/ http://code.google.com/p/chartdroid/ http://androidplot.com/ http://code.google.com/p/achartengine/
You may wish to consider backend frameworks like http://helios.io/ or https://www.parse.com/
Bureau of Meteorology Water Storage App http://icelab.com.au/work/bureau-of-meteorology/
NZ Gov budget http://www.treasury.govt.nz/budget/app
Check out the GeoRabble Boundary Mapper’s Cookbook to see how you can tie all these things together!
There are a variety of base layers like AGRI aerial imagery of Australia http://agri.openstreetmap.org/ or WMS services like http://irs.gis-lab.info/ or http://www.gdal.org/frmt_wms_openstreetmap_tms.xml
Check out the Geoscience Australia Geo Dataset search and preview
The ASGS from the ABS includes suburbs/postcodes; see the PostGIS/ASGS tutorial at andrewharvey4.wordpress.com
You can also get KML layers for various statistical measures on the ABS TableBuilder tool.
There are many spatial data formats and often the one your tool requires is not the one the dataset is provided in.
You can convert spatial datasets online with http://converter.mygeodata.eu/vector or locally using GDAL (which is better for >10 megabyte datasets)
See this introduction to geocoding
Google Maps APIs allow you to convert an address to map co-ordinates (geocoding), but you must display the results on a Google Map. The easiest way to do this is with a Google Spreadsheet/Fusion Table http://schoolofdata.org/2013/02/19/geocoding-part-ii-geocoding-data-in-a-google-docs-spreadsheet/
If you need geocoding for more than display (working out the distance between points etc) or you don’t want to use Google Maps, Cloudmade offers free OpenStreetMap based geocoding http://developers.cloudmade.com/projects/show/geocoding-http-api
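Once addresses are geocoded to latitude/longitude, distances between points are straightforward to compute yourself. A sketch of the standard haversine great-circle formula, with approximate coordinates for Sydney and Melbourne:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/long points."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Approximate coordinates: Sydney and Melbourne.
distance = haversine_km(-33.87, 151.21, -37.81, 144.96)  # roughly 714 km
```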
PostGIS is an extension for the PostgreSQL database server that allows you to store and manipulate geospatial data on a large scale, for example finding which points are in an area or which points are closest. It is also very useful for storing geospatial data because it can convert between all major formats, including ESRI Shape files and Google Earth/Maps KML.
QGIS is a graphical desktop application that allows viewing and editing of geospatial data. Some good base maps are available by adding the WMS layer/server http://irs.gis-lab.info/
See this Creating a Map in QGIS tutorial
Layar and other augmented reality tools
Layar provides a platform for exploring a dataset by travelling to the actual locations of the data and looking through a smartphone. Custom markers (2D or 3D) seem to float in the air and can be clicked on for more information. You can even trigger an event like playing music when within a certain range of a location.
Google Fusion Tables/ChartsBin/OpenHeatMap
http://www.peteraldhous.com/CAR/Making_maps_with_Google_Fusion_Tables.pdf tutorial or http://support.google.com/fusiontables/topic/2592754?hl=en&ref_topic=27020 for google help files
Display points and different layers. Leaflet is the easiest to use if you just want to show points with popups when clicked on.
There are wrappers for Google maps like http://hpneo.github.com/gmaps/examples.html and Mapstraction that can make it easier to use too.
If you need to customise the base map, try TileMill. See “The Insanely Illustrated Guide to Your First Data-Driven TileMill Map”
NASA World Wind/Google Earth
Google Earth provides 3D viewing of KML/GML files which represent points and shapes, both through a desktop application and a web plugin. These can be extended with interactive features that allow you to view by timeline or have animated tours between different points. You can also develop and customise your own viewer with the open source NASA World Wind toolkit.
Relational IO platform
- Datasets from the new data.gov.au CKAN repository
- Datasets from data.act.gov.au Socrata repository
- Access to NLAs Trove API
- Select data from data.nsw.gov.au (csv based)
- Datasets from data.vic.gov.au (csv based)
- Datasets from data.qld.gov.au (csv based)
- Access to web services such as Flickr image search, Twitter Search API, Bing search API, Google Search API, Google geocoding, Textrazor language analysis.
Teams will get their own read-only SQL-powered workspace that will give them access to all the above datasets / services allowing them to join and mashup data quickly and easily.
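The hosted workspace itself will be provided for you, but the join-and-mashup idea looks much the same with any SQL engine. A sketch using Python’s built-in sqlite3 with an in-memory database; the tables and figures are invented:

```python
import sqlite3

# Two made-up datasets to join: suburbs with postcodes, and rainfall by postcode.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE suburbs (name TEXT, postcode TEXT)")
conn.execute("CREATE TABLE rainfall (postcode TEXT, mm REAL)")
conn.executemany("INSERT INTO suburbs VALUES (?, ?)",
                 [("Acton", "2601"), ("Ainslie", "2602")])
conn.executemany("INSERT INTO rainfall VALUES (?, ?)",
                 [("2601", 48.2), ("2602", 51.0)])

# Join the two datasets on their shared postcode column.
joined = conn.execute(
    "SELECT s.name, r.mm FROM suburbs s "
    "JOIN rainfall r ON s.postcode = r.postcode ORDER BY s.name").fetchall()
```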
Converting between formats like json/xml or csv can be done online with http://shancarter.com/data_converter/
Tabular data may have duplicate entries or inconsistent formats (varying ways to enter dates/phone numbers etc.). There are tools to quickly fix common problems:
For the more adventurous, Dedupe allows you to train a computer to deduplicate similarly named entities automatically.
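Dedupe uses machine learning; many duplicates, though, come down to nothing more than formatting. A minimal standard-library sketch of the normalisation idea, using phone numbers as the example (the numbers are made up):

```python
import re

def normalise_phone(raw):
    """Reduce varying phone number formats to digits only."""
    return re.sub(r"\D", "", raw)

# Three entries for the same (fictional) number, typed three different ways.
numbers = ["(02) 6123 4567", "02-6123-4567", "0261234567"]
unique = {normalise_phone(n) for n in numbers}  # collapses to one entry
```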
You can also use general purpose file manipulation tools like grep/awk/sed. These work best when you instruct them what search/change you need using Regular Expressions (RegEx) which you can learn more about at http://www.regexper.com/ and http://www.debuggex.com/?re=&str=
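The same regular expressions used with grep/sed work in most programming languages, so you can prototype a pattern in Python’s `re` module before using it on the command line. The sample text here is invented:

```python
import re

# Hypothetical sentence; the pattern pulls out ISO-style dates.
text = "Published 2013-05-31, revised 2013-06-02."
dates = re.findall(r"\d{4}-\d{2}-\d{2}", text)
```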
Excel / Google Docs
Great for basic analysis and viewing, but older versions of Excel are limited to 65,536 rows. E.g. http://www.tcij.org/training-material/car/data-mining/3474 or http://training.sunlightfoundation.com/module/data-visualizations-google-docs/
SQL Databases
The next step up: large datasets can be manipulated and extracted efficiently, for example with window functions http://www.postgresql.org/docs/8.4/static/tutorial-window.html, though there is no built-in data visualisation.
R Statistical Language
R provides a platform for advanced data analysis which can find and visualise trends even in large datasets. Some reference resources for learning the language, R basic statistics and graphs: http://cran.r-project.org/doc/manuals/R-intro.html There are also some addons that provide graphical interfaces to make it easier to use, such as Rattle http://rattle.togaware.com/, RStudio http://rstudio.org/ or Deducer http://www.deducer.org/pmwiki/pmwiki.php?n=Main.DeducerManual
R’s value lies in the wide array of libraries and addons you can use. For example, BigVis lets you visualise 10 million data points in 5 seconds on an ordinary computer.
Be sure to check out the list of “10 R packages I wish I knew about earlier”
ggplot2 produces the typical graphical output of R and is very powerful. See these tutorials for instructions: http://chartsnthings.tumblr.com/post/36978271916/r-tutorial-simple-charts http://flowingdata.com/2012/12/17/getting-started-with-charts-in-r/
You can do some very creative plotting for example putting pictures of Pokemon where their power level is on an X/Y axis or a 2D plot with histograms for each dimension
For advanced interactive visualisation you can use Shiny, which allows visitors to your page to adjust the R charts.
Examples of Shiny use include:
http://blog.ouseful.info/2012/11/28/quick-shiny-demo-exploring-nhs-winter-sit-rep-data/ https://github.com/timelyportfolio/shiny-d3-plot https://github.com/trestletech/shiny-sandbox/tree/master/grn
See this Tableau Desktop Tutorial
D3.js (Data-Driven Documents)
See these tutorials to get started: http://datadrivenjournalism.net/resources/data_driven_documents_defined http://bost.ocks.org/mike/chart/
Most of the world’s data isn’t structured because it is contained in documents (webpages, tweets etc.). Sometimes it is possible to structure it; sometimes there are tools better suited to unstructured data.
Text analysis can be very valuable for transparency
For extracting data from webpages, check out ScraperWiki, pytemplate or Scrapy
PDFs – http://source.mozillaopennews.org/en-US/articles/introducing-tabula/ for text PDFs or http://www.reporterslab.org/dochive/ for images (common in scanned document PDFs)
If there is no way to form a table structure so that you can apply tabular data techniques, you need more sophisticated analysis, as detailed below.
Natural Language Processing libraries like OpenNLP for Java or NLTK / Pattern for Python allow you to extract information from text. For example, finding the important keywords in a sentence automatically
One of the most useful techniques found in these libraries is Named entity recognition which extracts the subjects named in a piece of text. TextRazor lets you analyse up to 500 documents a day online.
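Libraries like NLTK and OpenNLP do this properly; as a rough illustration of the simplest form of keyword extraction mentioned above, here is a naive frequency-based sketch. The stopword list is ad hoc and the sentence is made up:

```python
import collections
import re

# A deliberately tiny, ad-hoc stopword list for the example.
STOPWORDS = {"the", "of", "a", "in", "and", "to", "is", "for", "on", "that"}

def keywords(text, n=3):
    """Naive keyword extraction: the most frequent non-stopword terms."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = collections.Counter(w for w in words if w not in STOPWORDS)
    return [w for w, _ in counts.most_common(n)]

top = keywords("The budget for the budget office is in the budget papers.")
```

Real NLP toolkits go much further, weighting terms, stemming, and tagging parts of speech, but the bag-of-words counting above is the underlying intuition.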
A search engine just for your dataset can also help. Tools like Apache Lucene/Solr or ElasticSearch can help you index and search large datasets in new ways.
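Engines like Lucene/Solr and ElasticSearch are built around an inverted index, a mapping from each term to the documents that contain it. A toy version in Python, with invented documents:

```python
import collections

def build_index(documents):
    """Map each word to the set of document ids containing it."""
    index = collections.defaultdict(set)
    for doc_id, text in documents.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

# Three made-up documents.
docs = {1: "water storage levels", 2: "water quality report", 3: "annual report"}
index = build_index(docs)

# An AND query is just a set intersection over the index.
hits = index["water"] & index["report"]
```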
For lightweight analysis, try R or Ruby: http://www.r-bloggers.com/simple-text-mining-with-r/ http://blog.josephwilk.net/ruby/latent-semantic-analysis-in-ruby.html
You can make word trees of blocks of text, webpages or Twitter accounts and share them: http://www.jasondavies.com/wordtree/
“Overview automatically sorts thousands of documents into topics and sub-topics, by reading the full text of each one.” Simply make a CSV file with two columns, id and text. 10,000 documents is a good limit for the current state of the system. https://www.overviewproject.org/
For larger document sets or for alternative visualisations, try Jigsaw a desktop based application. http://www.cc.gatech.edu/gvu/ii/jigsaw/
Graph data can be very valuable for finding communities, hubs and connections between entities (the six degrees of separation), through the techniques of Social Network Analysis.
It helps you understand relationships: how is X connected to Y, and via what other entities are they both connected?
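The “how is X connected to Y” question is a shortest-path search over the graph. Graph databases and libraries do this for you, but the core idea is plain breadth-first search; a sketch with made-up entities:

```python
import collections

def degrees(graph, start, goal):
    """Breadth-first search: number of hops between two entities, or None."""
    queue = collections.deque([(start, 0)])
    visited = {start}
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append((neighbour, dist + 1))
    return None  # no path between the two entities

# Invented adjacency list: a lobbyist -> client -> donor chain.
graph = {
    "Lobbyist A": ["Client B"],
    "Client B": ["Lobbyist A", "Donor C"],
    "Donor C": ["Client B"],
}
hops = degrees(graph, "Lobbyist A", "Donor C")  # two hops apart
```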
For Neo4j, imports and exports can be done by writing a Java program or via a spreadsheet (for example, Gmail contacts). The fastest way to import data into Neo4j is the REST batch import API
There are other graph databases worth considering like OrientDB or Titan
Major graph databases like these can be accessed using a common syntax called Gremlin, or by writing a simple Java/Python/Ruby application. Queries can be tested in the built-in data browser.
NetworkX is a social network analysis library for Python. It has many advanced analyses built in, like finding communities within a graph, and is also good for converting data into graphs.
Proper visualisation of networks can be hard as described in this presentation Visualising Networks: Beyond the Hairball
Sometimes when you analyse a network what you actually have is a tree/hierarchy with no interconnections.
In these cases, it’s faster and more visually effective to use a Tree visualisation.
You can run TreeViz locally or use d3 on a website, step by step instructions for creating tree data for d3
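Many d3 tree examples expect nested JSON in a `{name, children}` shape. If your data arrives as flat parent/child rows, a small recursive nesting step produces that shape; the hierarchy here is invented:

```python
import json

# Hypothetical flat (parent, child) records, e.g. a portfolio -> agency hierarchy.
pairs = [("Government", "Treasury"), ("Government", "Finance"), ("Treasury", "ABS")]

def to_tree(pairs, root):
    """Nest flat (parent, child) rows into the {name, children} shape."""
    children = {}
    for parent, child in pairs:
        children.setdefault(parent, []).append(child)
    node = {"name": root}
    if root in children:
        node["children"] = [to_tree(pairs, c) for c in children[root]]
    return node

tree = to_tree(pairs, "Government")
as_json = json.dumps(tree)  # ready to feed to a d3 tree layout
```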
d3 also includes treemaps (nested rectangles) and circle packing (bubbles inside bubbles)
Sometimes it’s more about the magnitude (money? amount of communication?) of the connections between nodes.
A sankey diagram can easily visualise this http://bost.ocks.org/mike/sankey/
NodeXL for Microsoft Excel
NodeXL allows you to visualise networks/graphs quickly inside Excel.
Many tools can produce input files for Gephi, including graph databases and an Excel spreadsheet for mapping Twitter social networks
It’s also possible to filter/search the displayed network in sigma.js