How do we promote relevant scientific information in a way that can be trusted quickly, i.e. verified analytically?
This was one of the central topics we discussed at DataJamDays 2017, run at EPFL together with datascience.ch (SDSC). A number of fascinating research datasets were presented in the morning, along with introductions to the new Renga platform for reproducible data science.
During the morning introductions, I briefly explained the mission of Opendata.ch and Open Knowledge, described our work with public and science institutions over the years, drew a line back to the research hackdays with FORS that took place on campus in 2015, and looked forward to Core Data.
During the day, I downloaded and looked into the Renga project, discussed data publishing standards with their team, met some really interesting people, and learned more about the architecture and interfaces of the Zenodo and Figshare open science platforms.
I also attended a workshop run by Jan Krause from the EPFL Library, who introduced Jupyter and Python in a format inspired by Data Carpentry. This engaging session gave me an excuse to put together a Julia notebook to complement the Python notebooks we examined. You can view it here, download the source, or fork it on GitHub.
Read Exploring the oceans around Antarctica for more impressions from another participant.
A recurrent piece of feedback I have heard recently is that the open data movement needs to partner more with science rather than reinvent wheels. Judging by today's event, the science of data is moving quickly in academia, and it is in everyone's interest that bridges are built and maintained. Improving the discoverability and reusability of open science data is definitely something we can be more involved in.