In collaboration with Rolex Awards for Enterprise, Proudfoot Media and I have produced a documentary film explaining the latest research into the surprising hidden biology shaping Earth’s ice. The story is told by young UK Arctic scientists with contributions from guests including astronaut Chris Hadfield and physicist Jim Al-Khalili. We went to great lengths to make this a visually striking film that we hope is a pleasure to watch and communicates the otherworldly beauty and incredible complexity of the Arctic glacial landscape. We aim to educate, entertain and inspire others to explore and protect this most sensitive part of our planet in their own ways.
We think the film is equally suited to the general public and to school and university students, and we are delighted to make it a free-to-all teaching resource. Please watch, share and use it!
Click through to the Vimeo page to watch in full screen and at full resolution!
As an Arctic scientist I am privileged to be able to explore the coldest parts of our planet, making observations and measurements and helping others to understand how these areas function by writing papers, giving talks and lectures, and writing for magazines and newspapers. But to truly understand an environment, we must also explore the intangible and immeasurable. To communicate it to diverse audiences, we must use not only facts and observations, but aesthetics and emotion. The piece above is a bridge connecting music and science – an effort to understand and communicate the hidden beauty, complexity and sensitivity of the Greenland Ice Sheet through sound. I hope that projects like this will bring new audiences to Arctic science, using music, art and aesthetics to pique their curiosity.
This project arose from a chance encounter in 2017. I was a guest on Radio 4’s Midweek programme, along with musician Hannah Peel. As I listened to her explain her art on air, and later to her music, especially her new album ‘Mary Casio’, I was struck by the depth of thought and analysis underpinning her work. I reached out to see if she would be interested in applying the same process to exploring the changing Arctic.
To my surprise and delight, Hannah agreed to make a new composition. We chatted about Arctic science – ice sheet dynamics, albedo feedbacks and microbiology in particular – and I provided footage and images from our field sites in Greenland and Svalbard. Hannah then went away and composed a piece of music inspired by the intricate processes, nested feedbacks and hidden complexity of this environment. I then cut the music to drone footage I filmed on site in 2017. I am overjoyed with the result, because I think Hannah’s music communicates perfectly the almost paradoxical sense of grandeur and intricacy, power and vulnerability of the ice.
Sheffield SME ‘Gradconsult’ have just opened their second annual Microgrant scheme. It is open for applications from early career researchers in any discipline based anywhere in the UK.
This could be a great way to get the ball rolling with grant capture, and could support travel, consumables, logistics, science communication and more. It is a very open and flexible scheme, focused on funding individuals with passion and purpose who will make the most of every penny and benefit from getting ‘on the ladder’ with research funding.
The application deadline is 31st March, and further details can be found here.
Several journals now request that data and/or code be made openly available in a permanent repository accessible via a digital object identifier (DOI), which is – in my opinion – generally a really good thing. However, there are associated challenges. First, because the expectation that code and data are made openly available is quite new (and still nowhere near ubiquitous), many authors do not know of an appropriate workflow for managing and publishing their code. If code and data have been developed on a local machine, there is work involved in making sure the same code works when transferred to another computer where paths, dependencies and software setup may differ, and in providing documentation. Neglecting this is usually no barrier to publication, so there has traditionally been little incentive to put time and effort into it. Many have made great efforts to provide code to others via FTP sites, personal webpages or over email by request. However, this relies on those researchers maintaining their sites and responding to requests.
I thought I would share some of my experiences of curating and publishing research code using Git, because it is actually really easy and feeds back into better code development too. The ethical and pragmatic arguments in favour of adopting a proper version control system and publishing open code are clear – it enables collaborative coding, and it is safer, more tractable and more transparent. However, the workflow isn’t always easy to decipher to begin with. Hopefully this post will help a few people to get off the ground…
Version control is a way to manage code in active development. It is a way to avoid having hundreds of files with names like “model_code_for_TC_paper_v0134_test.py” in a folder on a computer, and a way to avoid confusion when copying between machines and users. The basic idea is that the user has an online (‘remote’) repository that acts as a master where the up-to-date code is held, along with a historical log of previous versions. This remote repository is cloned on the user’s machine (the ‘local’ repository). The user then works on code in their local repository and the version control software (VCS) syncs the two. This can happen with many local repositories all linked to one remote repository, either to enable one user to sync across different machines or to have many users working on the same code.
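As a concrete sketch of that setup in Git, the commands below clone a remote repository onto a local machine – the URL and project name are placeholders, to be replaced with your own:

    # Copy the remote repository onto your machine, creating a local repository
    git clone https://github.com/your-username/your-project.git
    cd your-project

    # Confirm which remote repository the local copy is linked to
    git remote -v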
Changes made to code in a local repository are called ‘modifications’. If the user is happy with the modifications, they can be ‘staged’. Staging adds a flag to the modified code, telling the VCS that the code should be considered as a new version to eventually add to the remote repository. Once the user has staged some code, the changes must be ‘committed’. Committing saves the staged modifications safely in the local repository. Since the local repository is synced to the remote repository by the VCS, I think of making a commit as “committing to update the remote repository later”. Each time the user ‘commits’, they also write a ‘commit message’ detailing the modifications and the reasons they were made. Importantly, a commit is only a local change. Staging and committing modifications can be done offline – to actually send the changes to the remote repository, the user ‘pushes’ them.
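In Git, one pass through that modify–stage–commit–push cycle looks something like this – the file name and commit message are purely illustrative:

    # See which files have been modified since the last commit
    git status

    # Stage a modified file, flagging it for inclusion in the next commit
    git add melt_model.py

    # Commit the staged modifications to the local repository, with a message
    git commit -m "Correct albedo parameterisation for dusty ice"

    # Later, when online, push the committed changes to the remote repository
    git push origin master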
Sometimes the user might want to try out a new idea or change without endangering the main code. This can be achieved by ‘branching’ the repository. This creates a new workflow that is joined to the main ‘master’ code but kept separate so the master code is not updated by commits to the new branch. These branches can later be ‘merged’ back onto the master branch if the experiments on the branch were successful.
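A minimal example of that branching workflow, using a hypothetical branch called ‘new-idea’:

    # Create a new branch and switch to it
    git checkout -b new-idea

    # ...edit, stage and commit on the branch as usual...

    # If the experiment worked, merge the branch back into master
    git checkout master
    git merge new-idea

    # Optionally delete the branch once merged
    git branch -d new-idea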
These simple operations keep code easy to manage and tractable. Many people can work on a piece of code, see changes made by others and, assuming the group is pushing to the remote repository regularly, be confident they are working on the latest version. New users can ‘clone’ the existing remote repository, meaning they create a local version and can then push changes up into the main code from their own machine. If a local repository is lagging behind the remote repository, local changes cannot be pushed until the user pulls the changes down from the remote repository, then pushes their new commits. This enables the VCS and the users to keep track of changes.
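In practice, a local repository that has fallen behind the remote simply pulls before it pushes:

    # Fetch and merge the latest changes from the remote repository
    git pull origin master

    # Now local commits can be pushed on top of them
    git push origin master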
To make the code usable for others outside of a research group, a good README should be included in the repository: a clear and comprehensive explanation of the concept behind the code, the choices made in developing it, and how to use and modify it. This is also where any permissions or restrictions on usage should be communicated, along with citation and author contact information. Data accompanying the code can also be pushed to the remote repository, ensuring that anyone who clones it receives everything they need to use the code.
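As a rough sketch – the sections below are only suggestions, not a standard – a research-code README might be organised like this:

    Project name and a one-paragraph summary of what the code does
    and the science behind it.

    Installation: required software, dependencies and versions, and
    any setup needed on a new machine.

    Usage: how to run the code, expected inputs and outputs, and how
    to modify it.

    Citation and licence: how to cite the code, any permissions or
    restrictions on usage, and author contact details.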
One great thing about Git is that almost all operations are local – if you are unable to connect to the internet you can still work with version control in Git, including making commits, and then push the changes up to the remote repository later. This is one of many reasons why Git is the most popular VCS. The name ‘Git’ refers to the version control tool itself, whereas GitHub is an online hosting service for Git repositories. With Git, versions are saved as snapshots of the whole repository at the time of a commit, whereas many other VCSs log changes to individual files.
There are many other nuances and features that are very useful for collaborative research coding, but these basic concepts are sufficient for getting up and running. Bitbucket is also worth mentioning – many research groups use this platform instead of GitHub because repositories can be kept private without subscribing to a payment plan, whereas GitHub repositories are public unless paid for.
To publish code, a version of the entire repository should be made immutable and kept separate from the active repository, so that readers and reviewers can always see the precise code that was used to support a particular paper. This is achieved by minting a DOI for a snapshot of the GitHub repository, which requires exporting it to a service such as Zenodo.
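Assuming the repository has already been linked to Zenodo through its GitHub integration (a one-off step on the Zenodo website), the archiving is triggered by publishing a GitHub release, which starts from a tag – the version number here is illustrative:

    # Tag the exact state of the code that supports the paper
    git tag -a v1.0.0 -m "Code version supporting the paper"
    git push origin v1.0.0

    # Creating a release from this tag in the GitHub web interface
    # then triggers Zenodo's archiving step.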
Zenodo will make a copy of the repository and mint a DOI for it. This DOI can then be provided to a journal and will always link to that snapshot of the repository. The authors can therefore continue to push changes and branch the original repository, safe in the knowledge that the published version remains intact and available. This is a great way to make research code transparent and permanent: other users can access and use it, and the authors no longer need to manage files for old papers on their machines and hard drives, or to provide their code and data over email ‘by request’. It also means the authors are not responsible for maintaining a repository indefinitely post-publication – all the relevant code is safely stored at the DOI, even if the repository is later closed down.
The ubiquitous smartphone contains millions of times more computing power than was used to send the Apollo spacecraft to the moon. Increasingly, scientists are repurposing some of that processing power to create low-cost, convenient scientific instruments. In doing so, they are bringing these measurements closer to being feasible for citizen scientists and under-funded professionals, democratizing robust scientific observation. In our new paper in the journal ‘Sensors’, led by Andrew McGonigle (University of Sheffield), we review the development of smartphone spectrometry.
Abstract: McGonigle et al. 2018: Smartphone Spectrometers
Smartphones are playing an increasing role in the sciences, owing to the ubiquitous proliferation of these devices, their relatively low cost, increasing processing power and their suitability for integrated data acquisition and processing in a ‘lab in a phone’ capacity. There is furthermore the potential to deploy these units as nodes within Internet of Things architectures, enabling massive networked data capture. Hitherto, considerable attention has been focused on imaging applications of these devices. However, within just the last few years, another possibility has emerged: to use smartphones as a means of capturing spectra, mostly by coupling various classes of fore-optics to these units with data capture achieved using the smartphone camera. These highly novel approaches have the potential to become widely adopted across a broad range of scientific e.g., biomedical, chemical and agricultural application areas. In this review, we detail the exciting recent development of smartphone spectrometer hardware, in addition to covering applications to which these units have been deployed, hitherto. The paper also points forward to the potentially highly influential impacts that such units could have on the sciences in the coming decades.