Eyes in the Sky 2: Airspace

Just like the land and oceans, the sky is divided into regulated regions. This makes sense, as it prevents unauthorised flights over sensitive or dangerous areas such as airports, military zones, power stations and private land. Knowing the airspace classification is a fundamental prerequisite for making safe and legal flights with an unmanned aerial system (UAS).

In the UK, the Civil Aviation Authority defines airspace classes from A to G. There are specific permissions and restrictions associated with each class, and the classes are mapped on VFR charts.

Class A airspace is the most heavily restricted and is the least relevant for small UAS operators, because only aircraft operating under instrument flight rules (IFR) are permitted to fly – limiting users mainly to commercial and private jets. Generally, Class A starts from 18,000 feet above mean sea level.

There is no Class B airspace in the UK, but it is commonly used to restrict airspace around large airports in the US.

Class C airspace usually extends vertically from 19,500 feet to 60,000 feet. Flight under both instrument and visual flight rules (IFR and VFR) is permitted, but clearance from air traffic control is necessary to enter. It is unlikely that a small UAS operator could end up in Class C airspace for several reasons, not least because it would be very difficult to climb to 19,500 feet!

Class D airspace is also available for VFR and IFR flights with clearance from air traffic control, and at a speed of less than 250 knots when flying below 10,000 feet. Typically, the airspace around aerodromes (any location where flight operations occur) is Class D.

Class E airspace is also available for IFR and VFR use. Aircraft flying under VFR do not need clearance or two-way radio communication with air traffic control to enter but the pilot must comply with instructions from air traffic control.

Class G airspace is uncontrolled, meaning UAS pilots can fly without air traffic control clearance, provided the flight stays within visual line of sight (a maximum of 400 ft vertically and 500 m horizontally from the pilot in command) and complies with the regulations set out in CAP 393 Articles 94 and 95 and in CAP 722. For example, the aircraft must remain at least 50 m from any person, obstacle or vessel not under the direct control of the pilot, and at least 150 m from congested areas or open-air gatherings of more than 1,000 people.

 

 


Eyes in the Sky 1: METAR

I’m currently studying for my CAA permission for commercial operations (PfCO) – what is commonly thought of as the UK drone pilot’s licence. Flying small unmanned aerial systems (SUAS) is an increasingly common part of field science, especially in polar science, where a) scaling in-field observations over space is critical, b) we rely heavily on satellite observations that require sub-pixel validation, and c) it is often hazardous to manually survey areas that can be easily surveyed using a UAS. We can achieve so much more when we have eyes in the sky as well as feet on the ground. Legislation covering SUAS users is also changing rapidly and is likely to (rightly) become much stricter in the near future. I plan to write about several aspects of the PfCO that are relevant to UAS flights in polar regions, partly for interest and partly as a revision tool for myself in preparation for the PfCO assessment!

One of the most important aspects of flying anywhere, and especially in polar regions, is up to date and accurate information about the weather. Airports and many weather stations report current and forecast weather conditions in a condensed format known as METAR, or the slightly more in-depth TAF. METAR stands for Meteorological Aerodrome Report, and TAF stands for Terminal Aerodrome Forecast. As well as being a standard aeronautical system, I think using METAR symbology would make an excellent way to log detailed meteorological observations in metadata for field scientists.

Below is a METAR report from 27th March 2019 for the airport in Longyearbyen, Svalbard:

ENSB 270850Z 13020KT 9999 FEW025 SCT080 M05/M12 Q0994 NOSIG RMK WIND 1400FT 12017KT

 

The METAR starts with a four-letter location code: ENSB is the code for Longyearbyen airport. Then comes the date and time that the report was posted, starting with the day of the month and the time in HHMM format, followed by ‘Z’. The Z indicates that the time is in Coordinated Universal Time, or simply “Zulu”, so this report was posted on 27th March at 0850 UTC. As pointed out by @arwynedwards on Twitter, this is a deviation from the NATO standard format (270850Z MAR19) in that it omits the month and year information. I suspect this is because the high frequency of METAR updates makes this information largely redundant. For recording field meteorology it might be more useful to use the NATO standard – while the month and year of the field work may usually be obvious on an individual project basis, it could be crucial info when collating data from a range of sources.

The third block of characters describes the wind, with the first three digits showing the direction the wind is coming from (in this case 130 degrees, or approximately south-east). The next two digits give the speed (20), and KT shows that the speed is measured in knots.

The next set of four digits shows the visibility in metres. In this case the visibility is greater than the maximum value shown on a METAR, so it is recorded as 9999, which can be interpreted as greater than 10 km.

Next come the cloud conditions, described using a set of three-letter codes offering a qualitative description of the cloud cover, each followed by the cloud height in hundreds of feet. In this case FEW means there are a few clouds, sitting at 2,500 ft (025 × 100). There are also scattered (SCT) clouds sitting at 8,000 ft (080 × 100).

The temperature and dew point are described using M for minus, so here the temperature is -5 C and the dew point is -12 C. When these values are similar (say, within 3 C) we need to worry about mist, fog or precipitation.

Q0994 shows that the pressure is 994 hPa. NOSIG is a flag indicating that no significant changes to these conditions are expected over the forecast period.

The rest of the information in the METAR is classified as “remarks”, as signified by the code RMK. In this case the remark is that the winds aloft at 1400 ft are stronger and coming from a slightly different direction to the winds at the surface: 17 knots from 120 degrees.

Overall, this looks like a good day to fly in terms of high visibility and low chance of precipitation, but the low temperatures will reduce the battery life and risk icing, and the wind speed is just outside of the safe flight envelope for many small UASs including the DJI Mavic and Phantom series. For those reasons I’d call a NO-GO for a small quadcopter flight.

The power of the METAR is that all that information can be conveyed in a simple string of unambiguous characters. They are frequently updated to reflect changing conditions and forecasts. This METAR was sourced from allmetsat.com who provide up to date METAR for over 4000 stations.
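As a revision exercise, the key fields can be pulled out of a METAR string with a short script. This is a rough sketch covering only the fields discussed above (station, time, wind, visibility, temperature and dew point), not an implementation of the full METAR standard, and the field names are my own:

```python
import re

def parse_metar(metar):
    """Pull a few key fields from a simple METAR string.
    A rough sketch only: it assumes the field order used in the
    ENSB example and ignores the cloud, pressure and remark groups."""
    parts = metar.split()
    fields = {"station": parts[0]}
    # day of month and time, e.g. 270850Z -> day 27, 0850 Zulu
    m = re.match(r"(\d{2})(\d{4})Z", parts[1])
    fields["day"], fields["time"] = int(m.group(1)), m.group(2)
    # wind, e.g. 13020KT -> from 130 degrees at 20 knots
    m = re.match(r"(\d{3})(\d{2,3})KT", parts[2])
    fields["wind_dir"], fields["wind_kt"] = int(m.group(1)), int(m.group(2))
    # visibility in metres; 9999 means 10 km or more
    fields["vis_m"] = int(parts[3])
    # temperature/dew point, e.g. M05/M12, where M stands for minus
    for token in parts:
        m = re.match(r"(M?\d{2})/(M?\d{2})$", token)
        if m:
            fields["temp_c"] = int(m.group(1).replace("M", "-"))
            fields["dewpoint_c"] = int(m.group(2).replace("M", "-"))
    return fields

obs = parse_metar("ENSB 270850Z 13020KT 9999 FEW025 SCT080 M05/M12 Q0994 NOSIG")
```

For the Longyearbyen example this returns a wind of 20 knots from 130 degrees and a temperature/dew point spread of 7 C: comfortably outside the fog-risk range, but inside the no-go wind range for many small quadcopters.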

 

AI Adventures in Azure

A lot of my work at the moment requires quite computationally heavy geospatial analysis that stretches the processing capabilities of my laptop. I invested in a pretty powerful machine – i7-7700 processor, 32GB RAM – and sped things up by spreading the load across cores and threads, but it can still be locked up for hours when processing very large datasets. For this reason, I have started exploring cloud computing. My platform of choice is Microsoft Azure. Being new to Azure and cloud computing in general, I thought it would be helpful for me to keep notes of my learning as I climb onboard, and also thought it could be useful to make the notes public for others who might be following the same path.

I’ll be blogging these notes as “Adventures in Azure”. I’m predominantly a Linux user and the notes will focus on Linux virtual machines on Azure. My programming will almost all be in Python. The end-goal is to be proficient with machine learning applied to remote sensing image analysis in the cloud.

 

I’m certain I will find fugly ways to do things and I will be grateful for any suggestions for refinements!

 

1. Setting Up Linux Data Science Virtual Machine

I’m not going to write up notes for this as it was so easy! I created an Azure account with a Microsoft email address, then I chose to use a virtual machine image preloaded with the essentials – Ubuntu, Anaconda (Python 2.7 and 3.5), JupyterHub, PyCharm, TensorFlow and NVIDIA drivers – amongst a range of other useful software designed specifically for data science. Microsoft call it the “Data Science Virtual Machine” and the setup instructions are simple to follow. I opted for a standard NC6 (which has 6 vCPUs and 56GB memory) as this is a significant step up in terms of processing power from my local machine, but comes at an affordable hourly rate.

Once the virtual machine is established, there is still a fair amount of configuring to do before using it for geospatial projects. The next post will contain info about ways to work with Python on the virtual machine.

Upernavik Field Work 2018

2018 saw the Black & Bloom postdocs exploring a new field site in the north western sector of the Greenland Ice Sheet. After two seasons working in the south west near Kangerlussuaq, the team migrated north to investigate dark ice where the melt seasons are shorter and the temperatures lower.

Beautiful Upernavik, viewed from the airport (ph J Cook)

We soon learned that there were additional challenges to working up here beyond the colder weather. Upernavik itself is on a small island in an archipelago near where the ice sheet flows and calves into the sea. While this produces spectacular icebergs, it also means access to the ice sheet is possible only by helicopter. The same helicopter serves local communities elsewhere in the archipelago with food, transport and other essential services. While we were in Upernavik, a huge iceberg floated into the harbour in nearby Inarsuit, threatening the town with the potential for a huge iceberg-induced tsunami. The maritime Arctic weather also played havoc with the flight schedules, and resupplying local communities (rightly) took priority over science charters.

Iceberg near the harbour in Upernavik (ph. J Cook)

These factors combined to prevent us from leaving Upernavik for 3.5 weeks. It seemed like we would never make it onto the ice. However, we finally got a weather window that coincided with heli and pilot availability. With the difficulty of getting on to the ice weighing on our minds, we had to consider the risk of similar difficulties getting back out. We repacked to ensure we had several weeks of emergency supplies, to make sure we would not be flying in to a potential search and rescue disaster.

Once on the ice, we quickly built a camp and started recording measurements. The albedo measurements and paired drone flights went very smoothly, with refined methods developed over the past two seasons. However, we only saw exposed glacier ice for 1.5 days, as continuous snowfall kept it buried for the rest of the season.

Air Greenland’s Bell 212 sling loading our field kit (ph. J Cook)

Overall it was an interesting site, and the important thing is that we can confirm that the algal bloom we studied in the south west is also present in the northern part of the ice sheet, is composed of the same species and also makes the ice dark. We have sampled the mineral dusts too, to see how they compare with the more southern site.

ASD spectra processing with Linux & Python

I’m sharing my workflow for processing and analysing spectra obtained using the ASD Field Spec Pro, partly as a resource and partly to see whether others have refinements or suggestions for improving the protocols. I’m specifically using Python rather than any proprietary software to keep it all open source and transparent, and to keep control over every stage of the processing.

Working with .asd files

By default the files are saved with the extension .asd, which can be read by the ASD software ‘ViewSpec’. The software does allow the user to export the files as ascii using the “export as ascii” option in the dropdown menus. My procedure is to use this option to resave the files as .asd.txt. I usually keep the metadata by selecting the header and footer options; however, I deselect the option to output the x-axis because it is common to all the files and easier to add once later on. I choose to delimit the data using a comma to enable the use of the Pandas ‘read_csv’ function later.

To process and analyse the files I generally use the Pandas package in Python 3. To read the files into Pandas I first rename the files using a batch rename command in the Linux terminal:

cd /path/folder/

rename -v "s/\.asd\.txt$/.txt/" *.asd.txt

Then I open a Python editor – my preference is to use the Spyder IDE that comes as standard with an Anaconda distribution. The pandas read_csv function can then be used to read the .txt files into a dataframe. Put this in a loop to add all the files as separate columns in the dataframe…

import os

import pandas as pd

spectra = pd.DataFrame()

filelist = os.listdir('/path/folder/')

for file in filelist:

    spectra[file] = pd.read_csv('/path/folder/' + file, header=None, skiprows=0).iloc[:, 0]

If you chose to add any header information to the file exported from ViewSpec, you can ignore it by skipping the appropriate number of rows in the read_csv keyword argument ‘skiprows’.
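Because the x-axis was deselected at export, the wavelength axis needs adding back once the dataframe is assembled. A minimal sketch, assuming the Field Spec Pro's standard 350-2500 nm range sampled at 1 nm intervals (2151 bands); worth checking against your own instrument configuration:

```python
import numpy as np
import pandas as pd

# dummy dataframe standing in for the spectra loaded above: 2151 bands, one site
spectra = pd.DataFrame({"site1": np.ones(2151)})

# the ASD Field Spec Pro covers 350-2500 nm, interpolated to 1 nm intervals,
# so a single wavelength axis serves every exported file
spectra.index = np.arange(350, 2501)
spectra.index.name = "wavelength_nm"
```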

Usually each acquisition comprises numerous individual replicate spectra. I usually have 20 replicates as a minimum and then average them for each sample site. Each individual replicate has its own filename with a sequentially increasing number (site1…00001, site1…00002, site1…00003 etc). My way of averaging these is to cut the extension and ID number from the end of the filenames, so that the replicates from each sample site are identically named. Then the pandas function ‘groupby’ can be used to identify all the columns with equal names and replace them with a single column containing the mean of all the replicates.

import numbers

filenames = []

for file in filelist:

    file = str(file)

    file = file[:-10]  # cut the ID number and extension (last 10 characters)

    filenames.append(file)

# rename dataframe columns according to filenames (DF is the dataframe of spectra built above)

DF.columns = filenames

# average spectra from each site by grouping columns with identical names

DF2 = DF.transpose()

DF2 = DF2.groupby(by=DF2.index, axis=0).apply(lambda g: g.mean() if isinstance(g.iloc[0, 0], numbers.Number) else g.iloc[0])

DF = DF2.transpose()

Then I plot the dataset to check for any errors or anomalies, and save the dataframe as one master file organised by sample location:

import matplotlib.pyplot as plt

spectra.plot(figsize=(15, 15))
plt.ylim(0, 1.2)

spectra.to_csv('/media/joe/FDB2-2F9B/2016_end_season_HCRF.csv')

Common issues and workarounds…

Accidentally misnamed files

During a long field season I sometimes forget to change the date in the ASD software for the first few acquisitions, and then realise I have a few hundred files to rename to reflect the actual date. This is a total pain, so here is a Linux terminal command to batch rename the ASD files to correct the date at the beginning of the filename.

e.g. to rename all files in folder from 24_7_2016 accidentally saved with the previous day’s date, run the following command…

cd /path/folder/

rename -v "s/23_7/24_7/" *

Interpolating over noisy data and artefacts

On ice and snow there are known wavelengths that are particularly susceptible to noise due to water vapour absorption (e.g. near 1800 nm), and there may also be noise at the upper and lower extremes of the spectral range measured by the spectrometer. Also, where a randomising filter has not been used to collect spectra, there can be a step feature present in the data at the crossover point between the internal arrays of the spectrometer (especially 1000 nm). This is due to the spatial arrangement of fibres inside the fibre optic bundle. Each fibre measures specific wavelengths, meaning that if the surface is not uniform, certain wavelengths are oversampled and others undersampled for different areas of the ice surface. The step feature is usually corrected by raising the NIR (>1000 nm) section to meet the VIS section (see Painter, 2011). The noise in the spectrum is usually removed and replaced with interpolated values. I do this in Pandas using the following code…

import numpy as np

for i in DF.columns:

    # calculate correction factor (raises NIR to meet VIS; see Painter 2011)

    corr = DF.loc[650, i] - DF.loc[649, i]

    DF.loc[650:2149, i] = DF.loc[650:2149, i] - corr

    # interpolate over instabilities around 1800 nm

    DF.loc[1400:1650, i] = np.nan

    DF[i] = DF[i].interpolate()

    # smooth the interpolated section with a rolling mean, then re-interpolate any gaps

    DF.loc[1400:1600, i] = DF.loc[1400:1600, i].rolling(window=50, center=False).mean()

    DF[i] = DF[i].interpolate()

The script is here for anyone interested… https://github.com/jmcook1186/SpectraProcessing

CASPA at EGU 2018

The EGU annual meeting in Vienna is one of the major events in the earth science calendar, where the latest ideas are aired and discussed and new collaborations forged. My talk this year was in the “Remote Sensing of the Cryosphere” session. Here’s an overview:

Albedo is a primary driver of snow melt. For clean snow and snow with black carbon, radiative transfer models do an excellent job of simulating albedo, yet there remain aspects of snow albedo that are poorly understood. In particular, current models do not take into account the algal cells that grow and dramatically discolour snow and ice in some places (except our 1-D BioSNICAR model), and few take into account changes in albedo over space and time.

This led me to wonder about using cellular automata as a mechanism for distributing albedo modelling using radiative transfer over three spatial dimensions and time, and also enabling a degree of stochasticity to be introduced to the modelling (which is certainly present in natural systems).

Cellular automata are models built on a grid composed of individual cells. These individual cells update as the model progresses through time according to some function – usually a function of the values of the neighbouring cells. Cellular automata have been used extensively to study biological and physical systems: for example, Conway’s Game of Life, Lovelock’s DaisyWorld and Bak’s Sandpile Model not only gave insight into particular processes, but arguably changed the way we think about nature at the most fundamental level. Those three models were epoch-changing for the concepts of complexity and chaos theory.

An implementation of Conway’s Game of Life, showing the grid updating in a complex fashion, driven by simple rules, by Jakub Konka
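A minimal Game of Life can be written in a few lines of numpy. This is my own sketch of the standard rules (not the implementation pictured above): each cell counts its eight neighbours, survives with two or three, and is born with exactly three:

```python
import numpy as np

def life_step(grid):
    """One update of Conway's Game of Life on a wrap-around grid."""
    # sum the eight neighbours of every cell using shifted copies of the grid
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1)
        for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # live cells with 2 or 3 neighbours survive; dead cells with exactly 3 are born
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(np.uint8)

# a "blinker": three live cells that oscillate between a row and a column
grid = np.zeros((5, 5), dtype=np.uint8)
grid[2, 1:4] = 1
next_grid = life_step(grid)
```

Calling life_step repeatedly on the blinker shows the oscillation between a horizontal and a vertical bar: simple rules, complex behaviour.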

For the snowpack, I developed a model I am calling CASPA, an acronym for Cellular Automaton for SnowPack Albedo. CASPA draws on a cellular automaton approach with a degree of stochasticity to predict changes in snowpack biophysical properties over time.

At each timestep the model updates the biomass of each cell. This happens according to a growth model (an initial inoculum doubles in biomass). This biomass has a user-defined probability of growing in situ (darkening that cell) or spreading to a randomly selected adjacent cell. Once this has occurred, the radiative transfer model BioSNICAR is called and used to predict the albedo and the energy absorbed per vertical layer. The subsurface light field is visualised as the planar intensity per vertical layer, per cell. The energy absorbed per layer is also used to define a temperature gradient, which in turn drives a grain evolution model. In the grain evolution model, wet and dry grain growth can occur, along with melting, percolation and refreezing of interstitial water. This is consistent with the grain evolution model in the Community Land Model. The new grain sizes are fed back into SNICAR ready for the albedo calculation at the next timestep.
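The growth and spreading step can be sketched as a simple cellular automaton. This is a toy illustration of the idea with made-up parameter names (p_spread, inoculum), not the published CASPA code, and it omits the calls to BioSNICAR and the grain evolution model:

```python
import numpy as np

rng = np.random.default_rng(0)

def caspa_step(biomass, p_spread=0.3, inoculum=1.0):
    """One timestep of a toy CASPA-style growth rule: occupied cells double
    their biomass, and each has probability p_spread of seeding a randomly
    chosen adjacent cell. A sketch of the idea, not the published model."""
    new = biomass * 2.0  # in-situ doubling darkens the cell
    rows, cols = biomass.shape
    neighbours = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for r, c in zip(*np.nonzero(biomass)):
        if rng.random() < p_spread:
            dr, dc = neighbours[rng.integers(4)]
            rr, cc = (r + dr) % rows, (c + dc) % cols  # wrap at the edges
            if new[rr, cc] == 0:
                new[rr, cc] = inoculum  # seed a fresh bloom next door
    return new

# start from a single inoculated cell in the centre of a 20 x 20 grid
grid = np.zeros((20, 20))
grid[10, 10] = 1.0
for _ in range(5):
    grid = caspa_step(grid)
```

Because spreading is probabilistic, repeated runs from the same initial grid diverge, giving different bloom patterns even with identical initial conditions.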

At the same time, inorganic impurities such as dust and soot can be incorporated into the model. These can be constant throughout the model run, or can vary according to a user-defined scavenging or deposition rate. They can also melt out from beneath, with the inorganic impurities rising up through successive vertical layers per timestep.

The 2D albedo map output by CASPA showing the albedo decline due to an algal bloom growing on the snowpack

In this way, the albedo of a snowpack can be predicted in three spatial dimensions plus time. Taking the incoming irradiance into account, the radiative forcing can be calculated at each vertical depth in each cell per timestep. Furthermore, the energy available as photosynthetically active radiation in each layer can be quantified. Ultimately, these values can feed back into the growth model. Coupling the CASPA scheme with a sophisticated ecological model could therefore be quite powerful.

By default the model outputs a 2D albedo map and a plot of biomass against albedo. It is interesting to realise that the subtle probabilistic elements of the cellular model can lead to drastically different outcomes for the biomass and albedo of the snowpack even with identical initial conditions. This is also true of natural systems and the idea that an evolving snowpack can be predicted using a purely deterministic model seems, to me, erroneous. There are interesting observations to make about the spatial ecology of the system. Even this simplified system can runaway into dramatic albedo decline or almost none. It makes me wonder about natural snowpacks and the Greenland dark zone – how much of the interannual variation emerges from internal stochasticity rather than being a deterministic function of meteorology or glaciology?

A plot of albedo and biomass against time for CASPA. Each individual run is presented as a dashed line, the mean of all runs is represented as the solid line. The divergence in evolutionary trajectory between individual runs is astonishing since these were all run with identical initial conditions – a result of emergent complexity and subtle imbalances in the probabilistic functions in the model.

In terms of quantifying biological effects on snow albedo, CASPA can be run with the grain evolution and inorganic impurity scavenging models turned ON or OFF. Comparing the albedo reduction taking into account the physical evolution of the snow with that when the snow physics remain constant provides an estimate of the indirect albedo feedbacks and the direct albedo reduction due to the algal cells.

This modelling approach opens up an interesting opportunity space for remote sensing in the cryosphere. In parallel to this modelling I have been working on a supervised classification scheme for identifying various biological and non-biological ice surface types using UAV and satellite remote sensing products. Coupling this scheme with CASPA offers an opportunity to upsample remote sensing imagery in space and time, or to set the initial conditions for CASPA using real aerial data and then experiment with various future scenarios. At the moment I lack any UAV data for snow with algal patches to actually implement the workflow, but it is proven using multispectral UAV data from bare ice on the Greenland Ice Sheet. When I obtain multispectral data for snow with algal blooms, it will be possible to automate the entire pipeline: loading the image, classifying it using a supervised classifier, and converting it into an n-dimensional array that can be used as an initial state for the CASPA cellular automaton, whose conditions can then be tweaked to experiment with various environmental scenarios.

Therefore, the limiting factor for CASPA at the moment is availability of multispectral aerial data and field spectroscopy for training data for algal blooms on snow. In the spirit of open science and to try to stimulate a development community, I have made this code 100% open and annotated despite being currently unpublished, and I’d be delighted to receive some pull requests!

In summary, I suggest that coupling radiative transfer with cellular automata, and potentially remote sensing imagery, is a promising way to push albedo modelling forwards into spatial and temporal variation, and an interesting way to build a degree of stochasticity into our albedo forecasting and ecological modelling.