Auto-align time-series of spherical images

From time to time the spherical Theta S camera used in my virtualforest.io project needs maintenance, such as a reset after a power outage or lens cleaning. These interventions keep the quality of the recorded images consistent, but sometimes cause a misalignment from one image to the next.

Using the camera's internal gyroscope data one can correct distortions of the horizon caused by the camera's inclination. Sadly, the included digital compass data is mostly unreliable without an external GPS connected. With no sensor data to rely on, the challenge remains to align these images automatically.

Two spherical images with an offset along the x-axis.

In the above images corrections were made to straighten the horizon. Yet, there is still a clear shift along the x-axis of the images (or a rotation around the vertical axis of the camera). Although the individual images are fine, they can’t be used in a time series or a movie as their relative position changes. Due to this misalignment it also becomes very hard to monitor a specific part of the image for research purposes. One solution would be to manually find and correct these shifts. But, with ~100 GB of virtualforest.io data on file (2017-12-25) this is not a workable solution.

A standard way of dealing with misaligned images that are only translated (shifted along the x and y axes) is phase correlation. Phase correlation is based upon aligning the phase component (hence the name) of a discrete Fourier transform of the images. A Fourier transform expresses (image) data as a sum of sine waves, each with an amplitude and frequency, together with the relative position of these waves (their phase). In more technical terms, it translates data from the time / space domain into the frequency / phase domain, which among other things speeds up convolution calculations or allows filtering data based on frequency (noise reduction). The use of a Fourier transform in this case can be seen as a way to speed up calculating the cross-correlation between two images.
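To make this concrete, here is a minimal, numpy-only sketch of phase correlation. It is not the exact code used for virtualforest.io; the function name and the sign convention of the returned offsets are my own choices.

```python
import numpy as np

def phase_correlation(reference, target):
    """Return the (dy, dx) offset that, applied to `target` with np.roll,
    aligns it with `reference`."""
    # Fourier transforms of both (grayscale) images
    f_ref = np.fft.fft2(reference)
    f_tgt = np.fft.fft2(target)

    # normalised cross-power spectrum: only the phase difference is retained
    cross_power = f_ref * np.conj(f_tgt)
    cross_power /= np.abs(cross_power) + 1e-12

    # the inverse transform peaks at the translation between the two images
    correlation = np.abs(np.fft.ifft2(cross_power))
    dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)

    # peaks in the upper half of each axis correspond to negative shifts
    if dy > reference.shape[0] // 2:
        dy -= reference.shape[0]
    if dx > reference.shape[1] // 2:
        dx -= reference.shape[1]
    return dy, dx
```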

A selection of an image to use in determining lateral shifts (along the x-axis).

In general, the phase correlation algorithm is fairly robust to noise, but the self-similarity of vegetation corrupts the algorithm nonetheless. As such, I decided to use only a portion of the original spherical images to determine lateral shifts. I extracted the stems of the trees out of the image, as these provide the most information with respect to lateral shifts. In a way the stems are a barcode representing the orientation of the camera. (In other use cases, where more man-made structures are included, this extra step is probably not needed.)

Using only these stem barcode sections I was able to successfully align a full season of images. The result is an image time series which can be stacked for movies, analysed over the same portion of the image, or used in interactive displays without any visible jumps!

The same two spherical images aligned using an offset calculated with phase correlation.
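Building on the hypothetical phase_correlation function above, the crop-and-align step could look something like the sketch below. The stem band fractions, file handling and function names are assumptions for illustration, not the production workflow.

```python
from PIL import Image
import numpy as np

def align_to_reference(reference_path, target_path, band=(0.55, 0.75)):
    """Align `target` to `reference` using only the tree-stem band."""
    ref = np.array(Image.open(reference_path).convert("L"), dtype=float)
    tgt_img = Image.open(target_path)
    tgt = np.array(tgt_img.convert("L"), dtype=float)

    # rows dominated by tree stems; the fractions of image height are a guess
    top, bottom = (int(f * ref.shape[0]) for f in band)
    _, dx = phase_correlation(ref[top:bottom, :], tgt[top:bottom, :])

    # spherical images wrap around, so a simple roll applies the x offset
    aligned = np.roll(np.array(tgt_img), dx, axis=1)
    return Image.fromarray(aligned), dx
```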

An interactive temporal spherical display covering one year of virtualforest.io imagery can be seen at this link:

https://my.panomoments.com/u/khufkens/m/vr-forest-one-year

 

Why scientists should learn from Aaron Swartz.

“He wanted openness, debate, rationality and critical thinking and above all refused to cut corners.” — Lawrence Lessig

Aaron Swartz helped draft the RDF Site Summary (RSS) standard at age 13 and was in many respects a prodigy. As Lawrence Lessig wrote about Aaron: “He wanted openness, debate, rationality and critical thinking and above all refused to cut corners.” Sadly, he took his own life after particularly severe legal action against him over copyright infringement. The documentary The Internet’s Own Boy: The Story of Aaron Swartz provides a homage to his life and work.

He left a legacy of writings whose clarity and brilliance I’ve rarely encountered elsewhere. This is further contrasted by the young age at which many of these blog posts and essays were written. Few people come close to the way Aaron articulated his ideas in writing.

In a series of blog posts I’ll summarize some of his ideas with respect to technology, politics and media within the context of contemporary scientific (ecological) research. The fact that his ideas and his vision remain key to what I consider solid scientific practice reflects his genius and insight.

release late, release rarely (release early, release often)

In a blog post written on July 5, 2006 (release late, release rarely) Aaron outlines how to develop software. Yet, this essay could just as well apply to scientific research, going from idea to publication.

Similar to the software (pet) projects that are the subject of Aaron's post, science projects often have strong emotions attached to them. While these emotions are genuine, the content or quality of the research might not pass muster.

“When you look at something you’re working on … you can’t help but see past the actual thing to the ideas that inspired it… But when others look at it, all they see is a piece of junk.”

In science, this basically means that you should do your homework and not oversell your research. In peer review, reviewers will see past such claims and, rightfully so, reject manuscripts because of it. So when you publish, release late and aim for quality, not quantity. This will raise the chance of getting your work published, while at the same time increasing the likelihood of stumbling on errors yourself. Raising the true quality, or even just making it look good, often highlights inconsistencies you can’t move past in good conscience.

“Well, it looks great but I don’t really like it” is a lot better than “it’s a piece of junk”.

Releasing work late means that no one knows what you are doing and you might miss out on key feedback. So, informally, research benefits from releasing early.

“Still, you can do better. Releasing means showing it to the world. There’s nothing wrong with showing it to friends or experts or even random people in a coffee shop. The friends will give you the emotional support you would have gotten from actual users, without the stress. The experts will point out most of the errors the world would have found, without the insults. And random people will not only give you most of the complaints the public would, they’ll also tell you why the public gave up even before bothering to complain.”

Releasing early means that you get valuable feedback that might otherwise not make it into a high quality paper (released late). This feedback does not only come from experts but, as correctly observed, from everyone within a larger (research) community.

In short, scientific communication and progress require a split approach: manuscripts should be released as late as possible, with ideas mature and solidly supported by open code and data, which in turn should be released as early as possible.

Note: Although the argument can be made that conferences serve the purpose of “early releases” I have yet to see a conference where people present truly early work. Most of the time either published or nearly published work is presented.

 

Want to get published, show me your code.

All too often one is still confronted with a statement at the end of the manuscript reading: “Code is available from the authors at reasonable request”.

The last few years there has been a strong focus on open data and open access journals. This is in part stimulated by a reproducibility crisis in science, most visibly in the biomedical sciences. However, the strong focus on data and journal access alone is misplaced.

Many fields, such as ecology and remote sensing, increasingly rely on ever more complex software (models) and ever larger amounts of data. Yet there isn’t the same demand for releasing code and / or open coding practices. All too often one is still confronted with a statement at the end of the manuscript reading: “Code is available from the authors at reasonable request”.

What “reasonable” means is often unclear, but it clearly does not stimulate reproducibility (e.g., a critical request might not be deemed “reasonable”). It also actively interferes with the task of reviewers, who must assume (in good faith) that the analysis was correctly executed. However, given the amount of data (sources) used as well as the number of lines of code produced, errors are far from unlikely.

With services such as GitHub and Docker containers available, any study heavy on the modelling side, and which relies on open data, should be required to be fully reproducible; if the full dataset is prohibitive in size, or ethically undesirable to share, a small worked example would suffice.

Moreover, when it comes to model comparisons there should be an active effort to formalize them in community-driven frameworks (e.g., an R package, a python package, Docker images, or a formalized workflow). Such rigorous efforts are required to truly assess model performance and quantify model errors at all levels (from source data to model structure). Alas, such efforts are few and far between in ecology, as are open and good coding practices.

The lack of this transparency is in part fuelled by a gatekeeper effect. It is profitable not to share code, just as it is profitable not to share data. Not sharing code puts other scientists at a disadvantage, as similar studies or incremental advances upon the original code can’t easily be made. Given that not sharing code undermines any reproducibility and actively slows down scientific progress, I’m inclined not to consider studies fit for publication without accessible source code.

note 1: The active sharing of algorithms is far more common in computer science and physics.

note 2: I got pushback on the notion that there is a gatekeeper effect in science. Yet the fact that a “reasonable request” is mentioned, not merely any request, implies a gatekeeper effect. It is up to the authors to decide how and to whom access to the code (and applications thereof) is granted. But what about licensing? Although licensing might require attribution (CC-BY), release under the same license (GPL) or prohibit commercial applications (CC-NC), it still guarantees access to the code to begin with.

 

Google Earth Engine time series subset tool

Google Earth Engine (GEE) has provided a way to massively scale a lot of remote sensing analyses. However, more often than not time series analyses are carried out on a site-by-site basis and scaling to a continental or global level is not required. Furthermore, some applications are hard to implement on GEE, or prototyping does not benefit from direct spatial scaling. In short, working on a handful of reference pixels locally is often still faster than running everything on Google's servers. Here I sidestep the handling of large amounts of data (although sometimes helpful) to get to single-location time series subsets with a GEE hack.

I wrote a simple python script / library called gee_subset.py which allows you to extract time series for a particular location or its neighbourhood. This tool is similar to my MODIS subset and daymetr tools, which facilitate the extraction of time series of remote sensing and climatological data respectively.

My python script expands this functionality to all available GEE products, which include high resolution Landsat and Sentinel data, climatological data such as Daymet, and even representative concentration pathway (RCP) CMIP5 model runs.
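To illustrate the underlying idea, the sketch below queries a single point directly with the Earth Engine Python API rather than through gee_subset.py itself; the collection ID, band names and coordinates are placeholders and not taken from the actual demo script.

```python
import ee

ee.Initialize()

point = ee.Geometry.Point([-72.17, 42.54])            # placeholder coordinates
collection = (ee.ImageCollection("LANDSAT/LC08/C01/T1_SR")  # assumed collection ID
              .filterDate("2016-01-01", "2017-12-31")
              .select(["B4", "B5"]))                  # red, NIR

# getRegion() returns a table: id, longitude, latitude, time, band values
rows = collection.getRegion(point, 30).getInfo()
header, data = rows[0], rows[1:]

# derive NDVI from the red and NIR columns, skipping missing observations
red_i, nir_i = header.index("B4"), header.index("B5")
ndvi = [(r[nir_i] - r[red_i]) / (r[nir_i] + r[red_i])
        for r in data
        if r[red_i] is not None and r[nir_i] is not None
        and (r[nir_i] + r[red_i]) != 0]
```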

Compared to the ORNL DAAC MODIS subset tool, performance is blazing fast (thank you Google). An example query, calling the python script from R, downloaded two years (~100 data points) of Landsat 8 Tier 1 data for two bands (red, NIR) in ~8 seconds flat. Querying a larger footprint (1×1 km) only creates a small overhead (a 13-second query). The resulting figure for the point location, with the derived NDVI values, is shown below. The demo script to recreate this figure is included in the example folder of the GitHub repository.

NDVI values from Landsat 8 Tier 1 scenes. The black line depicts a loess fit to the data, with the grey envelope representing the standard error.

Level Theta S images

My Virtual Forest project is still running strong and generates tons of spherical images (currently ~50 GB). However, the post on which the camera sits is not perfectly level. The Theta S camera normally compensates for this using an internal gyroscope which detects the pitch and roll of the camera. Yet, when downloading images directly from the camera no adjustments are made and the pitch and roll data are merely recorded in the EXIF data of the image.

As such, I wrote a small bash script which rectifies (levels the horizon in) Theta S spherical images using this internal EXIF data. It is an alternative implementation of the THETA EXIF Library by Regen, and I use his cute Lama test images for reference. All credit for the funky images goes to Regen. A conceptual sketch of the remapping is given below, followed by a quick install guide to my script. I hope it helps speed up people’s Theta S workflow.
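The sketch below outlines, in Python with numpy and scipy, what such a rectification boils down to: remapping the equirectangular image given the pitch and roll angles. It is not the bash / exiftool / POVRay implementation of my script; the rotation conventions are assumptions, and reading the angles from the EXIF data is omitted.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def rectify(img, pitch_deg, roll_deg):
    """Level the horizon of an equirectangular (H x W x channels) image."""
    h, w = img.shape[:2]

    # longitude / latitude grid of the output (levelled) image
    lon = (np.arange(w) / w - 0.5) * 2 * np.pi      # [-pi, pi)
    lat = (0.5 - np.arange(h) / h) * np.pi          # (pi/2, -pi/2]
    lon, lat = np.meshgrid(lon, lat)

    # unit viewing direction for every output pixel
    x = np.cos(lat) * np.cos(lon)
    y = np.cos(lat) * np.sin(lon)
    z = np.sin(lat)
    vec = np.stack([x, y, z]).reshape(3, -1)

    # rotation matrices for pitch (around y) and roll (around x); the sign
    # and order of the rotations are assumptions, not the camera's convention
    p, r = np.radians(pitch_deg), np.radians(roll_deg)
    rot_pitch = np.array([[np.cos(p), 0, np.sin(p)],
                          [0, 1, 0],
                          [-np.sin(p), 0, np.cos(p)]])
    rot_roll = np.array([[1, 0, 0],
                         [0, np.cos(r), -np.sin(r)],
                         [0, np.sin(r), np.cos(r)]])

    # rotate the viewing directions back into the tilted camera frame
    vx, vy, vz = (rot_roll @ rot_pitch @ vec).reshape(3, h, w)

    # convert back to source pixel coordinates
    src_lon = np.arctan2(vy, vx)
    src_lat = np.arcsin(np.clip(vz, -1, 1))
    col = (src_lon / (2 * np.pi) + 0.5) * w
    row = (0.5 - src_lat / np.pi) * h

    # resample each channel; 'wrap' handles the horizontal seam
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        out[..., c] = map_coordinates(img[..., c], [row, col],
                                      order=1, mode='wrap')
    return out
```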

Install

Download, fork or copy-paste the script from my GitHub repository to your machine and make it executable.

Use

Running the script on an image file (e.g. image.jpg) will rectify it and output a new file called image_rectified.jpg.

Visual comparison between my results and those of Regen’s python script shows good correspondence.

Requirements

The script depends on a working copy of exiftool, ImageMagick and POVRay. These tools are commonly available in most Linux distros, and can be installed on OSX using tools such as Homebrew. I lack an MS Windows system, but the script should be easily adjusted to offer similar functionality.