Jungle Rhythms made it into The Guardian

A cache of decaying notebooks found in a crumbling Congo research station has provided unexpected evidence with which to help solve a crucial puzzle – predicting how vegetation will respond to climate change. . . . (by Dan Grossman)

My Jungle Rhythms project has made some waves as of late. It sparked the interest of Dr. Dan Grossman, a science journalist, and his nice summary of all the Jungle Rhythms work was published in The Guardian. As a result, IFLScience picked it up as well. The response, especially in the comments section of The Guardian, was really positive. I'm happy to see some global exposure for the project, and for the larger context and importance of similar work. I also hope that this exposure might bring about more funding to safeguard historical collections and to build capacity in DR Congo in this context.

reviz.in – peer-review annotations with hypothes.is

A few months ago I was flooded with review requests, and I figured it might be time to look around for solutions and code something up to let me annotate peer-review PDFs easily and generate a review report with the click of a button (as proposed years ago).

Enter Hypothes.is. This initiative started a few years ago to facilitate the semantic or annotated web: a way to annotate web pages independently of the original creator. Hypothes.is did exactly what I needed in order to annotate any given PDF (including locally stored files). However, I could not extract the data easily in a standardized way. In addition, the standard mode of the Hypothes.is client is public, with personal groups being private. In short, although the whole framework had all the pieces, the output wasn't optimal for peer review, and could even be dangerous to reputations if reviews were accidentally leaked to the web.

As such, I created Reviz.in, a simple hack of the original Hypothes.is client and Google Chrome extension which makes sure you can't escape the group holding your peer-review notes, and which generates a nice review report (see image below). In addition, I added a fancy icon and renamed the original labels (although not consistently) to differentiate my copy from the original interface and avoid confusion. I hope that over time this functionality will be provided by the original Hypothes.is client; in the meantime you can read more about the installation process on the Reviz.in website:

http://reviz.in, or download the Google Chrome Extension.

I hope this simple hack will help people speed up their review process and free up some time. I also hope that publishers will take note, as their lack of innovation on this front is rather shameful.

Google Earth Engine time series subset tool

Google Earth Engine (GEE) has provided a way to massively scale up a lot of remote sensing analysis. However, more often than not time series analyses are carried out on a site-by-site basis, and scaling to a continental or global level is not required. Furthermore, some applications are hard to implement on GEE, or prototyping does not benefit from direct spatial scaling. In short, working locally on a handful of reference pixels is often still faster than using Google's servers. With the GEE hack below I sidestep the handling of large amounts of data (although sometimes helpful) and get straight to single-location time series subsets.

I wrote a simple Python script / library called gee_subset.py which allows you to extract time series for a particular location or its neighbourhood. This tool is similar to my MODIS subset and daymetr tools, which facilitate the extraction of time series of remote sensing and climatological data, respectively.

My Python script expands this functionality to all available GEE products, which include high-resolution Landsat and Sentinel data, climatological data such as Daymet, and even representative concentration pathway (RCP) CMIP5 model runs.

Compared to the ORNL DAAC MODIS subset tool, performance is blazing fast (thank you, Google). An example query, calling the Python script from R, downloaded two years (~100 data points) of Landsat 8 Tier 1 data for two bands (red, NIR) in ~8 seconds flat. Querying a larger footprint (1×1 km) only adds a small overhead (~13 seconds). The resulting figure for the point location, with the derived NDVI values, is shown below. The demo script to recreate this figure is included in the example folder of the GitHub repository.
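For those curious what such a query looks like under the hood, here is a minimal sketch using the Earth Engine Python API directly (not the gee_subset.py interface itself); the collection ID, band names, coordinates and dates are assumptions for illustration only, and you need an authenticated Earth Engine account.

import ee
import pandas as pd

# assumes the earthengine API has been authenticated beforehand
ee.Initialize()

# hypothetical point of interest (lon, lat); buffer it, e.g. point.buffer(500),
# to query a larger footprint instead of a single pixel
point = ee.Geometry.Point([-71.2874, 44.0646])

# Landsat 8 surface reflectance, red (B4) and NIR (B5), two years of data
col = (ee.ImageCollection("LANDSAT/LC08/C01/T1_SR")
       .filterDate("2016-01-01", "2017-12-31")
       .select(["B4", "B5"]))

# getRegion() returns a header row followed by one row per scene
raw = col.getRegion(point, scale=30).getInfo()
df = pd.DataFrame(raw[1:], columns=raw[0])

# coerce masked (None) values to NaN, convert time stamps, compute NDVI
df[["B4", "B5"]] = df[["B4", "B5"]].apply(pd.to_numeric)
df["date"] = pd.to_datetime(df["time"], unit="ms")
df["ndvi"] = (df["B5"] - df["B4"]) / (df["B5"] + df["B4"])
print(df[["date", "ndvi"]].head())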

NDVI values from Landsat 8 Tier 1 scenes. The black line depicts a loess fit to the data, with the gray envelope representing the standard error.

WG3 CLIMO COST action Short Term Scientific Mission

The CLImate Smart Forestry in MOuntain Regions (CLIMO) COST action focuses on adaptation to climate change through climate-smart forestry practices. The CLIMO COST action is currently looking for candidates for short-term scientific missions (STSMs). These missions are aimed at early-career scientists who want to work within the context of the COST action.

In particular, working group (WG) 3, which focuses on technological aspects of measuring forest processes, is looking for interested candidates to study wireless technology and sensor networks (e.g. canopy temperature, visual parameters through phenocams, etc.) as well as physiological aspects of forest disturbances, such as (persistent) droughts, using stable isotope or other dendrochronological measurements. There are also opportunities to work with existing data in the context of data visualization and the development of web-based tools to monitor canopy phenology responses to climate change. Other topics within the framework of WG3 are also welcome. Candidates will work out of INRA Bordeaux under the guidance of Dr. Lisa Wingate (WG3 lead) and myself.

Currently there are two calls in 2017 (see below), with the deadline for the first call approaching fast. Apply while you still can!

Calls for 2017

  • First STSM call submission deadline: 24 July 2017
    • Call opens: 24 June 2017
  • Second STSM call submission deadline: 15 October 2017
    • Call opens: 15 September 2017

Application procedure

Full details on the application procedure can be found on the STSM webpage of the CLIMO COST action.

Level Theta S images

My Virtual Forest project is still running strong and generates tons of spherical images (currently ~50 GB). However, the post on which the camera sits is not perfectly level. The Theta S camera normally compensates for this using an internal gyroscope which detects the pitch and roll of the camera. Yet, when downloading images directly from the camera, no adjustments are made and the pitch and roll are merely recorded in the image's EXIF data.

As such I wrote a small bash script which rectifies (levels the horizon of) Theta S spherical images using this internal EXIF data. It is an alternative implementation of the THETA EXIF Library by Regen, and I use his cute Lama test images for reference. All credit for the funky images goes to Regen. Below is a quick install guide for using my script. I hope it helps speed up people's Theta S workflow.

Install

Download, fork, or copy-paste the script from my GitHub repository to your machine and make it executable.
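For example, assuming you saved the script as theta_rectify.sh (the file name is illustrative, use whatever you called it):

chmod +x theta_rectify.sh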

Use
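Assuming the same theta_rectify.sh file name as above, a typical call looks like this:

./theta_rectify.sh image.jpg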

The above command will rectify the image.jpg file and output a new file called image_rectified.jpg.

A visual comparison between my results and those of Regen's Python script shows good correspondence.

Requirements

The script depends on working copies of exiftool, ImageMagick and POVRay. These tools are commonly available in most Linux distros, and can be installed on OS X using a package manager such as Homebrew. I lack an MS Windows system, but the script should be easy to adjust to provide similar functionality there.
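On OS X with Homebrew installed, something along these lines should pull in the dependencies (formula names may change over time, so check them if the command fails):

brew install exiftool imagemagick povray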