A few months ago I was flooded with review requests, and I figured it might be time to look around for solutions and code something up that would allow me to annotate peer-review PDFs easily and generate a review report with the click of a button (as proposed years ago).
Enter Hypothes.is. A few years ago this initiative started to facilitate the semantic, or annotated, web: a way to annotate web pages independently of the original creator. Hypothes.is did exactly what I needed to annotate any given PDF (including locally stored files). However, I could not extract the data easily in a standardized way. In addition, the standard mode for the Hypothes.is client is a public one, with personal groups being private. In short, although the whole framework had all the pieces, the output wasn't optimal for peer review, if not outright dangerous to reputations should a review accidentally leak to the web.
As such, I created Reviz.in, a simple hack of the original Hypothes.is client and Google Chrome extension which makes sure you can't escape the private group holding your peer-review notes, and which generates a nice review report (see image below). In addition, I added a fancy icon and renamed the original labels (though not consistently) to differentiate the interface from the original and avoid confusion. I hope that over time this functionality will be provided by the original Hypothes.is client; in the meantime you can read more on the installation process on the Reviz.in website.
I hope this simple hack will help people speed up their review process and free up some time. I also hope that publishers will take note, as their lack of innovation on this front is rather shameful.
Google Earth Engine (GEE) has provided a way to massively scale a lot of remote sensing analysis. However, more often than not, time series analyses are carried out on a site-by-site basis, and scaling to a continental or global level is not required. Furthermore, some applications are hard to implement on GEE, or prototyping does not benefit from direct spatial scaling. In short, working on a handful of reference pixels locally is often still faster than Google's servers. My GEE hack therefore sidesteps the handling of large amounts of data (although sometimes helpful) to get straight to single-location time series subsets.
My python script expands this functionality to all available GEE products, which include high-resolution Landsat and Sentinel data, climatological data such as Daymet, and even representative concentration pathway (RCP) CMIP5 model runs.
Compared to the ORNL DAAC MODIS subset tool, performance is blazing fast (thank you, Google). An example query, calling the python script from R, downloaded two years (~100 data points) of Landsat 8 Tier 1 data for two bands (red, NIR) in ~8 seconds flat. Querying a larger footprint (1×1 km) only adds a small overhead (a 13-second query). The resulting figure for the point location with the derived NDVI values is shown below. The demo script to recreate this figure is included in the example folder of the GitHub repository.
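For reference, the NDVI values in the figure follow the standard normalized difference formula applied to the two downloaded bands. A minimal sketch (illustrative only, not the repository's actual code; the reflectance values are made-up examples):

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

# illustrative surface reflectance values for a vegetated pixel
red, nir = 0.05, 0.45
print(round(ndvi(nir, red), 2))  # dense vegetation yields values close to 1
```

Applying this function across the ~100 downloaded time steps gives the NDVI time series plotted in the figure.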
The CLImate Smart Forestry in MOuntain Regions (CLIMO) COST action focuses on adaptation to climate change through climate-smart forestry practices. The action is currently looking for candidates for short term scientific missions (STSM). These missions focus on early career scientists who want to work within the context of the COST action.
In particular, working group (WG) 3, which focuses on technological aspects of measuring forest processes, is looking for interested candidates to study wireless technology and sensor networks (e.g. canopy temperature, visual parameters through phenocams) as well as physiological aspects of forest disturbances, such as (persistent) droughts, using stable isotope or other dendrochronological measurements. There are also opportunities to work with existing data within the context of data visualization and the development of web-based tools to monitor canopy phenology responses to climate change. Different topics within the framework of WG3 are also welcome. Candidates will work out of INRA Bordeaux under the guidance of Dr. Lisa Wingate (WG3 lead) and myself.
Currently there are two calls in 2017 (see below), with the deadline for the first call approaching fast. Apply while you still can!
Calls for 2017
First STSM call opens: 24 June 2017
First STSM call submission deadline: 24 July 2017
Second STSM call submission deadline: 15 October 2017
My Virtual Forest project is still running strong and generates tons of spherical images (currently ~50GB). However, the post on which the camera sits is not perfectly level. The Theta S camera normally compensates for this using an internal gyroscope which detects the pitch and roll of the camera. Yet, when downloading images directly from the camera, no adjustments are made and the pitch and roll data are merely recorded in the EXIF data of the image.
As such, I wrote a small bash script which rectifies (levels the horizon of) Theta S spherical images using this internal EXIF data. This is an alternative implementation of the THETA EXIF Library by Regen. I use his cute Lama test images for reference; all credit for the funky images goes to Regen. Below is the quick install guide to using my script. I hope it helps speed up people's Theta S workflow.
Download, fork or copy paste the script from my github repository to your machine and make it executable.
Running the script on a file such as image.jpg will output a new file called image_rectified.jpg.
The script depends on a working copy of exiftool, ImageMagick and POVRay. These tools are commonly available in most Linux distros, and can be installed on macOS using tools such as Homebrew. I lack an MS Windows system, but the script should be easily adjusted to cover similar functionality.
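Under the hood, leveling the horizon boils down to rotating the sphere by the pitch and roll angles recorded in the EXIF data before resampling the equirectangular image. A minimal sketch of that rotation math (illustrative only; which axis maps to pitch versus roll depends on the camera's coordinate convention, and the actual script delegates the remapping to POVRay):

```python
import math

def rotation_matrix(pitch_deg, roll_deg):
    """Combined 3x3 rotation: pitch about the x axis, then roll about the y axis.
    Axis assignment is an assumption for illustration."""
    p, r = math.radians(pitch_deg), math.radians(roll_deg)
    # rotation about x (pitch)
    Rx = [[1, 0, 0],
          [0, math.cos(p), -math.sin(p)],
          [0, math.sin(p),  math.cos(p)]]
    # rotation about y (roll)
    Ry = [[ math.cos(r), 0, math.sin(r)],
          [0, 1, 0],
          [-math.sin(r), 0, math.cos(r)]]
    # matrix product Ry @ Rx
    return [[sum(Ry[i][k] * Rx[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotate(v, R):
    """Apply a 3x3 rotation matrix to a 3-vector."""
    return [sum(R[i][k] * v[k] for k in range(3)) for i in range(3)]
```

Each pixel of the equirectangular image corresponds to a direction vector on the sphere; rotating those vectors by the recorded angles and resampling the source image straightens the horizon.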
Recently I came across Publons as a way to get credit for your reviewing efforts. At first I was rather intrigued. It does sound like a good idea, as I often bemoan the burden that reviewing can be, and the little reward it brings (often because of the quality of the work). I was even more intrigued because five years ago I was runner-up in Elsevier's peer-review challenge, which aimed to resolve the ailing peer-review system. The winner suggested something along the lines of Publons.
However, Publons, like any badge system, wants to make you believe that your review is worth something outside its academic context. Yet it does not contribute a workable solution to the core problems of the academic peer-review process: ease of use and the quality of the review. It perverts the peer-review system with false incentives and, if poorly executed, a race to the bottom. Publons' claims to publishers cite a decrease in review times and an increase in accepted review invitations. This suggests that Publons changes the incentives to accept reviews, and potentially the number of accepted manuscripts. Assuming that time is a limited resource for most scientists, increasing the number of reviews should decrease the time spent on each one, letting errors slip through.
Even less surprising, given that this is a for-profit venture, is that they do not tread lightly when it comes to privacy. Checking the privacy statement reveals that all the data you submit (full reviews where possible) can be used for data mining and resold to advertisers and publishers alike.
In short, Publons is a niche data broker, in contrast to the sweeping approach of Google and Facebook. The added value comes in the form of a virtual badge with little or no real-world value, providing only another account to keep track of, the performance anxiety that comes with it, and the privacy you sign away. The badges potentially shift reviewer acceptance rates due to time constraints and moral hazard. We should not be speeding up science; we should be increasing rigour, reproducibility and quality.