Why scientists should learn from Aaron Swartz.

Aaron Swartz helped draft the RDF Site Summary (RSS) standard at age 13 and was in many respects a prodigy. As Lawrence Lessig wrote about Aaron: “He wanted openness, debate, rationality and critical thinking and above all refused to cut corners.” Sadly, he took his own life after facing particularly severe legal action over alleged copyright infringement. The documentary The Internet’s Own Boy: The Story of Aaron Swartz pays homage to his life and work.

He left a legacy of writings that excel in a clarity and brilliance I’ve rarely encountered. This is made all the more striking by the age at which many of these blog posts and essays were written. Few people come close to the way Aaron articulated his ideas in writing.

In a series of blog posts I’ll summarize some of his ideas on technology, politics and media within the context of contemporary scientific (ecological) research. The fact that his ideas and his vision remain key to what I consider solid scientific practice reflects his genius and insight.

release late, release rarely (release early, release often)

In a blog post written on July 5, 2006 (release late, release rarely) Aaron outlines how to develop software. Yet this essay could just as well apply to scientific research, from idea to publication.

Similar to the software (pet) projects that are the subject of Aaron’s post, science projects often have strong emotions attached to them. While these emotions are genuine, the content or quality of the research might not pass muster.

“When you look at something you’re working on … you can’t help but see past the actual thing to the ideas that inspired it… But when others look at it, all they see is a piece of junk.”

In science, this basically means that you should do your homework and not oversell your research. In peer review, reviewers will see past inflated claims and, rightfully so, reject manuscripts because of them. So when you publish, release late and aim for quality, not quantity. This raises the chance of getting your work published, while at the same time increasing the likelihood that you stumble on errors before reviewers do. Raising the true quality of your work, or even just making it look good, often highlights inconsistencies you can’t move past in good conscience.

“Well, it looks great but I don’t really like it” is a lot better than “it’s a piece of junk”.

Releasing work late, however, means that no one knows what you are doing and you might miss out on key feedback. So, informally, research benefits from releasing early.

“Still, you can do better. Releasing means showing it to the world. There’s nothing wrong with showing it to friends or experts or even random people in a coffee shop. The friends will give you the emotional support you would have gotten from actual users, without the stress. The experts will point out most of the errors the world would have found, without the insults. And random people will not only give you most of the complaints the public would, they’ll also tell you why the public gave up even before bothering to complain.”

Releasing early means that you get valuable feedback that would otherwise not make it into a high-quality paper (released late). This feedback comes not only from experts but, as Aaron correctly observed, from everyone within a larger (research) community.

In short, scientific communication and progress require a split approach: manuscripts should be released as late as possible, with ideas mature and solidly supported by open code and data, which in turn should be released as early as possible.

Note: Although the argument can be made that conferences serve the purpose of “early releases”, I have yet to see a conference where people present truly early work. Most of the time either published or nearly published work is presented.

 

Want to get published? Show me your code.

Over the last few years there has been a strong focus on open data and open access journals. This is in part stimulated by the reproducibility crisis in science, most prominently in the biomedical sciences. However, this strong focus on data and journal access alone is misplaced.

Many fields, such as ecology and remote sensing, rely increasingly on ever more complex software (models) and use ever larger amounts of data. Yet there isn’t the same demand for releasing code or for open coding practices. All too often one is still confronted with a statement at the end of a manuscript reading: “Code is available from the authors upon reasonable request”.

What “reasonable” means is often unclear, but it clearly does not encourage reproducibility (e.g., a critical request might not be deemed “reasonable”). It also actively interferes with the task of reviewers, who must assume (in good faith) that the analysis was correctly executed. Yet, given the amount of data (sources) used and the number of lines of code produced, errors are far from unlikely.

With services such as GitHub and Docker containers available, there should be a requirement for any study that is heavy on the modelling side and relies on open data to be fully reproducible, if not with the full dataset then at least through a small worked example when the data are prohibitive in size or when sharing them is ethically undesirable.
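As a very rough sketch of what such a worked example could look like (the file and column names below are hypothetical placeholders, not taken from any particular study), a single script that reruns the core analysis on a small open subset shipped with the repository already goes a long way:

```python
# Minimal sketch of a "small worked example" shipped alongside a paper:
# a tiny open subset of the data lives in the repository and the core
# analysis reruns end-to-end with one command. The file and column names
# ("data/demo_subset.csv", "site", "observation") are placeholders.
import pandas as pd

def run_analysis(df: pd.DataFrame) -> pd.Series:
    """Stand-in for the core analysis of the study (placeholder logic)."""
    return df.groupby("site")["observation"].mean()

if __name__ == "__main__":
    data = pd.read_csv("data/demo_subset.csv")   # small subset keeps it fast
    results = run_analysis(data)
    results.to_csv("demo_results.csv")           # output reviewers can compare
    print(results)
```

Pinning the environment in a Docker container on top of this turns it into a one-command reproduction for reviewers.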

Moreover, when it comes to model comparisons there should be an active effort to formalize these comparisons in community-driven frameworks (e.g., an R package, a Python package, Docker images, or a formalized workflow). Such rigorous efforts are required to truly assess model performance and quantify model errors at all levels (from source data to model structure). Alas, such efforts are few and far between in ecology, as are open and good coding practices.
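To make the idea concrete, below is a minimal sketch of what such a comparison harness could look like: every model is a plain function mapping a shared input table to predictions, and the harness computes the same error metrics for each. The model names, metrics and toy data are illustrative assumptions, not an existing package:

```python
# Minimal sketch of a community model-comparison harness: models share a
# common interface (dataframe in, predictions out) and are scored with
# the same error metrics. Names, metrics and data are illustrative only.
import numpy as np
import pandas as pd

def rmse(obs, pred):
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def bias(obs, pred):
    return float(np.mean(pred - obs))

def compare_models(models, data, target="observation"):
    """Return a table of error metrics for each registered model."""
    rows = []
    for name, model in models.items():
        pred = model(data)
        rows.append({
            "model": name,
            "rmse": rmse(data[target], pred),
            "bias": bias(data[target], pred),
        })
    return pd.DataFrame(rows)

if __name__ == "__main__":
    # toy observations and two toy "models" for illustration
    data = pd.DataFrame({"driver": np.arange(10.0),
                         "observation": np.arange(10.0) * 2 + 1})
    models = {
        "linear": lambda d: d["driver"] * 2 + 1,
        "naive_mean": lambda d: np.full(len(d), d["observation"].mean()),
    }
    print(compare_models(models, data))
```

The value lies less in the specific metrics than in the shared interface: once every model is registered against the same contract, adding a model or an error metric is a one-line change and comparisons stay honest.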

This lack of transparency is in part fueled by a gatekeeper effect. It is profitable not to share code, just as it is profitable not to share data. Not sharing code puts other scientists at a disadvantage, as similar studies or incremental advances upon the original code can’t easily be made. Given that not sharing code constitutes a breakdown of any reproducibility, and actively slows down scientific progress, I’m inclined not to consider studies without accessible source code fit for publication.

note 1: The active sharing of algorithms is far more common in computer science and physics.

note 2: I got pushback on the notion that there is a gatekeeper effect in science. Yet the fact that a “reasonable request” is mentioned, not merely any request, implies a gatekeeper effect. It is up to the authors to decide how and to whom access to the code (and applications thereof) is granted, and to whom it isn’t. But what about licensing? Although a license might require attribution (CC-BY), release under the same license (GPL) or prohibit commercial applications (CC-NC), it still guarantees access to the code to begin with.

 

Jungle Rhythms made it into The Guardian

A cache of decaying notebooks found in a crumbling Congo research station has provided unexpected evidence with which to help solve a crucial puzzle – predicting how vegetation will respond to climate change. … (by Dan Grossman)

My Jungle Rhythms project has made some waves of late. The project sparked the interest of Dr. Dan Grossman, a science journalist, and his nice summary of all the Jungle Rhythms work was published in The Guardian. As a result, IFLScience picked it up as well. Especially in the comments section of The Guardian the response was really positive. I’m happy to see some global exposure for the project, and for the larger context and importance of similar work. I also hope that this exposure might bring about more funding to safeguard historical collections and to build capacity in DR Congo within this context.

reviz.in – peer-review annotations with hypothes.is

A few months ago I was flooded with review requests, and I figured that it might be time to look around for solutions and code something up that would allow me to annotate peer-review PDFs easily and generate a review report with the click of a button (as proposed years ago).

Enter Hypothes.is. This initiative started a few years ago to facilitate the semantic or annotated web: a way to annotate web pages independently of the original creator. Hypothes.is did exactly what I needed, letting me annotate any given PDF (including locally stored files). However, I could not easily extract the data in a standardized way. In addition, the standard mode of the Hypothes.is client is public, with personal groups being private. In short, although the framework had all the pieces, the output wasn’t optimal for peer review, and potentially dangerous to reputations should a review accidentally leak to the web.
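For those who want to roll their own report, the sketch below shows how one might pull the annotations of a private group through the public Hypothes.is API (the /api/search endpoint, authenticated with a developer token) and print them as a plain-text review report. Treat the exact field names as assumptions; they may differ from your client version:

```python
# Minimal sketch: fetch annotations from a private Hypothes.is group via
# the public API and print a plain-text review report. Token and group
# ID are placeholders; field names are assumptions based on the API docs.
import requests

API_URL = "https://api.hypothes.is/api/search"
API_TOKEN = "YOUR_HYPOTHESIS_API_TOKEN"  # placeholder, from your account settings
GROUP_ID = "YOUR_GROUP_ID"               # private group holding the review notes

def fetch_annotations(group_id: str, limit: int = 200) -> list:
    """Return annotations stored in a given Hypothes.is group."""
    headers = {"Authorization": f"Bearer {API_TOKEN}"}
    params = {"group": group_id, "limit": limit, "sort": "created", "order": "asc"}
    response = requests.get(API_URL, headers=headers, params=params, timeout=30)
    response.raise_for_status()
    return response.json().get("rows", [])

def quoted_text(annotation: dict) -> str:
    """Extract the highlighted passage (if any) from an annotation."""
    for target in annotation.get("target", []):
        for selector in target.get("selector", []):
            if selector.get("type") == "TextQuoteSelector":
                return selector.get("exact", "")
    return ""

if __name__ == "__main__":
    # one numbered comment per annotation, ready to paste into a review form
    for i, note in enumerate(fetch_annotations(GROUP_ID), start=1):
        print(f"Comment {i}")
        print(f"  quote:   {quoted_text(note)}")
        print(f"  comment: {note.get('text', '')}")
        print()
```

A report generated this way can be pasted straight into a journal’s review form.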

As such, I created Reviz.in, a simple hack of the original Hypothes.is client and Google Chrome extension which makes sure you can’t escape the group holding your peer-review notes and which generates a nice review report (see image below). In addition, I added a fancy icon and renamed the original labels (though not consistently) to differentiate my copy from the original interface and avoid confusion. I hope this functionality will over time be provided by the original Hypothes.is client; in the meantime, you can read more about the installation process on the Reviz.in website:

http://reviz.in

or download the Google Chrome Extension.

I hope this simple hack will help people speed up their review process and free up some time. I also hope that publishers will take note, as their lack of innovation on this front is rather shameful.