πŸ§ͺ Step 1 - TestΒΆ

Meet RoboNeuro! Our preprint submission bot is at your service 24/7 to help you create a NeuroLibre preprint.

https://github.com/neurolibre/brand/blob/main/png/preview_magn.png?raw=true

We would like to ensure that all the submissions we receive meet certain requirements. To that end, we created a test page, where you point RoboNeuro to a public GitHub repository, enter your email address, then sit back and wait for the results.

Note

The RoboNeuro book build process has two stages. First, it creates a virtual environment based on your runtime descriptions. If this stage succeeds, it proceeds to build a Jupyter Book by re-executing your code in that environment.

  • On a successful book build, we will return you a preprint that is beyond PDF!
  • If the build fails, we will send two log files for your inspection.

Warning

A successful book build on RoboNeuro preview service is a prerequisite for submission.

Please note that RoboNeuro book preview is provided as a public service with limited computational resources. Therefore we encourage you to build your book locally before requesting our online service. Instructions are available here.

⏩ Quickstart: Preprint templates¢

To give you a head-start, we created preprint template repositories:

| Preprint | Programming language | GitHub repository   |
|----------|----------------------|---------------------|
| Link     | Python               | neurolibre/template |
| Link     | C++                  | neurolibre/cpp      |
  1. Choose your template from the list above and create a new repository from it under your account (or an organization's account).
  2. Follow the instructions in the README file.
  3. Test your book build using RoboNeuro preview service.

The following section provides further detail about the structure of a NeuroLibre preprint repository.

πŸ—‚ Preprint repository structureΒΆ

We expect to find all the submission material in a public GitHub repository that has the following structure:

neurolibre_submission
β”œβ”€β”€ /binder
β”‚ β”œβ”€β”€ data_requirement.json
β”‚ β”œβ”€β”€ requirements.txt
β”‚ └── …
β”œβ”€β”€ /content
β”‚ β”œβ”€β”€ _toc.yml
β”‚ β”œβ”€β”€ _config.yml
β”‚ └── …
β”œβ”€β”€ paper.md
└── paper.bib

πŸ“ The binder folderΒΆ

https://github.com/neurolibre/brand/blob/main/png/binder_folder.png?raw=true

βš™οΈRuntimeΒΆ

1 - Preprint-specific runtime dependencies

The execution runtime can be based on any of the (non-proprietary) programming languages supported by Jupyter. NeuroLibre looks in the binder folder for configuration files such as requirements.txt (Python), install.R (R), Project.toml (Julia), or a Dockerfile.

See also

The full list of supported configuration files is available here.

2 - Environment configuration for NeuroLibre

You should keep your environment clean and concise; that is why the preferred configuration file for NeuroLibre is requirements.txt.

It should be small (to keep environment building and loading as short as possible) and version-pinned (so that your environment is fully reproducible and cacheable).

For example, this requirements file is problematic because it lists many unpinned and unnecessary dependencies:

numpy
scipy
jupyter
matplotlib
Pillow
scikit-learn
tensorflow

On the other hand, this one is concise, reproducible and will take much less time to build:

tensorflow==2.4.0

3 - NeuroLibre dependencies

Our test server creates a virtual environment in which your content is re-executed to build a Jupyter Book. To enable this, we need some Python packages.

  • If you are using configuration files, we need the following in a requirements.txt file:
jupyter-book
jupytext
  • If your environment is described by a Dockerfile you can use our base image:
FROM neurolibre/book:latest
...
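Putting these pieces together, a minimal requirements.txt for a Python-based preprint could combine your pinned scientific dependencies with the NeuroLibre build dependencies listed above (the tensorflow entry is illustrative; substitute your own packages and versions):

```
# NeuroLibre build dependencies
jupyter-book
jupytext
# Preprint-specific, version-pinned dependencies
tensorflow==2.4.0
```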

πŸ’½ DataΒΆ

NeuroLibre offers generous data storage and caching to supercharge your preprint. If your executable content consumes input data, please read this section carefully.

To download data, NeuroLibre looks for a repo2data configuration file: data_requirement.json. This file must point to a publicly available dataset, so that the data will be available in the data folder during preprint runtime.
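As a sketch, a minimal data_requirement.json might look like the following; the URL and project name are placeholders, and the exact schema is documented by repo2data:

```json
{
  "src": "https://example.org/my_dataset.zip",
  "dst": "./data",
  "projectName": "my_project"
}
```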

See also

Repo2data can download data from several resources, including OSF, Datalad, Zenodo, and AWS. For details, please visit repo2data, where you can also find instructions for using repo2data on your local computer before requesting the RoboNeuro preview service.

Example preprint templates using repo2data for caching data on NeuroLibre servers:

| Download resource | GitHub repository            |
|-------------------|------------------------------|
| Nilearn           | neurolibre/repo2data-nilearn |
| OSF               | neurolibre/repo2data-osf     |

Warning

RoboNeuro may fail to download relatively large datasets (exceeding 5GB), as the book build process times out after 60 minutes. If your data exceed 5GB, please create an issue in your GitHub repository so that a NeuroLibre admin can look into it.

Help RoboNeuro find your data during book build

Repo2Data downloads your data to a folder named data/PROJECT_NAME, which is created at the base of your repository. PROJECT_NAME is the field that you configured in your data_requirement.json.

  • A code cell in a content/my_notebook.ipynb would access data by:

    import nibabel as nib
    import os
    img = nib.load(os.path.join('..', 'data', 'PROJECT_NAME', 'my_brain.nii.gz'))
    
  • A code cell in a content/01/my_01_notebook.ipynb would access data by:

    import nibabel as nib
    img = nib.load(os.path.join('..', '..', 'data', 'PROJECT_NAME', 'my_brain.nii.gz')) # In this case, 2 upper directories
    

If the data paths in your code cells do not follow this convention, RoboNeuro will fail to re-execute your notebooks and the book build will be interrupted.
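If your notebooks live at varying depths, a small helper can locate the repo2data folder by walking up from the notebook's directory, so the same call works from content/ and content/01/ alike. This is a hypothetical convenience function (find_data_dir is not part of NeuroLibre or repo2data):

```python
import os

def find_data_dir(project_name, start="."):
    """Walk upward from `start` until a data/<project_name> folder is found."""
    here = os.path.abspath(start)
    while True:
        candidate = os.path.join(here, "data", project_name)
        if os.path.isdir(candidate):
            return candidate
        parent = os.path.dirname(here)
        if parent == here:  # reached the filesystem root without finding it
            raise FileNotFoundError(
                f"data/{project_name} not found above {start}")
        here = parent
```

A notebook could then load data with, e.g., `nib.load(os.path.join(find_data_dir('PROJECT_NAME'), 'my_brain.nii.gz'))`, regardless of its depth in the content folder.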

Note

We suggest testing repo2data locally before you request a RoboNeuro preview service. Instructions are available here. Matching your data loading convention with that of RoboNeuro will increase your chances of having a successful NeuroLibre preprint build.

Warning

If you are a Windows user, manually defined paths (e.g. .\data\my_data.txt) won’t be recognized by the preprint runtime. Please use an operating system agnostic convention to define file paths or file separators. For example use os.path.join in Python.
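As a sketch of the warning above, both of the following produce a path with the correct separator for whatever operating system the notebook runs on (PROJECT_NAME and the file name are placeholders):

```python
import os
from pathlib import Path

# os.path.join inserts the platform's separator ('/' on Linux, '\' on Windows)
data_file = os.path.join("..", "data", "PROJECT_NAME", "my_brain.nii.gz")

# pathlib is an equally portable alternative
data_file_alt = str(Path("..") / "data" / "PROJECT_NAME" / "my_brain.nii.gz")
```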

πŸ“ The content folderΒΆ

https://github.com/neurolibre/brand/blob/main/png/content_folder.png?raw=true

Executable & narrative contentΒΆ

NeuroLibre accepts the following file types to create a preprint that is beyond PDF:

  • βœ… Jupyter Notebooks
  • βœ… MyST-formatted markdown
  • βœ… Plain-text markdown files
  • βœ… A mixture of all of the above

Warning

❌ We don’t accept markdown files containing narrative content only; that is not really beyond PDF :)

Note

βœ… You can organize your content in sub-folders.

Writing narrative content

Jupyter Book provides you with an arsenal of authoring tools to include citations, equations, figures, special content blocks and more into your notebooks or markdown files.

See also

Please visit the corresponding Jupyter Book documentation page for guidelines.

Writing executable content

Based on the powerful Jupyter ecosystem, NeuroLibre preprints allow you to interleave computational material with your narrative. You can add some directives and metadata to your code cell blocks for Jupyter Book to determine the format and behavior of the outputs, such as interactive data visualization.

See also

Please visit the corresponding Jupyter Book documentation page for guidelines.

There are two mandatory files that we look for in the content folder: _config.yml and _toc.yml. These files help RoboNeuro structure your book and configure some settings.

⑆Table of contentsΒΆ

The _toc.yml file determines the structure of your NeuroLibre preprint. It is a simple configuration file specifying the table of contents built from the executable & narrative content found in the content folder (and its subfolders).

See also

The complete reference for the _toc.yml can be found here.
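For illustration, a minimal _toc.yml matching the repository structure shown earlier might look like this; file names are placeholders, and the exact schema depends on your Jupyter Book version:

```yaml
format: jb-book
root: my_notebook          # content/my_notebook.ipynb is the landing page
chapters:
- file: 01/my_01_notebook  # files in sub-folders are referenced by relative path
```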

⚑︎ Book configuration¢

The _config.yml file governs all configuration options for your Jupyter Book formatted preprint, such as adding a logo, enabling/disabling interactive buttons, or controlling notebook execution and caching settings. A few important points:

  • Please ensure that the title and the list of authors match those specified in the paper.md.
title:  "NeuroLibre preprint template"  # Add your title
author: John Doe, Jane Doe  # Add author names
  • Please ensure that the repository address is accurate.
repository:
  url: https://github.com/username/reponame  # The URL to your repository

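For example, notebook execution and caching behavior is controlled in the execute section of _config.yml. The fragment below is a sketch using standard Jupyter Book options; adjust values to your preprint's needs:

```yaml
execute:
  execute_notebooks: cache  # re-execute notebooks only when their code changes
  timeout: 1800             # per-cell execution timeout, in seconds
```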
See also

The complete reference for the _config.yml can be found here.

πŸ“ Static summaryΒΆ

https://github.com/neurolibre/brand/blob/main/png/paper.png?raw=true

The front matter of paper.md is used to collect meta-information about your preprint:

---
title: 'White matter integrity of developing brain in everlasting childhood'
tags:
  - Tag1
  - Tag2
authors:
  - name: Peter Pan
    orcid: 0000-0000-0000-0000
    affiliation: "1, 2"
  - name: Tinker Bell
    affiliation: 2
affiliations:
- name: Fairy dust research lab, Everyoung state university, Nevermind, Neverland
  index: 1
- name: Captain Hook's lantern, Pirate academy, Nevermind, Neverland
  index: 2
date: 08 September 1991
bibliography: paper.bib
---

The body of this static document is intended as a big-picture summary of the preprint generated from the executable and narrative content you provided (in the content folder). You can include citations in this document from an accompanying BibTeX bibliography file, paper.bib.
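Citations in paper.md use pandoc-style keys that match entries in paper.bib, as in the JOSS workflow; for example (the key fairy2020 is a placeholder for an entry in your paper.bib):

```markdown
Fairy dust has been shown to improve white matter integrity [@fairy2020].
```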

To check whether your PDF compiles, visit the RoboNeuro preprint preview page, select the NeuroLibre PDF option, and enter your repository address.

See also

For more information on how to format your paper, please take a look at JOSS documentation.

πŸ’» Testing book build locallyΒΆ

Assuming that:

  • you already installed all the dependencies to develop your notebooks locally
  • your preprint repository follows the NeuroLibre preprint structure

you can easily test your preprint build locally.

Step 1 - Install Jupyter Book

pip install jupyter-book

Step 2 - Data

First, make sure that your data is available online and can be downloaded publicly.

You can now install Repo2Data and configure the data_requirement.json with "dst": "./data". Navigate into your repository and run repo2data; your data will be downloaded into the data folder.

Finally, modify the respective code lines in your notebooks to set the (relative) data path to the data folder.

Warning

Please make sure that your local ./data folder and all of its contents are excluded from Git history by adding the following to your .gitignore file:

data/

Step 3 - Book build

  • Navigate to the repository location in a terminal
cd /your/repo/directory
  • Trigger a jupyter book build
jupyter-book build ./content

See also

Please visit reference documentation on executing and caching your outputs during a book build.

πŸ’Œ Step 2 - SubmitΒΆ

Warning

A successful book build on RoboNeuro preview service is a prerequisite for submission.

Your submission request will go through only if we can find a built preprint on our test server for your preprint repository.

Submission is as simple as:

πŸ”Ž Technical screeningΒΆ

Our editorial team will start a technical screening process on GitHub to ensure the functionality of your preprint.