
SEALS is under active development and will change substantially as it moves from a personal research library to a supported model. This code has been submitted for publication and is awaiting review. For installation and other details, see the SEALS documentation.

To run a minimal version of the model, open a terminal/console and navigate to the directory containing the run file. Then run it with Python.

In order for the above to work, you will need to set the project directory and data directory lines in the run file. To obtain the necessary base_data, see the SEALS manuscript for the download link.

To run a full version of the model, copy the run file to a new file and set p.test_mode = False. You may also want to specify a new project directory to keep different runs separate.
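A copied run file might differ from the test version only in a few lines; the following is an illustrative sketch (the project name here is made up, and only p.test_mode is named in the text above):

```python
# Illustrative overrides for a full (non-test) run; 'full_seals_project' is a
# hypothetical name chosen to keep this run's outputs separate.
p.test_mode = False
project_name = 'full_seals_project'
```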


Release Notes

Update v0.5.0

Downloading of base data now works.

Update v0.4.0

Now all project flow objects can be set via a scenario_definitions.csv file, allowing for iteration over multiple projects.

If no scenario_definitions.csv is present, SEALS will create the file based on the parameters set in the run file.

Project Flow

One key component of Hazelbean is that it manages directories, base_data, etc. using a concept called ProjectFlow. ProjectFlow defines a tree of tasks that can easily be run in parallel where needed while keeping track of task dependencies. ProjectFlow borrows heavily in concept (though not in code) from the task_graph library by Rich Sharp, but adds a predefined file structure suited to research and exploration tasks.
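The task-tree idea can be illustrated with a minimal, self-contained sketch (plain Python, not hazelbean's actual API): each task names its parent, and sibling tasks are independent, so they could be dispatched in parallel.

```python
# Toy illustration of the ProjectFlow task-tree idea (NOT hazelbean's API):
# each task names its parent, and tasks run in dependency order.

class Task:
    def __init__(self, name, func, parent=None):
        self.name = name
        self.func = func
        self.children = []
        if parent is not None:
            parent.children.append(self)

def run_tree(task, results=None):
    """Depth-first execution; sibling tasks have no mutual dependency,
    so they could be run in parallel."""
    if results is None:
        results = {}
    results[task.name] = task.func()
    for child in task.children:
        run_tree(child, results)
    return results

root = Task('load_inputs', lambda: 'inputs')
coarse = Task('coarse_change', lambda: 'coarse', parent=root)
alloc = Task('allocate', lambda: 'alloc', parent=coarse)

order = list(run_tree(root))  # parents always run before children
```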

First run walkthrough tutorial

The simplest way to run SEALS is to clone the repository and open the run file in your preferred editor. Then update the values in the ENVIRONMENT SETTINGS section near the top of the file for your local computer (ensuring they point to directories you have write access to and are not virtual/cloud directories).

    ### ------- ENVIRONMENT SETTINGS -------------------------------

    # Users should only need to edit lines in this ENVIRONMENT SETTINGS section
    # Everything is relative to these (or the source code dir).
    # Specifically,
    # 1. ensure that the project_dir makes sense for your machine
    # 2. ensure that the base_data_dir makes sense for your machine
    # 3. ensure that the data_credentials_path points to a valid credentials file
    # 4. ensure that the input_bucket_name points to a cloud bucket you have access to

    # A ProjectFlow object is created from the Hazelbean library to organize directories and enable parallel processing.
    # Project-level variables are assigned as attributes of the p object (as in p.base_data_dir = ... below).
    # The only argument for a ProjectFlow object is the project directory, relative to the current working directory.
    user_dir = os.path.expanduser('~')
    script_dir = os.path.dirname(os.path.realpath(__file__))

    project_name = 'test_seals_project'
    project_dir = os.path.join(user_dir,  'seals', 'projects', project_name)
    p = hb.ProjectFlow(project_dir)

The project name and the project dir define the root directory where all files will be saved. This directory is passed to hb.ProjectFlow() to initialize the project (which will create the directories). Once these are set, you should be able to run the file in your preferred way, ensuring that you are in the Conda environment discussed above. In VS Code, this can be achieved by selecting the Conda environment in the bottom-right status bar and then selecting run. Alternatively, run it from the command line with python in the appropriate directory.

When SEALS is run in this way, it will use the default values for a test run on a small country (Rwanda). All of these values are set (and documented) in the run file, in the SET DEFAULT VARIABLES section. For your first run, it is recommended to use the defaults. When run, a configuration file will be written into your project’s input_dir named scenario_definitions.csv. This file is a table where each row defines a scenario for SEALS to run. In this minimal run, it must have 2 rows: one for the baseline condition (the starting LULC map) and one for a scenario of change that indicates how much each LU class will change in each coarse grid cell or region/zone. Inspecting and/or modifying this file may give insight into how to customize a new run.
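A quick way to inspect the generated table is with Python's csv module. The column names in this sketch are hypothetical (drawn from the attributes shown further below); check your generated file for the actual schema.

```python
# Sketch of inspecting a two-row scenario_definitions.csv.
# Column names are hypothetical; inspect your generated file for the real ones.
import csv
import io

csv_text = """scenario_label,scenario_type,year
baseline,baseline,2015
ssp2_rcp45_luh2-globio_bau,bau,2045
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))
labels = [r['scenario_label'] for r in rows]  # one label per scenario row
```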

    ### ------- SET DEFAULT VARIABLES --------------------------------

    # Set the path to the scenario definitions file. This is a CSV file that defines the scenarios to run.
    # If this file exists, it will load all of the attributes from this file and overwrite the attributes
    # set above. This is useful because adding new lines to the scenario definitions file will allow
    # you to run many different scenarios easily. If this file does not exist, it will be created based
    # on the attributes set above and saved to the location in scenario_definitions_path.
    p.scenario_definitions_path = os.path.join(p.input_dir, 'scenario_definitions.csv')

    # IMPORTANT NOTE: If you set a scenario_definitions_path, then the attributes set in this file
    # (such as p.scenario_label below) will be overwritten. Conversely, if you don't set a
    # scenario_definitions_path, then the attributes set in this file will be used and will be
    # written to a CSV file in your project's input dir.

    # If you did not set a p.scenario_definitions_path, the following default variables will be used
    # and will be written to a scenarios CSV in your project's input_dir for later use/editing/expansion.

    # String that uniquely identifies the scenario. Will be referenced by other scenarios for comparison.
    p.scenario_label = 'ssp2_rcp45_luh2-globio_bau'

    # Scenario type determines whether it is historical (baseline) or future (anything else), as well
    # as what the scenario should be compared against, i.e., Policy minus BAU.
    p.scenario_type = 'bau'

This computing stack also uses hazelbean to automatically download needed data at runtime. In the code block below, notice the absolute path assigned to p.base_data_dir. Hazelbean will look there for required files and download them from a cloud bucket if they are not present. This also lets you share the same base data across different projects.

In addition to defining a base_data_dir, you will need to point SEALS to the correct data_credentials_path. If you don’t have a credentials file, request one by email. The data are freely available but are very, very large (and thus expensive to host), so access is limited via credentials.

    p.base_data_dir = os.path.join('G:/My Drive/Files/base_data')

    p.data_credentials_path = '..\\api_key_credentials.json'

Note that the final directory must be named base_data to match the naming convention in the Google Cloud bucket.
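A quick sanity check on the naming convention (using the example path from above):

```python
# Verify the configured directory follows the base_data naming convention.
import os

base_data_dir = 'G:/My Drive/Files/base_data'  # example path from above
assert os.path.basename(base_data_dir.rstrip('/\\')) == 'base_data'
```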

Running the model

After completing the above steps, you should be ready to run the model. Upon starting, SEALS will report the “task tree” of steps that it will compute in the ProjectFlow environment. To understand SEALS in more depth, inspect the functions that define these tasks; further documentation is in the code.

Once the model is complete, go to your project directory, and then to the intermediate directory. There you will see one directory for each task in the task tree. To get the final product, go to the stitched_lulc_simplified_scenarios directory, where you will find the base-year LULC map and the newly projected LULC map for the future year:


Open up the projected one (e.g., lulc_ssp2_rcp45_luh2-message_bau_2045.tif) in QGIS and enjoy your new, high-resolution land-use change projection!