Code Activity Configuration Guide

To create an automatically graded code activity, you must upload a single ZIP file. This package contains your grading scripts, test files, and a config.json file that defines how the activity runs and reports results.

1. Package Structure

Your ZIP file provides the entire environment for code execution. The following files belong in its root directory:

  1. config.json (Required): The configuration file controlling setup, limits, and grading feedback.
  2. run (Required): A Bash script that acts as the entry point for execution. It must have no file extension.
  3. compile (Optional): A Bash script used for compiled languages (C, C++, Java, etc.). It must have no file extension.

Example Zip Contents:

my_activity.zip
├── config.json            <-- REQUIRED
├── run                    <-- REQUIRED (Execution Script)
├── compile                <-- OPTIONAL (Build Script)
├── tests/                 <-- EXAMPLE  (Example test files)
│   ├── test_main.py
│   └── data.csv
├── src/                   <-- WILL BE REMOVED (Replaced with a learner's submission)
└── solution_template.py   <-- WILL BE REMOVED

Note: solution_template.py is shown only as an example of a file that will be removed before execution; it serves no other purpose.
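As a sketch, a minimal activity directory can be scaffolded like this before zipping (the directory and file contents are illustrative, not part of the platform):

```shell
#!/bin/bash

# Scaffold a minimal activity directory (illustrative names/contents).
mkdir -p my_activity/tests

# A placeholder config.json; a real one defines setup, limits, and reporting.
cat > my_activity/config.json <<'EOF'
{ "version": "1.0" }
EOF

# A placeholder run script; a real one executes the tests.
cat > my_activity/run <<'EOF'
#!/bin/bash
echo "TODO: run tests here"
EOF
chmod +x my_activity/run
```

When zipping, do so from inside the directory (e.g. `cd my_activity && zip -r ../my_activity.zip .`) so that config.json and run sit at the ZIP root rather than under a subfolder.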

Simple JSON Configuration

Instead of a ZIP file, you may provide a single JSON configuration file. When you do, it is converted to the following ZIP:

my_config.json => my_config.zip
├── config.json            <-- copied from my_config.json
├── run                    <-- copied from the 'run' property in my_config.json
└── compile                <-- copied from the 'compile' property in my_config.json

If a ZIP file's config.json also defines compile and run properties, those values overwrite the contents of the corresponding compile and run files in the ZIP.
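As a sketch, a standalone configuration equivalent to a minimal ZIP might look like this (the script contents and score settings are illustrative):

```json
{
  "version": "1.0",
  "run": "#!/bin/bash\npython3 main.py",
  "score": { "source": "stdout" }
}
```

The run property holds the entire script as a string; in the generated ZIP it becomes the contents of the run file.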

Example Configuration File

{
  "version": "1.0",
  "setup": {
    "submission_dir": "student_submission",
    "remove_paths": [
      "sample_solution.cpp"
    ]
  },
  "compile": "g++ -O3 -o autograder main.cpp tests.cpp",
  "run": "./autograder",
  "execution": {
    "cpu_time_limit": 15.5,
    "wall_time_limit": 20.0,
    "memory_limit": 512000000,
    "max_file_size": 10000,
    "enable_network": false,
    "redirect_stderr_to_stdout": false
  },
  "score": {
    "source": "stdout",
    "default": 0,
    "max_points": 100
  },
  "status": {
    "source": "file",
    "path": "results/status.txt",
    "success_matcher": "^(Passed|Completed)$",
    "on_missing": {
      "source": "static",
      "content": "Crashed",
      "format": "text"
    }
  },
  "feedback": [
    {
      "label": "Compiler Output",
      "source": "stderr",
      "format": "terminal",
      "visible_if_empty": false
    },
    {
      "label": "Test Results",
      "source": "file",
      "path": "results/report.html",
      "format": "html"
    },
    {
      "label": "Image Output",
      "source": "file",
      "path": "results/output.png",
      "format": "image",
      "on_missing": {
        "source": "static",
        "content": "No image output generated.",
        "format": "text"
      }
    }
  ]
}

2. The Lifecycle

When a learner submits their work, the system processes the activity in three phases:

  1. Setup Phase: The package is unpacked, the learner's submission is placed into the configured submission_dir, and any paths listed in remove_paths are deleted.

  2. Execution Phase: The compile script runs first (if present); if it succeeds, the run script executes under the configured resource limits.

  3. Reporting Phase: The status, score, thumbnail, and feedback are read from their configured sources and presented to the learner.


3. The Scripts (run and compile)

These files must be Bash scripts. They serve as the bridge between the system and your specific language tools.

The run Script (Required)

This script executes the code or tests.

Example (Python):

#!/bin/bash

# Ensure the output directory exists before redirecting into it
mkdir -p output

# 1. Run the tests
python3 -m pytest tests/ --color=yes -v --tb=short --no-header > output/report.txt 2>&1

# 2. Write status/score based on the pytest exit code
if [ $? -eq 0 ]; then
    echo "Passed" > output/status.txt
    echo "100" > output/score.txt
else
    echo "Failed" > output/status.txt
    echo "0" > output/score.txt
fi
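The compile script follows the same pattern. For a Python activity, for example, it could fail fast on syntax errors before run executes the tests (a sketch; the src/ directory name is an assumption matching setup.submission_dir):

```shell
#!/bin/bash

# Optional compile step for a Python activity (illustrative):
# reject submissions with syntax errors before the run script starts.
# Assumes the learner's submission lives in src/ (see setup.submission_dir).
if ls src/*.py >/dev/null 2>&1; then
    python3 -m py_compile src/*.py
fi
```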

4. Configuration File (config.json)

This file is the "brain" of the activity. It is divided into three sections: Setup, Execution, and Reporting.

A. Setup (setup)

Optional. Controls how the file system is prepared before your scripts run.

"setup": {
  "submission_dir": "src",
  "remove_paths": ["solution_template.py", "tests/__pycache__"]
}

B. Execution (compile, run, execution)

compile (Optional): The Bash command(s) run once before run, typically used to build compiled languages. The value becomes the contents of the compile script.

run (Required): The Bash command(s) that act as the entry point for execution. The value becomes the contents of the run script.

execution (Optional): Resource limits and sandbox settings applied during execution.

"execution": {
  "cpu_time_limit": 5.0,
  "wall_time_limit": 10.0,
  "memory_limit": 128000,
  "enable_network": false
}
Setting                        Type    Default   Description
cpu_time_limit                 Float   5.0       Max CPU time allowed (seconds).
cpu_extra_time                 Float   1.0       Grace period (seconds) after the limit before the program is killed.
wall_time_limit                Float   10.0      Max wall-clock time allowed (seconds).
memory_limit                   Int     128000    Max RAM usage in kilobytes (default: 128 MB).
stack_limit                    Int     64000     Max stack size in kilobytes (default: 64 MB).
max_processes_and_or_threads   Int     60        Max number of processes/threads allowed.
enable_network                 Bool    false     If true, allows the script to access the internet.
redirect_stderr_to_stdout      Bool    false     If true, merges standard error into standard output.
max_file_size                  Int     1024      Max size of created files in kilobytes (default: 1 MB).

C. Reporting

These sections tell the system how to interpret the results of your run script.

1. Status (status)

Optional. A short text summary of the result. If omitted, defaults to "Completed".

"status": {
  "source": "file",
  "path": "output/status.txt",
  "default": "Completed",
  "success_matcher": "^(Passed|Success)"
}
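As a sketch, a run script can satisfy this configuration by writing a value the success_matcher accepts (the output/ path matches the example above):

```shell
#!/bin/bash

# Write a status that the matcher "^(Passed|Success)" treats as success.
mkdir -p output
echo "Passed" > output/status.txt
```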

2. Score (score)

Optional. The numerical grade to assign.

"score": {
  "source": "file",
  "path": "output/score.txt",
  "default": 0,
  "max_points": 100
}
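A run script could, for example, scale a pass count to max_points and write it to the configured path (a sketch; the `passed` and `total` values would normally come from your test runner):

```shell
#!/bin/bash

# Hypothetical: scale passed/total tests to max_points = 100
# and write the result to the path configured under "score".
passed=7
total=10
mkdir -p output
echo $(( passed * 100 / total )) > output/score.txt
```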

3. Thumbnail (thumbnail)

Optional. An image to represent the submission in the grading interface.

Constraint: The final result (including any fallback/on_missing content) must be an image.

"thumbnail": {
  "source": "file",
  "path": "output/plot.png",
  "format": "image"
}

4. Feedback (feedback)

Optional. A list of detailed outputs to display to the learner.

"feedback": [
  {
    "label": "Test Results",
    "source": "file",
    "path": "output/report.txt",
    "format": "text"
  },
  {
    "label": "Console Output",
    "source": "stdout",
    "format": "text",
    "visible_if_empty": false
  }
]
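As a sketch, a run script can emit ANSI-coloured text on stdout for a feedback entry with "source": "stdout" and "format": "terminal":

```shell
#!/bin/bash

# Emit ANSI-coloured text on stdout; a feedback entry with
# "source": "stdout" and "format": "terminal" renders the colours.
printf '\033[32mAll tests passed\033[0m\n'
```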

5. Data Retrieval Options

Source Options (source)

For status, score, thumbnail, and feedback, you may specify a source:

Source     Description                             Extra Field Required
"file"     Reads content from a specific file.     "path": "path/to/file"
"stdout"   Captures the standard output stream.    None
"stderr"   Captures the standard error stream.     None
"static"   Returns a hardcoded string.             "content": "My Text"

Format Options (format)

You may additionally specify a format:

Format       Description
"text"       Displayed as raw text in a code block; content containing ANSI colour codes is shown in a styled terminal.
"terminal"   Displayed as ANSI-coloured text in a styled terminal.
"markdown"   Rendered as Markdown.
"html"       Rendered in a sandboxed, interactive iframe. JavaScript is allowed but restricted.
"image"      Displayed as an image; content must be an image URL (e.g. a base64 data URL), source data an image file.
"ppm"        Displayed in a PPM image viewer; content must be image/x-portable-pixmap (text) data, source data a .ppm/.pgm/.pbm file.

Handling Failures (on_missing)

If your script crashes, output files might not exist. You can define a fallback behavior using on_missing.

{
  "label": "Performance Graph",
  "source": "file",
  "path": "output/graph.png",
  "format": "image",
  "on_missing": {
    "source": "static",
    "content": "data:image/png;base64,encoded_default_image_string_here...",
    "format": "image"
  }
}

Note: The execution environment is the "Judge0 Extra CE" runtime. Only the packages and tools installed in that environment are available for use.