ML MODEL
RELEASE GATE

A deterministic release system for machine learning models.
THE ML MODEL RELEASE GATE ISSUES A SIGNED PASS/FAIL RELEASE RECORD.

Evaluate locally.
Enforce thresholds.
Seal evidence.
Issue the release record.
PASS or FAIL.
Download the Evaluation Runner

The evaluation runner is the local Benchmark lane.

Run it to produce a deterministic receipt bundle for a signed PASS/FAIL release record.

Enter your evaluation code to download:

lokogate_runner-0.1.2-py3-none-any.whl

lokogate_runner-0.1.2-py3-none-any.whl.sha256
Offline verification: no network required during the run.

Integrity checks: SHA256 published for every download.

Binary trust: code-signing status stated clearly (SBOM available on request).

Safe to share: the receipt bundle contains only verification outputs when using the sample package.

Return options: secure upload.

Enter your evaluation code to mint time-limited downloads.

RUN THE EVALUATION

Run the evaluation runner locally to generate a deterministic receipt bundle.


Time to run: 10-15 minutes.

Accepted model: YOLO object detection (.pt weights).

Labels: YOLO bounding box format (class_id 0 only).

Inference size: Fixed at 640 (Benchmark scope).

Evidence directory must follow standard YOLO val structure (images + labels).
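The evidence layout requirements above can be sketched as a quick pre-flight check. This is illustrative only, not part of the runner; the file names are examples. It builds a tiny sample evidence directory in the standard YOLO val layout, then confirms every image has a matching label and every label line uses class_id 0 only:

```shell
# Build a sample evidence directory (images/ + labels/ with matching stems),
# then verify layout and class scope.
set -eu

EVIDENCE_ROOT="$(mktemp -d)/evidence_root"
mkdir -p "$EVIDENCE_ROOT/images" "$EVIDENCE_ROOT/labels"
touch "$EVIDENCE_ROOT/images/frame_0001.jpg"
# YOLO label line: class_id x_center y_center width height (all normalized 0-1)
echo "0 0.50 0.50 0.10 0.20" > "$EVIDENCE_ROOT/labels/frame_0001.txt"

status="OK"
for img in "$EVIDENCE_ROOT"/images/*; do
  stem="$(basename "${img%.*}")"
  label="$EVIDENCE_ROOT/labels/${stem}.txt"
  if [ ! -f "$label" ]; then
    status="missing label for $stem"
  elif ! awk '$1 != 0 { exit 1 }' "$label"; then
    status="non-zero class_id in $stem"
  fi
done
echo "evidence check: $status"
```

A directory that passes a check like this is the shape expected by --evidence-dir below.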

Linux (Ubuntu 22.04+, x86_64)

1. Create a workspace and enter it.

mkdir -p ~/lokogate
cd ~/lokogate

2. Place the downloaded wheel and .sha256 file in this folder and verify integrity.

sha256sum -c *.sha256

3. Create a virtual environment and install the runner.

python3 -m venv .venv
source .venv/bin/activate

pip install -U pip
pip install ./*.whl

lokogate --version

4. Initialize the workspace, then run the Benchmark evaluation, pointing it at your model and validation/evidence set.

lokogate init --workspace "$(pwd)"

lokogate run \
  --workspace "$(pwd)" \
  --gate "$(pwd)/policies/gate_benchmark_v1.yaml" \
  --weights /ABS/PATH/TO/weights.pt \
  --evidence-dir /ABS/PATH/TO/evidence_root \
  --evidence-id customer_val_01 \
  --device 0 \
  --model-id model_v1

5. The upload artifact will be created under:

$(pwd)/records/<gate_id>/<model_id>/<RUN_ID>/bundle/<RUN_ID>_upload_payload.zip
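As a sketch, the newest payload can be located with a glob over the records tree. The directory names here are illustrative; the real gate, model, and RUN_ID segments come from your gate policy, --model-id, and run timestamp:

```shell
# Mimic the records/ layout with illustrative names, then pick the most
# recently modified upload payload. In a real workspace the directories
# already exist, so only the ls | head line is needed.
workdir="$(mktemp -d)"
cd "$workdir"
mkdir -p records/gate_benchmark_v1/model_v1/20250101T000000Z/bundle
touch records/gate_benchmark_v1/model_v1/20250101T000000Z/bundle/20250101T000000Z_upload_payload.zip

payload="$(ls -t records/*/*/*/bundle/*_upload_payload.zip | head -n 1)"
echo "$payload"
```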

UPLOAD THE RECEIPT BUNDLE

Upload exists for one reason: to verify the signed release record.

Local outputs

Full HTML report (stays local):

report/report.html

Receipt bundle (upload this):

bundle/<RUN_ID>_upload_payload.zip

Upload

Enter evaluation code + RUN_ID.

RUN_ID format: YYYYMMDDThhmmssZ
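A minimal sketch of that format, assuming the RUN_ID is simply the run's UTC start time in basic ISO-8601 form; checking the shape before upload catches a mistyped RUN_ID early:

```shell
# Generate a timestamp in the RUN_ID shape (YYYYMMDDThhmmssZ) and check it
# against a pattern.
RUN_ID="$(date -u +%Y%m%dT%H%M%SZ)"
echo "$RUN_ID"
if echo "$RUN_ID" | grep -Eq '^[0-9]{8}T[0-9]{6}Z$'; then
  echo "RUN_ID format OK"
fi
```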

Enter your evaluation code to mint time-limited upload & download links.

Upload: bundle/<RUN_ID>_upload_payload.zip

Download

When ready, download:

<RUN_ID>_release.loko.zip
<RUN_ID>_release.loko.zip.sha256
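The .sha256 companion lets you verify the downloaded release record the same way as the wheel. A sketch with illustrative file contents and RUN_ID:

```shell
# Simulate a downloaded release record plus its published checksum, then run
# the same integrity check used for the wheel. With a real download, only the
# final sha256sum -c line is needed.
cd "$(mktemp -d)"
echo "signed release record" > 20250101T000000Z_release.loko.zip
sha256sum 20250101T000000Z_release.loko.zip > 20250101T000000Z_release.loko.zip.sha256

sha256sum -c 20250101T000000Z_release.loko.zip.sha256
```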

ML Model Release Gate Lanes
Benchmark is a free, deterministic evaluation lane.

Need more than fixed-scope evaluation?

Pilot and Production expand governance authority.
Capability          Benchmark (Free)   Pilot              Production
Class scope         1 class            Multi-class        Full class map
Resolution          Fixed 640          Declared           Enforced
Stress              Diagnostic         Expanded           Gating optional
Delta               Informational      Optional enforce   Enforced
Policy              Fixed              Custom             Organizational standard
Certification       Yes                Yes                Yes (governed)
Governance posture  Demo               Controlled         Release authority


Benchmark

Free evaluation lane. Fixed scope. Deterministic certification.

Pilot

Expanded validation with custom policy and regression control.

Production

Formal release authorization infrastructure.