EU AI Act enforcement begins Aug 2, 2026

Is Your AI Legal Under
the EU AI Act?

Scan any GitHub repo. Get an instant risk classification. Know your compliance obligations before fines of up to EUR 35M (or 7% of global annual turnover) kick in.

Free for public repos. No signup required.

days until Aug 2, 2026 deadline

Scan Results

/ 100

Compliance Score

Risk Summary

Frameworks Detected

Detailed Classifications

File Risk Level

Compliance Badge

Add this badge to your README to show your compliance status:

How It Works

Three steps to EU AI Act compliance. No legal team required.

Step 1

Paste Your GitHub URL

Drop any public or private repository URL into the scanner. We support GitHub, GitLab, and Bitbucket.

Step 2

We Scan Every File

Our engine analyzes every file for AI frameworks, model usage patterns, and high-risk use cases defined in the Act.
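In spirit, the scanning step works like the Python sketch below: walk the repository, match source and dependency files against framework signatures, and collect what's found. The signature table and function here are illustrative stand-ins, not the production engine, which covers 30+ frameworks plus model-usage patterns.

```python
import re
from pathlib import Path

# Illustrative subset of framework signatures; the real engine covers 30+
# frameworks and deeper usage patterns than import statements alone.
FRAMEWORK_SIGNATURES = {
    "pytorch": re.compile(r"\b(?:import|from)\s+torch\b|^torch\b", re.MULTILINE),
    "tensorflow": re.compile(r"\b(?:import|from)\s+tensorflow\b|^tensorflow\b", re.MULTILINE),
    "transformers": re.compile(r"\b(?:import|from)\s+transformers\b|^transformers\b", re.MULTILINE),
    "opencv": re.compile(r"\b(?:import|from)\s+cv2\b|^opencv-python\b", re.MULTILINE),
}

def detect_frameworks(repo_root: str) -> set[str]:
    """Statically scan source and dependency files for AI framework usage."""
    found = set()
    for path in Path(repo_root).rglob("*"):
        if path.suffix not in {".py", ".txt", ".toml"}:
            continue  # only source and dependency manifests in this sketch
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in FRAMEWORK_SIGNATURES.items():
            if pattern.search(text):
                found.add(name)
    return found
```

Because everything happens on file contents, no code ever needs to execute: the detection is pure static analysis.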

Step 3

Get Your Compliance Roadmap

Receive a risk classification per Annex III, compliance score, and a clear roadmap of what to fix before enforcement.

What We Detect

Our scanner identifies AI frameworks, classifies risk levels per the EU AI Act, and maps the obligations you must fulfill, all through pure static analysis of your repository.

30+ AI Frameworks Detected

PyTorch, TensorFlow, Keras, scikit-learn, Hugging Face Transformers, OpenAI, LangChain, LlamaIndex, ONNX, JAX, Caffe, MXNet, PaddlePaddle, spaCy, NLTK, Gensim, FastAI, Ray, MLflow, DVC, Weights & Biases, OpenCV, Detectron2, MediaPipe, DeepFace, InsightFace, Stable Diffusion, DALL-E, Anthropic, Cohere, and more.

11+ High-Risk Use Cases

Biometric identification, critical infrastructure, education and vocational training, employment decisions, credit scoring, law enforcement, migration and border control, judicial systems, and more per Annex III.

EU AI Act Risk Classification

Every detected component is classified as Unacceptable, High-Risk, Limited, GPAI, or Minimal per the official Act text and Annex III categories.
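The tier logic can be sketched in a few lines of Python. The use-case labels and the small mapping below are illustrative examples of our own naming, not the Act's official taxonomy; the real classifier is driven by the full Act text and Annex III category list.

```python
# Risk tiers ordered from most to least severe, mirroring the Act's pyramid.
RISK_TIERS = ["unacceptable", "high", "limited", "gpai", "minimal"]

# Illustrative subset: hypothetical use-case labels mapped to tiers.
USE_CASE_RISK = {
    "social_scoring": "unacceptable",
    "realtime_remote_biometric_id": "unacceptable",
    "credit_scoring": "high",        # Annex III
    "employment_decisions": "high",  # Annex III
    "chatbot": "limited",            # transparency obligations
    "foundation_model": "gpai",
    "spam_filter": "minimal",
}

def classify(detected_use_cases: list[str]) -> str:
    """Return the most severe applicable tier; unknown cases default to minimal."""
    if not detected_use_cases:
        return "minimal"
    tiers = (USE_CASE_RISK.get(u, "minimal") for u in detected_use_cases)
    return min(tiers, key=RISK_TIERS.index)  # lower index = more severe
```

The repository's overall classification is then simply the most severe tier across all detected components.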

Zero Data Exposure

No code leaves our servers. Pure static analysis. Your source code is never stored, shared, or sent to third parties.

UNACCEPTABLE RISK

Social scoring, real-time remote biometric ID, manipulative AI. Banned outright.

HIGH RISK

Critical infrastructure, hiring, credit scoring, law enforcement. Requires conformity assessment, technical documentation, and human oversight.

LIMITED RISK

Chatbots, deepfakes, emotion recognition. Requires transparency: users must know they're interacting with AI.

GENERAL-PURPOSE AI

Foundation models and GPAI systems. Requires technical documentation, copyright compliance, and transparency reports.

MINIMAL RISK

Spam filters, recommendation engines, game AI. No mandatory obligations beyond voluntary codes of conduct.

Repos Scanned
Days Until Deadline
EUR 35M
Maximum Fine

Simple, Transparent Pricing

Start free. Scale when you need to. Cancel anytime.

Free

$0 /forever
  • Public repo scans
  • Compliance score
  • Compliance badge
  • Risk classification
Get Started

Team

$99 /month
  • Everything in Free
  • Unlimited repo scans
  • Team dashboard
  • CI/CD integration
  • Slack alerts

Enterprise

$499 /month
  • Everything in Team
  • Custom compliance rules
  • Dedicated support
  • On-premise deployment
  • SLA guarantee
Terminal

Same Engine. Your Terminal. Offline.

Run the exact same compliance scanner locally. Perfect for CI/CD pipelines, air-gapped environments, and developers who live in the terminal.

View on GitHub
# Install the CLI
$ pip install isologic
# Scan a local project
$ isologic audit ./my-project
# Output
Scanning 847 files...
Detected: pytorch, transformers, opencv
Risk: HIGH (Annex III, Category 6)
Compliance Score: 72/100
3 obligations require attention.
Code Audit Lab