AI changed the software supply chain

Most developers still think of security as something that happens inside their own code.

That used to be a useful mental model. You wrote code, tested it, scanned it, shipped it, and patched it when a vulnerability appeared. The boundary was clear enough. Your repository was your responsibility. Everything outside it was a dependency, a vendor, or an infrastructure problem.

That model is breaking.

Modern software is not written from scratch. It is assembled. A small web app can pull hundreds or thousands of transitive packages. A backend service can depend on base images, GitHub Actions, build scripts, cloud roles, code generators, secrets, registries, container layers, Terraform modules, model APIs, and now AI-generated code.

The supply chain is no longer a line from developer to production. It is a graph.

AI makes that graph bigger. It helps developers write code faster, but it also changes where code comes from, who reviews it, how dependencies are chosen, and how quickly risky changes move through the pipeline. AI agents can open pull requests, add packages, write workflows, call tools, and sometimes touch production systems. That is powerful. It is also dangerous when the pipeline was built for a slower world.

This article is a detailed look at the new software supply chain problem. It is written for developers, platform engineers, security engineers, and technical founders who want to understand what changed and what to do about it.

What changed

A few years ago, supply chain security felt like a topic for large companies, government vendors, and security teams. Then SolarWinds, Log4Shell, dependency confusion, package hijacking, CI leaks, and registry attacks made the lesson obvious.

You can write secure code and still ship compromised software.

That is the core idea. The attacker does not need to find a bug in your application if they can change what your application is built from.

A modern release is made from many parts:

  • Your source code

  • Open source dependencies

  • Transitive dependencies

  • Package manager behavior

  • Lockfiles

  • Build scripts

  • CI/CD workflows

  • Secrets and tokens

  • Container base images

  • Infrastructure modules

  • Artifact registries

  • Deployment permissions

  • Generated code

  • AI-written changes

  • Third-party actions and plugins

Every one of these can become the weak point.

The OWASP Top 10 for Large Language Model Applications lists supply chain vulnerabilities as a core LLM risk. OWASP also calls out prompt injection, insecure output handling, sensitive information disclosure, and insecure plugin design. Those are not separate from software supply chain security. They are becoming part of it.

Here is the simple version.

Before AI, the main question was:

Can I trust this package, build, artifact, and deployment pipeline?

Now we also have to ask:

Can I trust how this code was suggested, generated, reviewed, approved, and connected to tools?

That extra layer matters.

GitHub's 2025 Octoverse report says AI, agents, and typed languages are driving one of the biggest shifts in software development in more than a decade. That is not just a productivity story. It is a security story too. More code is being created faster, with more automation around it, and with more tools acting on behalf of developers.

Speed is useful. Blind speed is not.

The old model

The old model looked roughly like this: a developer writes code, a reviewer approves it, CI tests and builds it, and the artifact ships to production.

It was never truly this simple, but many teams still reason this way.

The real model has far more inputs. Dependencies, registries, third-party actions, base images, infrastructure modules, secrets, generated code, and AI agents all feed into the release.

There are more paths into the release. There are more identities. There are more machines making changes.

The attack surface has moved from "the code" to "the system that creates the code."

Why attackers like the supply chain

Supply chain attacks scale.

If an attacker compromises one small application, they get one target. If they compromise a package, a build action, a maintainer token, or a popular library, they get every downstream project that trusts it.

That is the appeal.

A single compromised package can reach:

  • Developer laptops

  • CI runners

  • Internal repositories

  • Cloud credentials

  • Production deployment systems

  • Customer applications

  • Other packages maintained by the same account

This is why package ecosystems are so attractive. The trust is already there. Developers install packages quickly. CI systems pull dependencies automatically. Build scripts run with high access. Many teams still use long-lived tokens.

The attacker does not need to knock on the front door. They can become part of the build.

Where attacks happen now

Supply chain security is easier to understand when you separate the attack paths. Most incidents are not magic. They follow patterns.

The modern software supply chain has several weak spots.

Package maintainers

Open source maintainers hold a lot of power. Many popular packages are maintained by one person, a small team, or a tired group of volunteers.

If an attacker gets a maintainer account, they may be able to publish a malicious version of a package. That version can then flow into real applications.

Common paths include:

  • Phishing maintainers

  • Stealing npm, PyPI, or GitHub tokens

  • Taking over abandoned packages

  • Social engineering a maintainer

  • Getting added as a co-maintainer

  • Compromising a maintainer's laptop

  • Publishing a lookalike package

This is not theoretical.

In 2025, the Shai-Hulud npm campaign showed how bad this can get. Unit 42 described it as a self-replicating worm that compromised npm packages and targeted developer credentials. The campaign looked for secrets such as npm tokens, GitHub personal access tokens, and cloud keys, then used stolen npm tokens to modify other packages maintained by the same developer. Unit 42 also assessed with moderate confidence that an LLM helped generate part of the malicious script, based on comments and emojis in the code.

That detail matters.

It means attackers are not just attacking AI systems. They may be using AI to scale attacks against traditional package ecosystems.

The Cyber Security Agency of Singapore also warned that Shai-Hulud used a self-propagating payload. Their advisory said the campaign began with the compromise of @ctrl/tinycolor and that researchers had identified more than 180 compromised npm packages during the attack window.

Package install scripts

Package install scripts are one of the scariest parts of the ecosystem.

In npm, packages can run lifecycle scripts during install. These scripts are useful for legitimate build steps. They are also useful for attackers because they execute when a developer or CI system installs dependencies.

That changes the risk.

A malicious package does not always need to be imported by your application. It may only need to be installed.

The basic flow is short: a developer or CI job installs dependencies, a package's lifecycle script runs automatically, and that script executes with whatever access the install environment has.

This is why CI dependency installation is a high-risk moment.

A build runner often has access to:

  • GitHub tokens

  • Cloud deployment credentials

  • npm or PyPI publish tokens

  • Docker registry credentials

  • Private package registry tokens

  • Signing keys

  • Environment variables

  • Internal network access

If a dependency install script can read those, the build server becomes a treasure chest.
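One blunt but effective control is disabling lifecycle scripts by default. npm supports this with an ignore-scripts setting in .npmrc; packages that genuinely need install scripts then become explicit exceptions you review. A minimal sketch:

# .npmrc committed to the repository
# Lifecycle scripts no longer run automatically on install
ignore-scripts=true

The same flag works per command, which is why the workflow examples later in this article use npm ci --ignore-scripts.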

CI/CD workflows

CI/CD is where trust becomes action.

Your CI pipeline can test code, build containers, publish packages, deploy infrastructure, rotate versions, sign artifacts, and push to production. That means a compromised CI workflow can do real damage.

Common CI/CD risks include:

  • Overpowered default tokens

  • Long-lived cloud secrets

  • Pull request workflows that expose secrets

  • Unpinned third-party actions

  • Actions pinned only by tag instead of commit SHA

  • Workflow injection through user-controlled input

  • Build scripts that run untrusted code

  • Self-hosted runners with broad network access

  • Missing separation between build and deploy jobs

GitHub's own Actions security docs tell teams to use least privilege for credentials. Their OIDC docs also explain that GitHub Actions workflows can use OIDC tokens instead of stored long-lived cloud secrets.

That is a major improvement.

A long-lived cloud key in CI is a standing target. An OIDC token is short-lived and tied to a workflow identity. It is not a silver bullet, but it reduces the blast radius when something leaks.
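As a sketch, an OIDC-based login for AWS might look like the step below. The aws-actions/configure-aws-credentials action is one common implementation; the role ARN is a placeholder, and other cloud providers have equivalent actions.

permissions:
  id-token: write
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Authenticate to AWS with OIDC
        uses: aws-actions/configure-aws-credentials@v4
        with:
          # Placeholder role, created for this workflow and scoped to deploy only
          role-to-assume: arn:aws:iam::123456789012:role/ci-deploy
          aws-region: us-east-1

No static cloud key is stored anywhere in this workflow. The cloud provider trusts the short-lived token GitHub issues for this specific workflow identity.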

Dependency confusion and typosquatting

Dependency confusion happens when internal package names overlap with public package names. If your package manager resolves the public version first, an attacker can publish a package with the same name and wait for your build to download it.

Typosquatting is simpler. The attacker publishes a package with a name that looks like a real one.

Examples of the pattern:

Real package → malicious lookalike:

  • requests → requestss

  • react-dom → reactdom

  • lodash → lodashts

  • company-internal-auth → a public package published under the same internal name

Package managers have improved. Teams have also learned. But these attacks still work because developers move fast and package names are easy to misread.

AI can make this worse.

When a coding assistant suggests a package name that looks plausible, a developer may trust it. The package might be real. It might be abandoned. It might be malicious. It might not exist, which creates another problem: attackers can publish the suggested package later and wait for people to install it.

That is not science fiction. It is a natural side effect of generated code meeting public registries.
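A cheap habit blunts this: before installing a package an assistant suggested, check that it exists and looks maintained. A sketch using standard npm commands; the package name is a placeholder:

# Does the package exist, who maintains it, and how old is it?
npm view some-suggested-package version time.created maintainers

# Evaluate it without letting its install scripts run
npm install some-suggested-package --ignore-scripts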

Container images

Dependencies are not only application packages.

A container image includes:

  • Operating system packages

  • Language runtimes

  • Shell tools

  • System libraries

  • Certificates

  • Package manager caches

  • Build layers

  • Application artifacts

If your base image is stale or untrusted, your application inherits that risk.

The same is true for container build steps. A Dockerfile can download scripts from the internet, install packages without pinning versions, and copy secrets into layers by mistake.

A bad image can pass code review because the risky part is not in application code. It is in the build environment.

Infrastructure as code

Terraform modules, Helm charts, Kubernetes manifests, and GitOps configs are part of the supply chain.

An attacker who changes infrastructure code may be able to:

  • Open a storage bucket

  • Add a public ingress

  • Create an admin role

  • Disable logging

  • Expose a secret

  • Change an image tag

  • Route traffic to a malicious endpoint

  • Add a privileged sidecar

AI agents can now edit infrastructure code too. That makes review even more important.

A generated Terraform change can look clean while quietly widening permissions. A generated Kubernetes manifest can work in staging while adding a risky security context. A generated GitHub Actions workflow can pass tests while exposing secrets to a pull request.

The risk is not that AI is evil.

The risk is that AI is confident, fast, and often unaware of your security boundaries.

The AI layer changes risk

AI-generated code is not automatically bad. Most of the time, it is just code.

That is exactly why it needs the same checks as human code.

The problem is not that AI writes insecure code every time. The problem is that AI changes the economics of code creation. It lowers the cost of generating code, glue, scripts, tests, configs, and workflows. That means more changes enter the system.

More changes need better gates.


AI creates code without provenance

Open source has a visible history. You can inspect commits, authors, reviews, releases, tags, issues, and maintainers.

AI-generated code is different.

The model gives you an answer. You may not know which examples influenced it. You may not know whether the pattern came from old code, vulnerable code, deprecated docs, or a Stack Overflow answer from ten years ago. You may not know whether the license risk is clean.

That does not mean you should never use AI-generated code. It means you should not treat generated code as magically original or safe.

A useful rule:

AI-generated code should be treated like a pull request from a fast junior developer who had access to the entire internet but no context about your production risk.

That sounds harsh. It is also fair.

The code might be good. It still needs review.

AI chooses dependencies

A lot of supply chain risk enters through dependency choice.

When an AI assistant suggests a package, it may optimize for convenience. It may not check:

  • Maintenance activity

  • Known vulnerabilities

  • Package ownership

  • Download source

  • License

  • Typosquatting risk

  • Whether the package exists

  • Whether the package is still recommended

  • Whether the package runs install scripts

  • Whether a built-in API would be enough

That is a real problem.

A human developer might ask for "a package to parse JWTs" or "a library for PDF extraction." The assistant may produce a working answer with a dependency. If the developer copies it, the supply chain changed.

One dependency can pull many more.

The package suggestion is not a small detail. It is a design decision.

AI writes build and deployment scripts

Build scripts are dangerous because they run in privileged places.

A coding agent may create:

  • GitHub Actions workflows

  • Dockerfiles

  • Bash scripts

  • Terraform plans

  • Helm charts

  • Release scripts

  • Package publishing scripts

  • Database migration scripts

These files often receive less careful review than application code. That is backwards.

A one-line workflow change can expose secrets. A Dockerfile can leak credentials into layers. A Terraform policy can grant broad access. A release script can publish the wrong artifact.

AI-generated infrastructure code deserves extra review, not less.

AI agents can take actions

The next step is not AI autocomplete. It is agents.

Agents can plan, call tools, read files, edit code, run commands, create pull requests, and sometimes deploy. That connects AI behavior to real systems.

OWASP's LLM risk list includes insecure plugin design and excessive agency. These risks fit agentic development tools very well. If a model can act through tools, the tool boundary becomes part of your supply chain.

The risk looks like this: a prompt injection in an issue or README can influence the agent. If the agent has write access and poor tool limits, that influence can become a real change.

This is why AI supply chain security is not only about scanning generated code. It is about controlling what agents can do.

AI helps attackers too

Defenders use AI. Attackers use it as well.

AI can help attackers:

  • Generate phishing emails for maintainers

  • Write package descriptions

  • Create convincing fake documentation

  • Generate malware variants

  • Analyze public repositories for secrets

  • Find weak CI workflows

  • Create typo packages at scale

  • Translate attacks across ecosystems

  • Write exploit glue faster

The Unit 42 Shai-Hulud analysis is a useful signal because it links a large npm supply chain campaign with suspected LLM assistance in the malicious script. The important point is not whether the whole attack was AI-driven. The point is that AI reduces the cost of attacker work.

That changes the defender's job.

Manual review alone cannot keep up with automated attack paths. You need automated guardrails too.

What a hardened pipeline looks like

A secure supply chain is not one tool. It is a set of controls that reinforce each other.

The goal is not to make attacks impossible. That is not realistic. The goal is to make risky changes visible, hard to land, hard to exploit, and easy to roll back.

A hardened pipeline answers five questions:

  1. What code changed?

  2. Who or what changed it?

  3. What dependencies entered the build?

  4. What artifact was produced?

  5. Can we prove that production is running the artifact we reviewed?

The layers below, from dependency control through policy as code, are how you answer them.

Dependency control

The first layer is dependency control.

You should know when dependencies change. You should know why they changed. You should know whether the change adds risky behavior.

Practical controls:

  • Use lockfiles.

  • Review dependency diffs in pull requests.

  • Block known malicious packages.

  • Use private registries or proxy registries for internal packages.

  • Prefer direct dependencies with healthy maintenance.

  • Remove unused dependencies.

  • Avoid packages that run install scripts unless you need them.

  • Pin versions for application dependencies.

  • Pin GitHub Actions to commit SHAs for high risk workflows.

  • Use package manager settings that reduce script execution in CI where possible.
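Part of this review can run automatically on every pull request. A minimal sketch using GitHub's dependency-review-action, which diffs dependency changes in a PR and flags known-vulnerable additions:

name: dependency-review

on: pull_request

permissions:
  contents: read

jobs:
  dependency-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/dependency-review-action@v4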

A dependency review should not only ask "does it have a CVE?"

It should ask:

  • Who maintains it?

  • How often is it updated?

  • Does it have many transitive dependencies?

  • Does it execute install scripts?

  • Does it request network access during install?

  • Is it replacing a small function we could write ourselves?

  • Is it a package suggested by AI without verification?

  • Is the license acceptable?

  • Is this package allowed in our organization?

Some teams create an "approved dependency path." That does not mean every package needs a week-long security review. It means the pipeline treats new packages differently from routine code changes.

That is smart.

Secret control

Secrets are the fuel of supply chain attacks.

A compromised package is bad. A compromised package with access to CI secrets is much worse.

Start with the obvious:

  • Do not commit secrets.

  • Scan repositories for secrets.

  • Scan pull requests for secrets.

  • Rotate secrets quickly when exposed.

  • Remove long-lived credentials from CI where possible.

  • Use OIDC for cloud access from CI.

  • Scope tokens to the minimum required permissions.

  • Separate build credentials from deploy credentials.

  • Do not expose production secrets to pull request workflows.

  • Keep self-hosted runners isolated.

GitHub's docs recommend least privilege for workflow secrets. Their OIDC documentation explains how Actions can authenticate to cloud providers without storing long-lived secrets in GitHub.

This is one of the best upgrades a team can make.

A short-lived token tied to a workflow is not perfect, but it is much safer than a cloud key that sits in a secret store for years.
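Repository scanning can run in the same pipeline. A sketch using the community gitleaks action; treat the exact inputs as assumptions, and prefer your platform's built-in secret scanning and push protection where available:

name: secret-scan

on: [push, pull_request]

permissions:
  contents: read

jobs:
  gitleaks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # Full history, so secrets in older commits are found too
          fetch-depth: 0

      - uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}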

Build isolation

A build should be clean, repeatable, and isolated.

That means:

  • Build on controlled infrastructure.

  • Avoid building release artifacts on developer laptops.

  • Keep build runners patched.

  • Use ephemeral runners for sensitive builds.

  • Limit network access where possible.

  • Keep deploy credentials out of build jobs.

  • Separate test jobs from release jobs.

  • Do not let untrusted pull requests access secrets.

  • Do not reuse dirty workspaces for release builds.

The build environment is part of the artifact. If the build runner is compromised, the output can be compromised even when the source code is clean.

This is where SLSA becomes useful.

SLSA, short for Supply-chain Levels for Software Artifacts, is a framework from OpenSSF. The SLSA site describes it as a checklist of standards and controls to prevent tampering, improve integrity, and secure packages and infrastructure.

That framing is helpful because SLSA is not a single product. It is a maturity path.

At a high level:

  • Source integrity: the code being built is the code that was reviewed

  • Build integrity: the build process was not quietly changed

  • Provenance: the artifact can be traced to source and build steps

  • Isolation: attackers cannot easily tamper with the build

  • Verification: consumers can check that artifacts match policy

You do not need to reach the highest maturity level on day one. Even basic provenance is useful because it gives incident responders a way to answer hard questions.

Where did this artifact come from? Which commit produced it? Which workflow built it? Which builder ran it? Was it signed? Was it altered after the build?

Without provenance, you are guessing.
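On GitHub, one way to get baseline provenance is artifact attestations. A workflow fragment as a sketch, assuming the actions/attest-build-provenance action and a hypothetical build output at dist/app.tar.gz:

permissions:
  contents: read
  id-token: write
  attestations: write

steps:
  - uses: actions/checkout@v4

  - name: Build
    run: npm ci --ignore-scripts && npm run build

  - name: Attach signed build provenance to the artifact
    uses: actions/attest-build-provenance@v1
    with:
      # Hypothetical artifact path
      subject-path: dist/app.tar.gz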

SBOMs

A Software Bill of Materials, or SBOM, is an inventory of software components.

Think of it as an ingredient list for software. It does not make the software safe by itself. It tells you what is inside, so you can respond when something inside becomes risky.

CISA published updated draft guidance for SBOM minimum elements in 2025. CISA describes SBOMs as a software transparency tool that helps manage software risk more effectively.

A useful SBOM should answer:

  • What components are included?

  • What versions are included?

  • Which packages are direct dependencies?

  • Which are transitive dependencies?

  • What licenses apply?

  • What package manager or ecosystem do they come from?

  • What hashes identify the components?

  • Which tool generated the SBOM?

  • When was it generated?

  • Which artifact does it describe?

SBOMs are most useful when they are generated during the build and attached to the artifact. A manually created SBOM gets stale fast.
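As a sketch, a CI step can generate an SPDX SBOM with a tool like syft and store it alongside the build; the image name is a placeholder:

- name: Generate SBOM for the release image
  run: syft ghcr.io/example/app:latest -o spdx-json > sbom.spdx.json

- name: Store the SBOM with the build
  uses: actions/upload-artifact@v4
  with:
    name: sbom
    path: sbom.spdx.json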

SBOMs also need monitoring.

If a new vulnerability appears in a dependency, you should be able to ask:

Which services include this component?

That is the operational value.

Signing and verification

Signing turns trust into something machines can check.

Sigstore is one of the most important projects in this space. The Sigstore project describes itself as a collection of open source tools that improve software supply chain security. Cosign, one of its tools, supports signing container images and other artifacts, including keyless signing.

The idea is simple.

Build an artifact. Sign it. Store the signature. Verify the signature before deployment.

This gives you a strong rule:

Production should run only artifacts that were built by approved systems, from approved repositories, under approved workflows.

That is much better than trusting anyone who can push an image tag.

Tags are mutable. Signatures and provenance give you better proof.
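A sketch of that loop with cosign's keyless mode. The image, digest, and identity values are placeholders you would pin to your own repository and release workflow:

# Sign the image by digest after the build (keyless, via CI OIDC identity)
cosign sign --yes ghcr.io/example/app@sha256:<digest>

# Verify before deploy: accept only images signed by the release workflow
cosign verify \
  --certificate-identity "https://github.com/example/app/.github/workflows/release.yml@refs/tags/v1.2.3" \
  --certificate-oidc-issuer "https://token.actions.githubusercontent.com" \
  ghcr.io/example/app@sha256:<digest>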

CI/CD hardening

CI/CD deserves its own checklist because it is often the highest value target.

For GitHub Actions, strong defaults include:

permissions:
  contents: read

Then grant more permissions only where needed.

For cloud deployment, prefer OIDC instead of stored keys. For third-party actions, pin sensitive workflows to commit SHAs. For pull requests from forks, never expose secrets. For self-hosted runners, treat public pull request execution as risky.

A safer GitHub Actions pattern looks like this.

name: build

on:
  pull_request:
  push:
    branches: [main]

permissions:
  contents: read

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Install dependencies without running scripts
        run: npm ci --ignore-scripts

      - name: Run tests
        run: npm test

For release workflows, use separate permissions.

name: release

on:
  push:
    tags:
      - "v*"

permissions:
  contents: read
  id-token: write
  packages: write

jobs:
  release:
    runs-on: ubuntu-latest
    environment: production

    steps:
      - uses: actions/checkout@v4

      - name: Build
        run: npm ci --ignore-scripts && npm run build

      - name: Authenticate with cloud using OIDC
        run: echo "Use cloud provider login action here"

      - name: Publish signed artifact
        run: echo "Build, sign, and publish"

This is not a complete workflow. It shows the shape.

The build job does not need deployment power. The release job gets only what it needs. Production requires an environment boundary. OIDC replaces static cloud keys.

AI guardrails

AI-generated changes need guardrails that match how AI works.

A practical policy might say:

  • AI can draft code, but a human must own the change.

  • AI can suggest dependencies, but dependency additions require review.

  • AI cannot modify release workflows without CODEOWNERS approval.

  • AI cannot create or rotate production secrets.

  • AI cannot deploy to production directly.

  • AI-generated infrastructure changes require plan review.

  • Agent tool access must be scoped by task.

  • Agent actions must be logged.

  • Prompt context from external systems must be treated as untrusted.

This is not anti-AI. It is normal engineering.

You would not give a new contractor unrestricted production access on their first day. You should not give an agent that access either.

Policy as code

Manual review is not enough.

Use policy as code to stop common mistakes before merge. Tools vary by stack, but the idea is the same.

Examples of policies:

  • Block public S3 buckets unless explicitly approved.

  • Block Kubernetes privileged containers.

  • Require pinned base images.

  • Require signed container images.

  • Require SBOMs for production artifacts.

  • Require CODEOWNERS review for workflow changes.

  • Block new dependencies with known critical vulnerabilities.

  • Block deploy jobs without OIDC.

  • Block containers running as root.

  • Block Terraform changes that grant wildcard admin access.

The value is consistency.

A tired reviewer can miss a risky permission. A policy check should not.
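As one concrete flavor, Kubernetes admission policies can be written in YAML with a tool like Kyverno. A minimal sketch blocking privileged containers; the policy name is arbitrary, and the pattern uses Kyverno's optional-field anchors:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-containers
spec:
  validationFailureAction: Enforce
  rules:
    - name: deny-privileged
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Privileged containers are not allowed."
        pattern:
          spec:
            containers:
              # securityContext and privileged may be absent, but if
              # privileged is set, it must be false
              - =(securityContext):
                  =(privileged): "false"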

A practical roadmap

The hard part about supply chain security is not knowing what good looks like. The hard part is making progress without stopping engineering work.

You do not need to do everything at once.

Here is a roadmap that works for small teams and can grow with larger ones.

Phase one: know what you ship

Start with visibility.

If you do not know what you ship, you cannot secure it.

Do this first:

  • Turn on dependency alerts and scheduled updates (see the sketch after this list).

  • Generate SBOMs for release artifacts.

  • Store SBOMs with builds.

  • Track direct and transitive dependencies.

  • Identify production repositories.

  • Identify who can publish packages.

  • Identify who can deploy.

  • Inventory CI secrets.

  • Inventory third-party GitHub Actions.

  • Find self-hosted runners.

  • Map which services use which base images.
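Some of this can live in code from day one. A sketch of a .github/dependabot.yml that schedules dependency update checks for an npm project; the alerts themselves are a repository setting:

# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"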

This phase is not glamorous. It is useful.

The output should be a basic map.

Do not skip this.

Most supply chain incidents become worse because teams do not know where the affected package is used, which token leaked, or which build produced the artifact.

Phase two: reduce easy risk

Next, remove obvious dangerous defaults.

Focus on the things attackers love:

  • Long-lived cloud keys in CI

  • Broad GitHub tokens

  • Unpinned deployment workflows

  • Secrets exposed to pull requests

  • Unreviewed workflow changes

  • Overly broad package publish rights

  • Unused dependencies

  • Old base images

  • Install scripts running in CI without need

Quick wins:

Risk → safer move:

  • Long-lived cloud keys → use OIDC

  • Broad CI token permissions → set least privilege permissions

  • Unreviewed workflow changes → add CODEOWNERS

  • Unknown dependencies → add dependency review

  • Secret leaks → add secret scanning and rotation playbooks

  • Unsigned artifacts → sign release artifacts

  • Mutable image tags → deploy by digest

  • AI adding packages freely → require review for dependency changes

You will not fix everything. That is fine.

Fix the paths that lead to production first.

Phase three: protect the build

After visibility and quick wins, focus on build integrity.

A strong build process should be:

  • Automated

  • Isolated

  • Repeatable

  • Provenanced

  • Signed

  • Verifiable

That means release artifacts should come from CI, not laptops. The build should generate provenance and an SBOM. The artifact should be signed. Deployment should verify policy before running it.

A healthy release chain looks like this: a reviewed commit triggers a CI build on a clean runner, the build generates an SBOM and provenance, the artifact is signed and pushed to a controlled registry, and deployment verifies the signature and deploys by digest.

The key idea is traceability.

If someone asks, "Why is this running in production?" you should have a better answer than "because the tag says latest."

Phase four: govern AI-generated code

AI governance should be boring.

That is good.

Start with a written policy that developers can actually follow.

A practical policy might be:

  1. AI-generated code is allowed.

  2. The human who submits the pull request owns the code.

  3. New dependencies suggested by AI require dependency review.

  4. Security-sensitive files require CODEOWNERS review.

  5. AI tools cannot receive secrets.

  6. AI agents cannot deploy without explicit approval.

  7. Generated code must pass the same tests and scans as human code.

  8. Agent actions must be logged when they touch repositories or tools.

Then make the pipeline enforce parts of it.

Add CODEOWNERS entries for paths like these (a sketch follows the list):

  • .github/workflows/*

  • Dockerfile

  • docker-compose.yml

  • terraform/**

  • helm/**

  • k8s/**

  • package.json

  • package-lock.json

  • pyproject.toml

  • requirements.txt

  • go.mod

  • Cargo.toml
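A minimal sketch of the matching CODEOWNERS entries; the team names are placeholders:

# .github/CODEOWNERS
/.github/workflows/   @org/platform-team
Dockerfile            @org/platform-team
terraform/            @org/infra-team
helm/                 @org/platform-team
package.json          @org/app-team
package-lock.json     @org/app-team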

This does not block AI. It puts review where risk enters.

Phase five: prepare for incidents

You will still have incidents.

The question is how fast you can respond.

For supply chain incidents, every team should know how to:

  • Find where a package is used.

  • Remove or pin a package version.

  • Rebuild affected artifacts.

  • Rotate leaked tokens.

  • Disable compromised workflows.

  • Revoke package publish tokens.

  • Audit recent releases.

  • Search logs for suspicious CI behavior.

  • Check which artifacts reached production.

  • Notify customers if required.

Write the playbook before you need it.

A simple incident flow: confirm the alert, find where the affected component is used, revoke and rotate exposed credentials, pin or remove the bad version, rebuild and redeploy affected artifacts, then audit recent releases and CI logs.

The teams that handle supply chain incidents well are not always the teams with the most tools. They are the teams with the clearest ownership and the fastest path from alert to rebuild.
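A few of those steps map to one-line commands. A sketch; the package name, token ID, and workflow name are placeholders, and your registry and CI will differ:

# Where is the affected package used in this repository?
npm ls compromised-package --all

# Revoke a leaked npm token (ID is a placeholder)
npm token revoke <token-id>

# Disable a compromised GitHub Actions workflow
gh workflow disable release.yml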

What this means for developers

This whole topic can sound like a security team problem.

It is not.

Developers make supply chain decisions every day.

You make one when you install a package. You make one when you copy a GitHub Actions snippet. You make one when you accept an AI-generated dependency. You make one when you add a Docker base image. You make one when you approve a pull request that changes a workflow.

That does not mean every developer must become a security engineer.

It means every developer should learn a few instincts:

  • Fewer dependencies are easier to defend.

  • Lockfiles matter.

  • Build scripts deserve review.

  • CI secrets are production risk.

  • AI-generated code still needs ownership.

  • Package install is code execution.

  • A signed artifact is better than a trusted tag.

  • SBOMs help you answer questions during an incident.

  • Provenance tells you where software came from.

  • Fast code without traceability becomes expensive later.

The future of software supply chain security is not one big scanner. It is a culture where code, dependencies, builds, artifacts, and AI agents are all treated as part of the same system.

That is the shift.

AI did not create the software supply chain problem. Open source scale, automation, and cloud delivery already did that.

AI made the problem faster.

So the answer is not to stop using AI. The answer is to build pipelines that can handle speed without losing trust.

The best teams will use AI to write faster, test faster, review better, and detect risk earlier. They will also put hard boundaries around credentials, builds, artifacts, and production changes.

That is where professional software engineering is going.

Not less process. Better process.

Not less trust. Verifiable trust.

Practical checklist

Use this as a starting point for your own project.

Area → minimum useful control:

  • Dependencies: lockfiles, dependency review, remove unused packages

  • AI-generated code: human owner, normal tests, dependency review

  • CI secrets: least privilege, OIDC, no secrets for untrusted PRs

  • Workflows: CODEOWNERS, pinned actions, limited permissions

  • Builds: clean CI builds, no release artifacts from laptops

  • Artifacts: signed images, deploy by digest, registry access controls

  • SBOM: generate during build and store with artifacts

  • Provenance: attach build provenance to release artifacts

  • Containers: pin base images, scan images, rebuild often

  • Infrastructure: policy checks for risky permissions

  • Incidents: token rotation and rebuild playbooks

The point is not perfection.

The point is reducing blind trust.

References

V

This is a strong point. AI is no longer just helping write code, it’s changing how code enters the supply chain. That means teams need better review, dependency checks, secrets scanning, and provenance tracking. Faster code is useful, but only if the supply chain stays trustworthy.