Velocity Without Awareness
Your team shipped more code last quarter than ever before. You already know that. What you probably do not know is how much of it was written by a machine.
GitHub Copilot now generates 46 percent of the code in files where it is enabled, according to GitHub. Over 90 percent of Fortune 100 companies have adopted it. More than 50,000 organizations use it daily. And that is just one tool. Add Cursor, Claude Code, Amazon CodeWhisperer, Gemini Code Assist, and a growing list of autonomous coding agents, and you start to see the shape of the problem. The volume of code reaching production has increased dramatically. The volume of human attention available to review it has not.
This is not a future problem. It is happening now.
We talk to engineering and IT operations leaders every week. The pattern is consistent. Deployment frequency is up. Pull request volume is up. But the teams downstream, the ones who operate the infrastructure, manage the incidents, and answer the 2 AM pages, have no better visibility into what changed than they did two years ago. The velocity increased. The awareness did not.
The Data Is Already Bad
The 2024 DORA State of DevOps Report measured this directly: a 25 percent increase in AI adoption was associated with an estimated 7.2 percent drop in delivery stability and a 1.5 percent decline in throughput. That is not a rounding error. That is a measurable regression in the reliability of software delivery, correlated with the very tools that were supposed to make teams faster.
The explanation is not complicated. AI makes it easy to write more code. More code means larger changesets. DORA’s data has consistently shown that larger batch sizes introduce more risk. When you can generate a 400-line pull request in ten minutes, the temptation to ship it as one unit is real. Small, incremental commits become less appealing when the machine can produce an entire feature in one pass.
GitClear’s analysis of 211 million changed lines of code tells a similar story from a different angle. Code churn, defined as newly added lines revised or deleted within two weeks, rose from 5.5 percent in 2020 to 7.9 percent in 2024. Code duplication linked to AI-assisted development increased fourfold. And for the first time in the history of their dataset, developers are copy-pasting code more often than they are refactoring it. Refactoring dropped from 25 percent of changed lines in 2021 to under 10 percent in 2024.
Read those numbers again. More code is being written. Less of it is being refactored. More of it is being revised shortly after it ships. The code is moving faster, but the quality signals are moving in the wrong direction.
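GitClear's churn definition is simple enough to compute yourself. A minimal sketch, assuming you have already extracted, for each added line, the date it was added and the date (if any) it was later revised or deleted; the data structure and dates here are illustrative, not GitClear's actual pipeline:

```python
from datetime import date, timedelta

CHURN_WINDOW = timedelta(days=14)  # "revised or deleted within two weeks"

def churn_rate(added_lines):
    """added_lines: list of (added_on, revised_on) pairs, one per new line.

    revised_on is None if the line was never touched again.
    Returns the fraction of new lines churned within the window.
    """
    if not added_lines:
        return 0.0
    churned = sum(
        1
        for added_on, revised_on in added_lines
        if revised_on is not None and revised_on - added_on <= CHURN_WINDOW
    )
    return churned / len(added_lines)

lines = [
    (date(2024, 3, 1), date(2024, 3, 5)),   # revised after 4 days  -> churn
    (date(2024, 3, 1), None),               # never revised         -> stable
    (date(2024, 3, 1), date(2024, 4, 20)),  # revised after 50 days -> stable
    (date(2024, 3, 2), date(2024, 3, 10)),  # revised after 8 days  -> churn
]
print(churn_rate(lines))  # 0.5
```

The window is the whole metric: a line rewritten six weeks later is normal evolution; a line rewritten six days later suggests the first version never really worked.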
And 39 percent of developers in the DORA survey reported little to no trust in AI-generated code. The people writing with these tools do not fully trust the output. The people operating the systems running that output often do not even know it exists.
Three Risks Nobody Is Tracking
The velocity problem creates specific, concrete risks that most organizations are not equipped to detect.
1. AI-Assisted Pull Requests That Skip Real Review
Research on agent-generated pull requests found a bifurcated pattern: 28.3 percent are instant merges with no iterative review. The rest require back-and-forth, but the review itself is often performed by another AI tool. A CodeRabbit analysis of 470 open-source PRs found that AI-generated code creates 1.7 times more issues than human-written code. Meanwhile, companies with higher AI usage are pushing larger PRs, and larger PRs get less careful review. That pattern was well documented long before AI entered the picture; what has changed is the scale.
When an engineer uses Copilot to generate a Terraform module and another AI tool reviews the PR, the infrastructure change reaches production with minimal human scrutiny. Nobody told the SRE team. The change does not appear in the CMDB. The ITSM system has no record. It is a shadow change by default.
2. AI-Generated Infrastructure as Code
This is the risk that keeps us up at night. AI coding tools do not just write application code. They generate Kubernetes manifests, Terraform configurations, Helm charts, and CI/CD pipeline definitions. A prompt like “create a Terraform module for a multi-region RDS setup” produces working IaC in seconds. That IaC defines security groups, networking rules, backup policies, and access controls. It ships through the same pipeline as any other code change.
The difference is the blast radius. A bug in application code might break a feature. A bug in IaC can expose a database to the public internet, disable encryption at rest, or remove a firewall rule that was protecting an entire subnet. When that IaC was generated by an AI tool and reviewed by a developer who is not an infrastructure specialist, the risk compounds.
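One practical guard is to scan the machine-readable plan (`terraform show -json plan.out`) for dangerous attributes before anything is applied. A minimal sketch, assuming the standard Terraform plan JSON layout with a `resource_changes` list; the two checks shown are illustrative, not an exhaustive policy:

```python
def risky_changes(plan):
    """Flag a couple of high-blast-radius attributes in a Terraform plan dict."""
    findings = []
    for rc in plan.get("resource_changes", []):
        after = (rc.get("change") or {}).get("after") or {}  # None on deletes
        addr = rc.get("address", "?")
        if rc.get("type") == "aws_security_group":
            for rule in after.get("ingress") or []:
                if "0.0.0.0/0" in (rule.get("cidr_blocks") or []):
                    findings.append(f"{addr}: ingress open to the internet")
        if rc.get("type") == "aws_db_instance" and after.get("storage_encrypted") is False:
            findings.append(f"{addr}: encryption at rest disabled")
    return findings

# A tiny inline plan fragment standing in for `terraform show -json` output.
plan = {
    "resource_changes": [
        {
            "address": "aws_security_group.db",
            "type": "aws_security_group",
            "change": {
                "actions": ["create"],
                "after": {"ingress": [{"from_port": 5432, "cidr_blocks": ["0.0.0.0/0"]}]},
            },
        }
    ]
}
for finding in risky_changes(plan):
    print("RISK:", finding)
```

A check like this costs one CI step and catches exactly the class of mistake a non-specialist reviewer is most likely to wave through.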
3. Automated Hotfixes and Agent-Driven Changes
Autonomous coding agents are already submitting pull requests, fixing bugs, and deploying changes with minimal human involvement. 57 percent of companies report having AI agents running in production. Some of those agents are modifying feature flags in response to performance metrics. Others are generating and applying config changes during incidents. OWASP released a Top 10 for Agentic Applications in late 2025, identifying tool misuse, privilege abuse, and unexpected code execution as primary risks.
When an AI agent applies a hotfix at 3 AM because a latency threshold was breached, who gets notified? Which downstream teams know about the change? What system of record captured it? In most organizations today, the answers are: nobody, none, and nothing.
The Gap in Your Toolchain
Here is the core problem. Every tool in the modern delivery pipeline was designed for the previous era of change velocity.
Your CI/CD system tracks the deployments it runs. It knows nothing about feature flag toggles, manual cloud console changes, or agent-driven modifications. Your ITSM tool tracks only the changes someone manually creates a ticket for. Your CMDB reflects whatever someone last updated it with, which is often months out of date. Your monitoring stack tells you something broke; it does not tell you what changed.
None of these tools answer the question that matters during an incident: “What changed in the last two hours across all systems that could have caused this?”
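Once change events from every source land in one log, that question becomes a filter rather than an archaeology project. A sketch over a hypothetical unified event list; the field names (`source`, `at`, `services`, `summary`) are assumptions, not any particular platform's schema:

```python
from datetime import datetime, timedelta, timezone

def changes_since(events, services, window=timedelta(hours=2), now=None):
    """Return events inside the window that touch any of the given services."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - window
    return [e for e in events if e["at"] >= cutoff and services & set(e["services"])]

now = datetime(2025, 1, 10, 3, 0, tzinfo=timezone.utc)  # the 3 AM incident
events = [
    {"source": "ci", "at": now - timedelta(minutes=20),
     "services": ["checkout-api"], "summary": "deploy v412"},
    {"source": "flags", "at": now - timedelta(minutes=75),
     "services": ["checkout-api"], "summary": "enable new-pricing"},
    {"source": "cloud-audit", "at": now - timedelta(hours=5),
     "services": ["checkout-api"], "summary": "security group edit"},
]
for e in changes_since(events, {"checkout-api"}, now=now):
    print(e["source"], e["summary"])
```

The hard part is not this query; it is getting the CI deploy, the flag toggle, and the console edit into the same list in the first place.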
And none of them are designed to handle the volume of changes that AI tools now produce. When deployment frequency doubles or triples because AI is accelerating development, a weekly Change Advisory Board becomes an absurdity. A manual change request process that worked at 10 deployments per week collapses at 50. The tools and processes that were adequate for human-speed development are not adequate for AI-speed development. The gap between what is changing and what your organization is aware of grows wider every month.
This is the gap that causes outages to last hours instead of minutes. Not because teams are slow to respond, but because they spend the first 45 minutes of every incident trying to figure out what changed. When 62 percent of major outages trace back to change failures, that diagnostic blind spot is the most expensive gap in your stack.
Change Intelligence as the Coordination Layer
The answer is not slowing AI down. That ship has sailed. 90 percent of engineering teams use at least one AI coding tool. Half use two or more. The productivity benefits are real. Nobody is going back.
The answer is a change intelligence layer that sits across every change source and provides the awareness that AI velocity has stripped away. Not more process. Not more tickets. Better observation.
A change intelligence platform ingests change events from every source: CI/CD pipelines, feature flag platforms, cloud audit logs, IaC drift detection, and agent activity logs. It correlates those changes with service dependencies. It risk-scores them automatically. And it routes notifications to the teams that need to know, before the incident starts, not during it.
When an AI agent generates a Terraform change that modifies a security group, the platform detects the change, identifies which services are affected, scores the risk based on historical patterns, and alerts the security and SRE teams. When a developer merges a Copilot-generated PR that touches a shared API contract, the platform notifies the downstream consumers. When deployment frequency spikes because AI is accelerating a team’s output, the platform surfaces the pattern so operations can prepare.
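The score-and-route step described above can be sketched as a simple rules layer. The weights, signal names, and team mappings below are invented for illustration; a real platform would learn weights from historical change-failure data:

```python
RISK_WEIGHTS = {  # hypothetical weights per risk signal
    "security_group": 40,
    "shared_contract": 25,
    "iac": 20,
    "ai_generated": 15,
}

ROUTES = {  # which teams care about which signals (hypothetical mapping)
    "security_group": {"security", "sre"},
    "shared_contract": {"downstream-consumers"},
    "iac": {"sre"},
}

def score_and_route(change):
    """Sum the weights of the signals present; notify every mapped team."""
    signals = [s for s in RISK_WEIGHTS if change.get(s)]
    score = sum(RISK_WEIGHTS[s] for s in signals)
    teams = set().union(*(ROUTES.get(s, set()) for s in signals)) if signals else set()
    return score, teams

# An AI-generated Terraform change touching a security group:
change = {"iac": True, "security_group": True, "ai_generated": True}
score, teams = score_and_route(change)
print(score, sorted(teams))  # 75 ['security', 'sre']
```

The point of the sketch is the shape, not the numbers: scoring happens on every change automatically, and routing happens before an incident, not during one.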
This is the coordination layer that AI-accelerated development requires. Not a gate that slows things down, but an awareness system that keeps pace. The DORA data showing a 7.2 percent stability drop is not an argument against AI. It is an argument for building the visibility infrastructure that AI-speed development demands.
We built citk for exactly this problem. Because reducing change failure rate at AI speed requires a different approach entirely than reducing it at human speed. The changes are faster, more numerous, and generated by tools that do not attend standup meetings or read the team Slack channel. If your awareness system cannot keep pace with your generation system, you are flying blind.
Here is the question that should make you uncomfortable: how many AI-generated changes reached your production environment last week? Do you know? Can you list them? Can you tell me which downstream teams were notified?
If the answer is no, the gap is already there. And it is getting wider every sprint.