Interpretive note: this is not a technical post-mortem or a blame exercise.
It is a digital trust reading of the incident: what the movement of access reveals about
delegated trust, persistent credentials, unofficial tooling, and the way modern systems fail across boundaries.
Minimal context
The incident originated outside of Vercel, with the compromise of Context.ai through infostealer malware.
This led to the theft of access tokens associated with integrated Google Workspace environments.
Those tokens enabled access without triggering authentication controls such as multi-factor authentication.
From there, attackers moved laterally into a Vercel employee’s environment
and into internal systems used to support customers. Data not classified as sensitive
was accessible with fewer protections.
No single system failed in isolation. The system operated as designed.
Surface reading
A third-party compromise
- External tool compromised
- Tokens stolen
- Access reused
- Customer-support systems reached
Accurate, but incomplete.
Trust interpretation
Delegated trust moving through a system
- Trust extended across integrations
- Access persisted beyond authentication
- Unofficial tooling widened the boundary
- Classification did not match attacker behaviour
This is the more useful signal.
System behaviour observed
Trust was delegated across systems
The incident did not exploit a single vulnerability.
It moved through a chain of trusted integrations: an AI tool, a workspace, a platform,
and into customer environments.
Each connection was legitimate. The aggregate was not fully understood.
Modern systems are not simply integrated - they are interdependent. Trust is extended with each connection,
often without a clear model of where it ultimately resides.
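The chain described above can be sketched as a small reachability exercise. This is a hypothetical model, not a map of the actual systems involved: each node name is illustrative, and each edge stands for one legitimate trust grant. The point is that the reachable set, not any single edge, is the effective exposure.

```python
from collections import deque

# Hypothetical sketch: each trust grant in the chain is a directed edge.
# Node names are illustrative stand-ins, not the actual systems involved.
trust_grants = {
    "ai_tool":         ["workspace"],        # integration token
    "workspace":       ["employee_env"],     # delegated identity
    "employee_env":    ["support_systems"],  # internal access
    "support_systems": ["customer_data"],    # operational reach
}

def transitive_reach(start, grants):
    """Everything reachable by following delegated trust, breadth-first."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for target in grants.get(node, []):
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return seen

# Each individual edge is legitimate; the full reachable set is the
# aggregate that was "not fully understood".
print(sorted(transitive_reach("ai_tool", trust_grants)))
# → ['customer_data', 'employee_env', 'support_systems', 'workspace']
```

Reviewing edges one at a time, each grant looks reasonable; only the transitive closure shows where trust ultimately resides.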
Authentication was bypassed by design
Access was not gained through login. It was gained through reuse of tokens issued after authentication.
This is not a failure of authentication controls. It is a consequence of how modern systems persist access.
Controls continue to focus on entry. The system is increasingly controlled after entry.
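A minimal sketch makes the "entry versus after entry" point concrete. This is not Vercel's or Google's implementation, just a toy session store under the common assumption that MFA is enforced at issuance and never re-checked at use time, so a replayed token passes validation from anywhere.

```python
import time

# Toy session store (illustrative only): tokens are validated purely on
# existence and expiry. MFA happened once, at login; nothing re-checks it.
SESSIONS = {}

def issue_token(user, mfa_passed, ttl=3600):
    assert mfa_passed, "MFA is enforced at login only"
    token = f"tok-{user}-{len(SESSIONS)}"  # stand-in for a random token
    SESSIONS[token] = {"user": user, "expires": time.time() + ttl}
    return token

def validate(token):
    # Asks only "is this token live?", never "who is presenting it,
    # from where, on what device?"
    session = SESSIONS.get(token)
    return session is not None and session["expires"] > time.time()

token = issue_token("employee", mfa_passed=True)
# An attacker who steals the token simply replays it:
print(validate(token))  # → True: access without re-authentication
```

The control exists and worked; it is simply positioned at entry, while the token carries authority long after entry.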
Tooling extended the trust boundary
The originating platform was not part of a formally approved toolset.
It was adopted to improve productivity, with broad permissions granted in the process.
This is no longer an edge case. The effective trust boundary of an organisation is now defined
as much by its unofficial tooling as by its sanctioned systems.
Sensitivity classification failed quietly
Data that was not marked as sensitive was accessible with fewer protections.
The system differentiated. The attacker did not.
Classification models assume that risk aligns neatly with predefined categories. In practice,
attackers move across whatever is reachable.
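The classification gap can be sketched as a simple policy lookup. The labels, records, and control sets below are hypothetical, not taken from the incident; the sketch only contrasts the defender's label-keyed view with the attacker's reachability-keyed view.

```python
# Hypothetical policy sketch: protections keyed to sensitivity labels.
# Labels, records, and control sets are illustrative, not from the incident.
CONTROLS = {
    "sensitive": {"mfa", "audit", "encryption"},
    "internal":  {"audit"},
    "public":    set(),
}

records = [
    {"id": "billing_export",  "label": "sensitive"},
    {"id": "support_tickets", "label": "internal"},  # still valuable to an attacker
    {"id": "status_history",  "label": "public"},
]

# The defender's view: controls differ by label.
for r in records:
    print(r["id"], "->", sorted(CONTROLS[r["label"]]))

# The attacker's view: anything reachable with a valid token,
# i.e. everything not gated by a per-use control.
reachable = [r["id"] for r in records if "mfa" not in CONTROLS[r["label"]]]
print(reachable)  # → ['support_tickets', 'status_history']
```

The policy behaves exactly as written; the mismatch is that "non-sensitive" in the model does not mean "low value" to whoever is moving through the system.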
| Observed behaviour | Trust implication | Governance question |
| --- | --- | --- |
| Integrated tool access | Trust extended outside the formal platform boundary | Which tools can confer meaningful access to core environments? |
| Token reuse | Authentication became a prior event, not the active control | How is access governed after authentication has already occurred? |
| Lateral movement | Legitimate connections formed a usable pathway | Where can failure propagate across connected systems? |
| Lower protection for non-sensitive data | Classification did not reflect operational or attacker value | Does our sensitivity model match how exposed data can be used? |
Implications for digital trust
This incident highlights a shift in how trust operates in modern environments.
Control is no longer centralised. Boundaries are no longer clear. Authentication is no longer the primary gate.
Instead, trust is distributed:
- across integrations
- across identities
- across systems that were never designed to be considered together
What matters is not only whether controls exist, but:
- where they sit
- how access persists beyond them
- how failure propagates across connected systems
The question is not only: “Was a sensitive system breached?”
The better questions are:
- Where was trust delegated?
- Where did access persist?
- Which unofficial or adjacent systems became part of the effective boundary?
- How far could the failure move before anyone saw it?
Closing
The incident did not break a single system.
It moved through a chain of trusted connections - quietly, and largely as designed.
That is the more useful signal.
Related: TrustSurface Framework · What Owning the Status Surface Looks Like