Who can you trust?
- RG
- Apr 16
Updated: Apr 23

“Trust everybody, but cut the cards.”
Trust is a strange thing. I knew I wanted to talk about trust, so I started searching for quotes about it. Most of them were variations of “trust yourself”, “trust no one fully”, “trust only in god”, or “once trust is broken, it’s gone forever”. I found a few which came close to what I wanted, but the sources were sufficiently problematic that I didn’t want to use them. (For example, Reagan’s “trust, but verify” makes me uncomfortable, particularly since we now see a later president who apparently doesn’t bother to follow the “verify” part of this Russian proverb, though he does appear to follow the “Russian” part...)
The major problem with many of the quotes was that they were binary, implying that trust is either absolute or non-existent. Like most things, trust is a continuum, much as respect is. I think everyone is due courtesy and dignity, but respect is something that develops over time. The idea that people should “respect their elders” simply because they are older is ridiculous.
But I digress.
Most people with even a peripheral exposure to computers have probably heard the term “zero trust”, but usually as a marketing term, to make a system seem more secure. It is rarely defined clearly when it’s used, and the actual meaning of the term is often left to the imagination.
Terms like “zero trust architecture (ZTA)” or “perimeterless security” refer to the principle that users and devices should not be “trusted” by default. Traditionally, most networks followed a “perimeter-based” model where a user connected to a server or network, and was then “trusted” across that network.
The term “zero trust” was coined in 1994 by Stephen Paul Marsh in his doctoral thesis on computer security, where he noted that the traditional “M&M” model of security (a hard, crunchy perimeter around a soft, chewy interior) was inadequate to the needs of modern networks. More than a decade later, in 2010, the term became mainstream with the famous paper “No More Chewy Centers: Introducing The Zero Trust Model Of Information Security”, by John Kindervag of Forrester Research. As the paper notes, “trust” in the InfoSec context is inherently different from “trust” in the human context, and a network should, in theory, “Never trust. Always verify.”
One of the earliest major implementations of a zero trust architecture is Google’s BeyondCorp, which was created in response to Operation Aurora, a series of cyber attacks generally assumed to have been associated with the Chinese government.
In contrast to traditional networks, which generally gave any user on the network access to its data and services without further authentication, BeyondCorp grants no inherent trust to the network itself. Instead, it uses a “Trust Inferrer” to examine the user’s device and help determine the degree of trust to be granted, consults the “Device Inventory Database” to uniquely identify that device, and then uses the “Access Control Engine” to decide whether access should be granted.
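To make the flow a bit more concrete, here is a minimal, hypothetical sketch (in Python) of that style of access decision. The component names mirror the ones above; the device attributes, trust tiers, and policy are invented purely for illustration, and real deployments are far more involved.

```python
from dataclasses import dataclass

# Hypothetical, greatly simplified sketch of a BeyondCorp-style access check.
# The component names mirror the description above; everything else is invented.

@dataclass
class Device:
    device_id: str
    os_patched: bool
    disk_encrypted: bool

# "Device Inventory Database": uniquely identifies known devices.
DEVICE_INVENTORY = {
    "laptop-1234": Device("laptop-1234", os_patched=True, disk_encrypted=True),
}

def infer_trust_tier(device: Device) -> str:
    """The "Trust Inferrer": derive a trust tier from the device's observed state."""
    return "high" if device.os_patched and device.disk_encrypted else "low"

def access_control_engine(device_id: str, resource: str) -> bool:
    """The "Access Control Engine": grant access only if the device is known
    and its inferred trust tier satisfies the resource's (made-up) policy."""
    device = DEVICE_INVENTORY.get(device_id)
    if device is None:                       # unknown device gets no inherent trust
        return False
    required = "high" if resource.startswith("sensitive/") else "low"
    return required == "low" or infer_trust_tier(device) == "high"

print(access_control_engine("laptop-1234", "sensitive/payroll"))  # True
print(access_control_engine("unknown-device", "wiki/home"))       # False
```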
The idea behind zero trust is that a given device and/or user account might be compromised at any time, so each service on the network makes its own determination before access is granted, and will periodically request re-authentication for any of a number of reasons: perhaps it has been a while since the user last authenticated, or they are connecting from an unexpected location, or doing something they do not normally do.
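Those re-authentication triggers are easy to sketch as well. The checks and thresholds below are placeholders of my own, not drawn from any particular product:

```python
from datetime import datetime, timedelta, timezone

MAX_SESSION_AGE = timedelta(hours=8)  # arbitrary threshold, for illustration only

def needs_reauthentication(last_auth: datetime, location: str, action: str,
                           usual_locations: set, usual_actions: set) -> bool:
    """Ask the user to re-authenticate if the session is old, the location is
    unexpected, or they are doing something they do not normally do."""
    session_age = datetime.now(timezone.utc) - last_auth
    return (session_age > MAX_SESSION_AGE
            or location not in usual_locations
            or action not in usual_actions)
```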
Easier said than done, but this model is spreading across a great many services, and is improving the overall security of the internet.
But what about supply chains? Can we trust them?
Modern software depends on a variety of tools, libraries, and services: programming language features and libraries, databases, frameworks, and various other components which are essentially “trusted” by default (in practice, if not necessarily in theory). The security and reliability of the software supply chain depend on measures such as code signing and SBOMs (Software Bills of Materials), but we have a long way to go before we can fully assess the trustworthiness of our tools. Signing the software is only the first step; we also need to establish processes and policies for third-party review and confirmation.
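As one tiny, concrete piece of this, here is a hypothetical sketch of checking a downloaded artifact against the digest recorded for it in an SBOM-like record. Real SBOMs use standard formats such as SPDX or CycloneDX; the flat dictionary and the digest value below are just placeholders.

```python
import hashlib

# Placeholder SBOM-style entry; real SBOMs (SPDX, CycloneDX) carry much more.
sbom_entry = {
    "name": "example-lib",
    "version": "1.2.3",
    "sha256": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def matches_sbom(path: str, entry: dict) -> bool:
    """Return True only if the file's SHA-256 digest matches the recorded one."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest == entry["sha256"]
```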
In principle, Al could develop a tool, sign it, and then have Bob review it and confirm that it passes a set of criteria. Bob would then counter-sign the tool. When Cat is evaluating the tool, they can confirm that Al built and signed it, and that Bob reviewed and counter-signed. Depending on the degree of trust that Cat has in Al and/or Bob, they would be able to evaluate the risk of using the tool. Also, Al and Bob will have reputations which can influence the degree of trust an organization might grant, depending on their risk management policies.
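Here is a rough sketch of that flow, using Ed25519 keys from Python’s cryptography package. It covers only the cryptographic core; a real scheme would also need key distribution, revocation, and a signed statement of exactly what Bob reviewed and against which criteria.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

tool = b"the released artifact bytes"   # stand-in for the actual binary

# Al builds the tool and signs it.
al_key = Ed25519PrivateKey.generate()
al_pub = al_key.public_key()
al_sig = al_key.sign(tool)

# Bob reviews the tool, then counter-signs the artifact together with Al's signature.
bob_key = Ed25519PrivateKey.generate()
bob_pub = bob_key.public_key()
bob_sig = bob_key.sign(tool + al_sig)

# Cat verifies both signatures before deciding how much trust to grant.
def cat_checks(tool: bytes, al_sig: bytes, bob_sig: bytes) -> bool:
    try:
        al_pub.verify(al_sig, tool)              # Al built and signed this exact artifact
        bob_pub.verify(bob_sig, tool + al_sig)   # Bob reviewed and counter-signed that build
        return True
    except InvalidSignature:
        return False

print(cat_checks(tool, al_sig, bob_sig))  # True
```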
While signing of software is becoming more and more common, I’m not aware of anyone performing third-party security reviews and counter-signing in the way described above. This would likely require significant changes to the software ecosystem, but I think it could be an extraordinarily effective and useful thing to do.
And what about hardware?
There is signing of firmware and such, but I am not aware of any industry-wide standards for managing this across all devices. That said, Apple’s new “Private Cloud Compute” (basically, cloud-based AI processing) offers “verifiable transparency” of both software and hardware.
On the hardware side, Apple provides a cryptographic key to identify each Apple silicon SoC (System on Chip), and scans each circuit board at high resolution (both optically and by X-ray) to create permanent images for reference and comparison, and to validate that the hardware has not been altered. They then create a signed manifest of the keys of each system and microcontroller, activate a built-in tamper switch, and record all of this information.
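A hypothetical sketch of what a signed manifest of that kind might look like, reduced to its bare bones; none of this reflects Apple’s actual formats or tooling:

```python
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Placeholder component identities and reference-image hashes.
manifest = {
    "soc_public_key": "placeholder-soc-key",
    "microcontroller_keys": ["placeholder-mcu-key-1", "placeholder-mcu-key-2"],
    "optical_scan_sha256": hashlib.sha256(b"optical scan bytes").hexdigest(),
    "xray_scan_sha256": hashlib.sha256(b"x-ray scan bytes").hexdigest(),
}

# Sign the whole manifest so any later alteration of the recorded keys or
# reference images can be detected by anyone holding the signing public key.
factory_key = Ed25519PrivateKey.generate()
payload = json.dumps(manifest, sort_keys=True).encode()
signature = factory_key.sign(payload)

# An auditor re-serializes the manifest the same way and verifies the signature.
factory_key.public_key().verify(signature, payload)  # raises InvalidSignature if tampered
```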
I think this approach may be a model on which broader standards can be built for widespread validation of hardware, and a similar process for security review and counter-signing could be applied as well.
This may seem needlessly complicated and esoteric, but it is vitally important: our software and hardware supply chains are now so complex and widespread that we desperately need tools to verify that both software and hardware are valid, secure, reviewable, and tamper-resistant. Our current approach is utterly inadequate, partly because of that complexity, and partly because software development and hardware manufacture and assembly are scattered across countries all over the world. It is a solvable problem, I think, but I worry that we won’t pick the best solution, for financial and/or political reasons, leading to a great many issues down the road.
If we can’t trust our computers, who can we trust?
Cheers!