Quote from “Securing AI Model Weights: Preventing Theft and Misuse of Frontier Models”
“As frontier [AI] models — that is, models that match or exceed the capabilities of the most advanced models at the time of their development — become more capable, protecting them from theft and misuse will become more important.”
Um, no; they will become less important. You see…
…those of us who are old enough have been around this loop some two or three times before:
- Control over the Unix Source Code
- Control over proliferation of Cryptography
- Control over proliferation of ripped copyrighted music & movies
I’ve watched all of these things, close-up; see the Wikipedia links for the other ones, but the first one (Unix) is probably the most comparable to AI, because literally in less than two years we have gone from:
“This marks the first time a major tech firm’s proprietary AI model has leaked to the public”
…to a new world of people running 671-billion-parameter models on $1500 PCs, just for teh lulz.
Why is this comparable to Unix?
When I got my start in “big” computing in 1985, the Unix operating system was cutting edge, like someone had just invented jazz or rock-and-roll in a world filled with the austere classical music of operating systems that were otherwise glorified databases.
But: Unix was proprietary and a trademark of AT&T; the details are complex, but the short version is this: university computers ran Unix, but hardly anyone was allowed (or supposed) to be looking at the kernel and tool source code… except that universities needed the source code in order to do research… so AT&T and other vendors offered it to universities under severe legal restrictions, which were largely ignored. Code was copied samizdat-style and hoarded by the geeks, conferring soft power and influence on those who had access.
I think I still have some of the printouts, even.
The Great Dying of Unix
This situation, this massive and pointless hassle — with increasing pain — lasted from roughly 1975 to 1995, when open-source Linux finally became a credible multi-platform replacement for everyone who wanted a “Unix-like” operating system. By virtue of being open source, Linux suddenly forced the entire Unix ecosystem to compete on transparency and capability, rather than on “secret sauce”.
The competition never came.
Attempts to compete only really got going in the mid-2000s, but by then it was already too late. The result was a great dying – a few “true” Unixes survived in small ecological niches such as “Network Attached Storage” or “Enterprise Computing” or “High Availability”, or else evolved in hidden ways to (e.g.) become the invisible foundations of macOS.
Unix stopped being a holy edifice, and became a commodity.
How Not To Control Model Weights
I could go on for ages about Unix, but that would be a digression. The point is that the attached advice (from the paper) is great infosec policy, and makes a lot of good points, but it is utterly useless in the real world for something as big and conceptual as “frontier” (?) model weights:
Recommendations
- Developers of AI models should have a clear plan for securing models that are considered to have dangerous capabilities.
- Organizations developing frontier models should use the threat landscape analysis and security level benchmarks detailed in the report to help assess which security vulnerabilities they are already addressing and focus on those they have yet to address:
  - Develop a security plan for a comprehensive threat model focused on preventing unauthorized access and theft of the model’s weights.
  - Centralize all copies of weights to a limited number of access-controlled and monitored systems.
  - Reduce the number of people authorized to access the weights.
  - Harden interfaces for model access against weight exfiltration.
  - Implement insider threat programs.
  - Invest in defense-in-depth (multiple layers of security controls that provide redundancy in case some controls fail).
  - Engage advanced third-party red-teaming that reasonably simulates relevant threat actors.
  - Incorporate confidential computing to secure the weights during use and reduce the attack surface.
Yeah, this is fine, and it’s how I would treat a corporate strategy document or a company’s HR database, maybe.
But the lesson of Unix is: the way to compete is to give your shit away as broadly and transparently as possible, and to run as fast as you can to stay ahead of the rest of the pack in terms of innovation; and if politicians and lawyers think that they can protect your technological leadership by means of regulation, and if you partner with and depend upon them to achieve this, then you will lose.
A strategy based upon openness will eventually win; but (Meta aside) neither the governments nor the commentariats of the USA, EU, or UK appear happy to embrace it yet.
To me this looks like a massive opportunity for China, Japan, and India, to name but three.