Analysis: #VoCO #AgeAssurance & #AgeVerification by @GCHQ & @DCMS – at least now they are being honest about “we must make this legally obligatory in order to force market adoption”

A few years ago, I was part of an Open Rights Group campaign on Age Verification for porn and adult-content websites, which raised a series of specific criticisms of the scheme.

All of these criticisms are still valid. None of these criticisms have gone away.

However: the idea, having been kicked into the long grass in 2019, is now resurfacing as “VoCO” / “Verification of Children, Online” (PDF), and nothing has changed; in fact the proposals are technically blue-sky and more invasive, and the illiberal suggestions are coming straight from DCMS & GCHQ and some startup- and academic-led think-tanks, rather than from overly-enthusiastic industry.

To quote part of the Government Report of the IICSA: <begin quote>

  1. Over the last 12 months DCMS has supported the ‘Verification of Children Online’ (VoCO) child safety research project. The report on the project’s second phase was published this month. This joint GCHQ, DCMS and Home Office project responds to the technical challenges associated with providing children with a greater level of protection from online harms and explores the feasibility of identifying child users online. The VoCO project has attempted to capture the experiences and needs of children, parents and platforms to identify what support and incentives are needed for platforms to start age assuring their users.
  2. Knowing which users are children, and what age band they are in, is critical to protecting children online. If online services know which of their users are children, they are able to provide them with a higher level of protection, including preventing access to age inappropriate content. Age assurance is the term given to the broad range of technical measures that can be used by an online service to establish the age of their users. The VoCO project showed that there are a range of age assurance methods, supported by different data sources, that companies can use to assess the age of their users. Different age assurance methods provide differing levels of confidence in the age of the child. Age verification is one type of age assurance measure and provides the highest level of confidence in the assessed age of the child. Current age verification measures are effectively a full identity check.

<end quote>

My observations on these points include:

  • From November 2019 to November 2020 they supported the VoCO programme, but in the first 10 months they apparently held only a single industry roundtable, and that was with people from the “supply” rather than the “demand/mandated” side. The second roundtable may have been this one on November 7th – I am not sure. It seems a small number for a matter so momentous.
  • The research has focused on whether there are technologies which can identify users, but apparently nothing on: 1/ the consequences of using them; 2/ their effectiveness in the extremely diverse world of the internet; 3/ the impact on, protection of, or destruction of the resulting metadata; and 4/ whether, if and when some of the less invasive methods fail, the obligation for age assurance will simply fall back to “a full identity check”.
  • Apparently there are 230 organisations (PDF, of which 80% are SMEs) involved in the Home Office/QinetiQ-led “Vivace” think-tank; but try as I might, I cannot find any useful information to determine whether any actual, relevant “platform” companies are involved, and at what level, nor whether these 230 companies are all, or even in significant part, involved in VoCO.
  • It’s interesting to consider the framing “if online services know which of their users are children, they are able to provide them with a higher level of protection…” – and then assume the consequential “when online services know which of their users are adults, they may provide them with less protection” – which is not a great “sell” from the platform perspective. Doesn’t everyone deserve protection, and the freedom to manage it, or switch it off?
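To make the fallback concern concrete, here is a minimal sketch – all names, methods and numbers are hypothetical, not taken from the VoCO report – of how a tiered “age assurance” pipeline of the kind the report describes tends to behave: each method yields an age band with a confidence score, and when the low-friction methods cannot reach the required confidence, the only method left standing is the full identity check.

```python
# Hypothetical sketch of tiered age assurance with confidence-based fallback.
# Each method returns an AgeEstimate or None; methods are ordered from least
# to most invasive. If no low-friction method meets the required confidence,
# the pipeline escalates all the way to a full identity check.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class AgeEstimate:
    age_band: str        # e.g. "under-13", "13-17", "18+"
    confidence: float    # 0.0 - 1.0

def self_declaration(user: dict) -> Optional[AgeEstimate]:
    # User-asserted age: trivially cheap, trivially evaded, so low confidence.
    return AgeEstimate(user.get("declared_band", "18+"), 0.2)

def behavioural_signals(user: dict) -> Optional[AgeEstimate]:
    # e.g. inferring age from "typing patterns" -- modelled here as simply
    # failing, since such signals often yield no usable estimate.
    return None

def full_identity_check(user: dict) -> Optional[AgeEstimate]:
    # Effectively a full identity check (document / record matching):
    # highest confidence, highest privacy cost.
    return AgeEstimate("18+", 0.99)

METHODS: list[Callable[[dict], Optional[AgeEstimate]]] = [
    self_declaration,
    behavioural_signals,
    full_identity_check,
]

def assure_age(user: dict, required_confidence: float) -> AgeEstimate:
    for method in METHODS:
        estimate = method(user)
        if estimate is not None and estimate.confidence >= required_confidence:
            return estimate
    raise PermissionError("age could not be assured; access denied")

# With a high confidence threshold, every user escalates to the identity check.
result = assure_age({"declared_band": "18+"}, required_confidence=0.9)
print(result.age_band, result.confidence)  # 18+ 0.99
```

The point of the sketch is the structural one: the higher the confidence a regulator demands, the more the “range of age assurance methods” collapses in practice into the single most invasive one.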

It is sad that Parliamentarians are already assuming that the technology works and that it can be thrust upon the platform providers without consequence; they are incorrect in several dimensions:

  • the technology is dubious at best, irrelevant at worst (“typing patterns” on an iPad?)
  • the platform providers are global, but the proposals are UK (not even EU)-only
  • it’s hard to write legally compliant software that will scale to, or cope with, 66 million – let alone 2 billion – users; and attempting a design without (or even: rejecting) the joint engineering input of the big tech platforms is to invent a white elephant. We saw this most recently with the battery-draining, privacy-invasive, non-globally-shippable first NHSx/GCHQ attempt at a Covid19 tracking app.

VoCO will be at least as huge and cumbersome as the NHS Covid19 app, and possibly more so, because the intention is to intercede BETWEEN those 66 million users AND all of the websites which they may want to access. The challenge is on a par with the entirety of the test-and-trace infrastructure (People and the other people they may have interacted with, vs. People and the Websites they want to interact with), with risks equivalent to what we have seen from Test-and-Trace:

There is a lot that went (and is going) wrong with Test & Trace, which at least had the benefit of being run by a solitary and well-meaning Government agency; the VoCO proposals, by contrast, would create a federation of small businesses sharing information about people and whether they are old enough:

VoCO “Age Check Exchanges” – how to build a “digital identity ecosystem” enabled by child protection

…and all of this whilst somehow still keeping users anonymous, protecting (e.g.) their porn or kinky habits from being associated with them, and leaving them free from other forms of “tracking” – all by means of signing up to a voluntary code?

VoCO, as Age Verification did before it, presents a prescriptive, illiberal, impractical, narrow, poorly-informed, disproportionate, and outright dangerous threat to privacy, all in the name of child protection. It is a (bad) technical fix for a social problem, and it should not be attempted.
