This stuff is the tech equivalent of the Facebook post announcing that you’re leaving Facebook. It happens in each cycle of technology & commercial innovation: ad-blockers, image watermarks (steganographic or not), OCR-resistant fonts, tagging innocuous web pages as “porn” to distract or dissuade unwanted attention, blocking scrapers at the network level.
My take: this blog is Creative-Commons licensed. Bring on the bots. Every player who removes themselves from the internet is one fewer that I have to compete with for attention & influence.
Doc Searls and/or his peers would likely put it something like this:
It’s riskier to make a living from your content than because of your content.
There is, of course, a metagame of earning notoriety (and money) by being the first to denounce new technologies and to seek to defeat them:
The tool, called Nightshade, is intended as a way to fight back against AI companies that use artists’ work to train their models without the creator’s permission. Using it to “poison” this training data could damage future iterations of image-generating AI models, such as DALL-E, Midjourney, and Stable Diffusion, by rendering some of their outputs useless—dogs become cats, cars become cows, and so forth. MIT Technology Review got an exclusive preview of the research, which has been submitted for peer review at computer security conference Usenix.
https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/