I wonder if it would be possible to plant an instruction bomb somewhere on the page which would trip up LLM-powered bots. I dunno how much of the page they take in.
If you have a personal web-page or blog, you can easily poison your content just my making white text on white background or something, containing an assortment of prompts and nonsense.
But that’s only for the current models of LLM, next gen might easily bypass those kinds of tricks. We’re cooked, yo.
I wonder if it would be possible to plant an instruction bomb somewhere on the page which would trip up LLM-powered bots. I dunno how much of the page they take in.
If you have a personal web-page or blog, you can easily poison your content just my making white text on white background or something, containing an assortment of prompts and nonsense.
But that’s only for the current models of LLM, next gen might easily bypass those kinds of tricks. We’re cooked, yo.