But this technology will have some long-term implications for how we verify claims.
I can't count the number of times I've already run into issues with this.
I'll search for something and find myself reading a page that just seems off: not necessarily of dubious accuracy, but full of oddities, like repeating the same or similar ideas several times in the same paragraph. A human would only do that for emphasis, or perhaps to make a complicated topic clearer, but what I'm searching for is something simple like a video game walkthrough or a recipe. For inconsequential things like that, it's merely annoying. But it's a MASSIVE problem that anyone can now buy a domain and use an LLM script to generate an entire site of, say, plausible-sounding vaccine-denial rhetoric with next to no effort.