But if you think the road to AI goes down this pathway, then you want to maximize the amount of data being collected, in as raw a form as possible. It reinforces the idea that we have to retain as much data, and conduct as much surveillance, as possible.
This is bad. I agree. In practice, though, I notice that OpenAI trained the state-of-the-art models with none of this surveillance data.
But that is a reason to restrain AI labs, not a reason to stop worrying about AI risk.