One may be tempted to assume that GenAI tools, like ChatGPT, have negated the need for many types of knowledge.
While I agree that ChatGPT and other similar tools have provided users with a tremendous amount of information, is it not still up to the human counterpart to distill that information into something specific? Something usable and actionable?
Effective prompt engineering can assist with those efforts, of course, yet the human counterpart must still define the parameters of the queries and distill the results into a workable solution. One idea I am personally grappling with is recognizing that not every thought that pops into my head is true. Here, too, AI-generated ideas may or may not be true, and further human scrutiny can help discard the informational flotsam.