In such conditions, it’s not rational to work on any other problem.
I think this suffers from the fallacy of assuming we can foresee the future. If one agrees with Taleb, most progress comes from trial and error and positive black swans. Then, paradoxically, if you want improvement in AI, you need people working on many different problems.
I think Kenneth Stanley's (a researcher in evolutionary algorithms and curiosity-based algorithms) view on open-endedness should be strongly considered here.