- Apr 2024
-
www.theguardian.com
-
Oxford has shut down the 'Future of Humanity Institute'. Cf. [[Jaan Tallinn]], Nick Bostrom. Formally part of the philosophy department, but with less and less philosophy on staff. The original list of existential threats seemed balanced; over time, not-yet-existent AI became the sole focus, ignoring clear and present dangers like climate change.
-
- May 2023
-
docdrop.org
-
I don't know that we can assume that some point a thousand years in the future is going to have the same moral, political, economic or social priorities as we do [00:41:36]
- Good insight on the absurdity of Longtermism from Mary Harrington
- " I don't know that we can assume that some point a Thousand Years in the future
- is going to have the same moral political economic or social priorities
- as we do
- It's very very clear even the most rudimentary grasp of history or literature
- ought to make it clear that
- people a thousand years ago didn't have the same priorities as us now and
- if you can you can frame that difference as progress in our favor
- or as decline in their favor
- but it's it's very clear that Consciousness you've evolved and culture evolves over time and
- there are there are threads of continuity and that's something that you and I both have in common
- tracing some of those lines but
- it's very clear that what how people think about what's important changes tremendously over over even a century,
- let alone over a thousand years
- so I I question the hubris of any movement which claims
- to have a have any kind of handle on on what might matter in 25 000 years time
- I just don't see how you can do that
- it's absurd."
-
-
netzpolitik.org
-
https://web.archive.org/web/20230430194301/https://netzpolitik.org/2023/longtermism-an-odd-and-peculiar-ideology/ The EA/LT reasoning is explained in this interview in a way that allows easy outlining. A bit sad to see Jaan Tallinn's existential-risk path taking this shape; CSER seemed more balanced back in 2012/13 when I briefly met him in the context of TEDxTallinn, with climate change treated as a key existential risk rather than a speed bump on the road to an advanced AI that provides for future humanity.
-
- Sep 2022
-
www.e-flux.com
-
But we should resist surrendering to a destiny predefined by technological development. We urgently need to imagine a new world order and seize the opportunity provided by the meltdown to develop a strategy that opposes the relentless depoliticization and proletarianization driven by the transhumanist fantasy of superintelligence.
Against longtermism
-
-
-
This has roots in the work of Nick Bostrom, who founded the grandiosely named Future of Humanity Institute (FHI) in 2005, and Nick Beckstead, a research associate at FHI and a programme officer at Open Philanthropy. It has been defended most publicly by the FHI philosopher Toby Ord, author of The Precipice: Existential Risk and the Future of Humanity (2020).
-
-
davekarpf.substack.com
-
Now consider a hypothetical from science fiction. William Gibson’s two most recent books (The Peripheral and Agency) occur in two time periods — one in the near-future, the other in the far-future. Gibson’s far future is a techno-optimist paradise. It is filled with the future tech that today’s most wild-eyed futurists only dream about. Heads-up displays! Working robots that you can pilot with full telepresence! Functional seasteads! It is a world of abundance and wealth and fantastical artistry. But it is also a world that is notably… empty.
Using Gibson’s Jackpot as a thought experiment for evaluating longtermism
-