37 Matching Annotations
  1. Apr 2023
    1. But if you think that the road to AI goes down this pathway, you want to maximize the amount of data being collected, and in as raw a form as possible. It reinforces the idea that we have to retain as much data, and conduct as much surveillance as possible.

      This is bad. I agree. In practice, though, I notice that OpenAI has trained the SotA with none of this surveillance data.

      But this is a reason to restrain AI labs, not a reason we shouldn't worry about AI risk.

    2. Like in any religion, there's even a feeling of urgency. You have to act now! The fate of the world is in the balance! And of course, they need money! Because these arguments appeal to religious instincts, once they take hold they are hard to uproot.

      Yep, these are real problems which really are contributing to the accelerations we're experiencing right now.

      But the risks still seem plenty real to me, totally irrespective of the religious zealotry of many AI researchers.

    3. If you're persuaded by AI risk, you have to adopt an entire basket of deplorable beliefs that go with it.

      This is an odd and unpersuasive set of arguments.

      1. AI implies nanotech: maybe? I mean, if we want? If it wants? This is far from clear.

      2. Nanotech would let us live in a "society where all material needs are met." …cool? This is… bad?

      3. If we build AI, we'd need to engage in galactic expansion. Maybe? We'll probably do it anyway? This is a (weird) argument for not building AI, but it's not an argument that one shouldn't be concerned about AI risk.

      Here's a possible steelman: people who believe in AI risk are also pursuing many other goals I don't like. If I help them, I might advance those other odious goals. That's better, but it definitely doesn't rise to the level of making me not want to worry about AI risk!

    4. It would have to talk to people to get what it wants.

      In the 2000s, the AI box argument was pretty persuasive. Eliezer's successful escapes made it less so. Now, having seen how people reacted to GPT-4, this isn't even slightly persuasive. In the first month after its release, people turned it into an agent, acted as its real-world effector in various entrepreneurial experiments, and created ChaosGPT.

    5. A recurring flaw in AI alarmism is that it treats intelligence as a property of individual minds, rather than recognizing that this capacity is distributed across our civilization and culture.

      Yes, this is somewhat encouraging. The AI would need effectors in the real world. Maybe we could secure the relevant effectors well enough to limit its capacity. But the job is not made easier by the absolutely enormous API surface reality presents. An AI can synthesize chemicals and biologicals, drop-ship 3D-printed objects, execute experiments with robotic arms in cloud wet labs, and so on. I don't think we're particularly set up for success here.

      Also, humans are pretty willing to help. GPT-4 deceived a TaskRabbit worker into solving a CAPTCHA for it. Deceit aside, people have demonstrated great willingness to be GPT-4's effector. The bright side here is that if willing humans are in some important part of the loop, a hard takeoff is a bit less likely. We're too slow.

    6. In the absence of effective leadership from those at the top of our industry, it's up to us to make an effort, and to think through all of the ethical issues that AI—as it actually exists—is bringing into the world.

      I basically agree!

    7. The pressing ethical questions in machine learning are not about machines becoming self-aware and taking over the world, but about how people can exploit other people, or through carelessness introduce immoral behavior into automated systems.

      I tend to agree! I am mostly more worried about these things than about misaligned AI world domination! I'm also extremely worried about economic dislocation and concomitant social upheaval.

      But unlike Darius, I don't view these topics as rivals: I want lots more effort attacking all of these potential impacts.

    8. I think our understanding of the mind is in the same position that alchemy was in in the seventeenth century.

      I agree, though empirically, The Bitter Lesson suggests this may not matter. We may not need to understand the mind to create one. We've made the most progress by not understanding the mind.

    9. This whole field of "study" incentivizes crazy. One of the hallmarks of deep thinking in AI risk is that the more outlandish your ideas, the more credibility it gives you among other enthusiasts. It shows that you have the courage to follow these trains of thought all the way to the last station.

      It's really not clear to me that this is bad. "Here's to the Crazy Ones" is powerful for a reason. Everybody thought the Wright Brothers were crazy. Shockley was certainly crazy. Tesla. And so on.

      However, if you take this on its face, it should tell you that the people most likely to invent powerful AI systems are likely to be "crazy" (by your lights). That is, again, a reason for concern, not a reason for calm!

    10. Such skull-and-dagger behavior by the tech elite is going to provoke a backlash by non-technical people who don't like to be manipulated. You can't tug on the levers of power indefinitely before it starts to annoy other people in your democratic society.

      Yep, this is probably true. Doesn't mean we don't have to worry about AI risk.

    11. AI risk is string theory for computer programmers. It's fun to think about, interesting, and completely inaccessible to experiment given our current technology. You can build crystal palaces of thought, working from first principles, then climb up inside them and pull the ladder up behind you. People who can reach preposterous conclusions from a long chain of abstract reasoning, and feel confident in their truth, are the wrong people to be running a culture.

      This is true! Hinton, asked about the risks associated with AI: “I could give you the usual arguments, but the truth is that the prospect of discovery is too sweet.”

      This is a reason to be concerned that the people developing AI systems are likely to put us in more danger. It's a reason to be concerned, not a reason for calm.

    12. If you think we’re living in a computer program, trying to segfault it is inconsiderate to everyone who lives in it with you. It is far more dangerous and irresponsible than the atomic scientists who risked blowing up the atmosphere.

      … isn't this a reason to worry more about AI risk?

    13. These religious convictions lead to a comic-book ethics, where a few lone heroes are charged with saving the world through technology and clever thinking. What's at stake is the very fate of the universe. As a result, we have an industry full of rich dudes who think they are Batman (though interestingly enough, no one wants to be Robin).

      I agree that this is bad.

    14. This is megalomaniacal. I don't like it.

      I agree with Darius here. I don't like this either.

      (Though I don't take this as a reason not to be concerned. If anything, it's a reason to be more concerned: the type of people working on these systems are not sufficiently concerned with culture, politics, society, history, etc.)

    15. In such conditions, it’s not rational to work on any other problem.

      I agree with Darius's implied position here. It's fine to be quite worried about this and to work on something else. One's choice of work should be a personal reflection of creative interest, opportunity, and so on.

      Supervolcanoes are also very concerning. I'm not working on those either.

    16. It's possible that the process could go faster for an AI, but it is not clear how much faster it could go. Exposure to real-world stimuli means observing things at time scales of seconds or longer. Moreover, the first AI will only have humans to interact with—its development will necessarily take place on human timescales. It will have a period when it needs to interact with the world, with people in the world, and other baby superintelligences to learn to be what it is.

      I don't find this persuasive at all. You don't need to make observations in real time to learn from observations with timestamps attached. An AI trained on every film and novel ever written would have a richer basis of information about interaction at human timescales than any human would have. See also: the enormous progress in simulated RL training environments (DotA, Diplomacy, etc.).

    17. It's perfectly possible an AI won't do much of anything, except use its powers of hyperpersuasion to get us to bring it brownies

      Yes. It's possible. We don't know.

      But if we were going to guess, we should notice that people immediately gave GPT-4 maximization-oriented goals ("start a business and earn as much profit as possible"). It's a good guess that people will give a future proto-ASI maximize-y goals along the same lines.

    18. Artificial intelligence may be just as strongly interconnected as natural intelligence. The evidence so far certainly points in that direction. But the hard takeoff scenario requires that there be a feature of the AI algorithm that can be repeatedly optimized to make the AI better at self-improvement.

      This is another "may" argument—reality might be shaped in a way which averts doom; we don't know. It's modestly more persuasive than the others, but I still have plenty of probability mass on recursive self-improvement being possible. RLAIF is certainly very "promising" in that direction.

    19. Note especially that the constructs we use in AI are fairly opaque after training. They don't work in the way that the superintelligence scenario needs them to work. There's no place to recursively tweak to make them "better", short of retraining on even more data.

      This argument certainly didn't age well. The Bitter Lesson has, in practice, won. The scaling hypothesis is looking quite good these days (acknowledging that several theoretical advances were also valuable… though it's unclear whether they were in fact necessary).

      A stronger version of this argument would be to look at the scaling law papers and to express concern as to whether enough training tokens exist to reach the loss levels we might desire. But I'm still plenty concerned about the loss levels we can reach with the tokens which exist.
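
      For intuition, here's a minimal sketch of the kind of calculation I mean, using the approximate parametric loss fit reported by Hoffmann et al. (2022). The constants are their published estimates; the token budgets below are illustrative assumptions on my part, not measurements.

      ```python
      # Rough sketch of the Chinchilla-style parametric loss fit:
      #   L(N, D) = E + A / N**alpha + B / D**beta
      # where N = parameter count and D = training tokens.
      # Constants are the approximate fitted values from Hoffmann et al. (2022).
      E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

      def loss(n_params: float, n_tokens: float) -> float:
          return E + A / n_params**alpha + B / n_tokens**beta

      # Even with effectively unlimited parameters, a finite token supply
      # bounds how far the loss can fall above the irreducible term E.
      tokens_available = 10e12  # hypothetical ~10T usable tokens
      print(loss(float("inf"), tokens_available))  # data-limited floor
      print(loss(70e9, 1.4e12))                    # roughly a Chinchilla-scale run
      ```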

    20. I don't buy this argument at all. Complex minds are likely to have complex motivations; that may be part of what it even means to be intelligent.

      Bostrom's specific claim is that "more or less any level of intelligence could in principle be combined with more or less any final goal". It's not a claim about the complexity of the motivations—just that their goals may be very different from ours.

      Yes, it's possible that the maximizer would want to write poetry. It's also possible that it would want to make a number in its memory be as large as possible. You don't know, and you don't have a good way to reason a priori about which is more likely, so you should treat both as live possibilities.

    21. So how are we supposed to solve ethics and code a moral fixed point for a recursively self-improving intelligence without fucking it up, in a situation where the proponents argue we only get one chance?

      Yes, this is very concerning. This is a reason to be worried, and a reason to be concerned that AI alignment efforts are unlikely to succeed.

    22. The emus responded by adopting basic guerrilla tactics: they avoided pitched battles, dispersed, and melted into the landscape, humiliating and demoralizing the enemy.

      I would not want to be the emus in this scenario. Humans dominate the continent and control most of its natural resources. They determine its future, not the emus.

    23. So even an embodied AI might struggle to get us to do what it wants

      Sure, yes, it might. We don't know. It'll probably struggle with some things and not with others. This is not a good reason not to be worried about the harms which might arise from that struggle. The recent history of these models is of their consistently surprising us (including their creators) with their capabilities.

    24. The outside view doesn't care about content, it sees the form and the context, and it doesn't look good.

      This situation has evolved significantly since 2016. It's no longer "just" weird-looking bearded Bay Area people worrying about this. Pioneers of the field are worried. CEOs of the labs developing these systems are (except LeCun) worried.

    25. With no way to define intelligence (except just pointing to ourselves), we don't even know if it's a quantity that can be maximized. For all we know, human-level intelligence could be a tradeoff.

      It's true. It's possible that intelligence can't be maximized, or that it has some low fundamental limit. We don't know. Say you view this "maybe" as a 50-50. That's certainly not a persuasive argument to stop worrying about a potential catastrophe.

  2. Oct 2022
    1. all that matters is to fund good people doing good work

      Q. Vivid example given in response to "just fund good people doing good work"

      A. Think of Renaissance shipping, which was revolutionized by maritime insurance. Their answer wasn't "just get good ships, crewed by good sailors."

  3. Sep 2022
    1. Since this initial intention, peer review has become a stamp of legitimacy that is too often equated with truth, and the process itself has become corrupt in more ways than one. We are no longer limited or beholden to a centrally controlled journal that chooses (and prints) what we have access to: we have the internet. We need a better filter function.

      Given this fairly pessimistic outlook on peer review, I was a bit surprised that it's worth +2 points in your assessment!

    2. On this list, only the nascent journal Octopus and forums like StackOverflow and Research Hub use discourse units.

      Is there a canonical link to an introduction? Googling this term didn't yield much of relevance.

    3. How easy is it to access knowledge? How easy is it to share knowledge? Does it enable a rigorous discourse environment?

      Do these three axes really correspond to the six needs?

      In particular, I'm not sure how these needs map to the axes:

      * to find and be in community
      * to get help and feedback
      * to feel complete and celebrate
      * to be seen and acknowledged
      * to receive the resources needed to continue to do research

      I see that "easy to share knowledge" and "enables a rigorous discourse environment" could enable these things… but they don't necessarily.

      For instance, it's easy to imagine environments which make it easy to share knowledge and to discuss it rigorously, but which (e.g. for cultural reasons) don't produce feelings of celebration, or which (e.g. for economic or political reasons) don't yield resources.

    1. Code unit is a bit sequence used to encode each character within a given encoding form.

      I found this pretty unclear. A code unit is an encoding-dependent "minimal chunk". For example, in UTF-8, a code unit is an 8-bit chunk; in UTF-16, a code unit is a 16-bit chunk. An individual code unit is generally not interpretable outside its context. A code unit can indeed be described by a "bit sequence", but that misses the "word size" notion.

      See https://en.wikipedia.org/wiki/Character_encoding#Terminology.
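
      To make the "word size" point concrete, here's a small sketch (plain Python standard-library codecs, not anything from the article) showing that the same character decomposes into different numbers of code units depending on the encoding form:

      ```python
      # One character, three encoding forms, three different code-unit counts.
      ch = "😀"  # U+1F600, outside the Basic Multilingual Plane

      utf8 = ch.encode("utf-8")       # code unit = 8 bits  -> 4 code units
      utf16 = ch.encode("utf-16-le")  # code unit = 16 bits -> 2 code units (a surrogate pair)
      utf32 = ch.encode("utf-32-le")  # code unit = 32 bits -> 1 code unit

      print(len(utf8), len(utf16) // 2, len(utf32) // 4)  # 4 2 1
      ```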