- Oct 2023
-
cdn.openai.com
-
GPT-2 introduction paper
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language Models are Unsupervised Multitask Learners.
-
-
-
GPT-3 introduction paper
-
- Aug 2023
-
arxiv.org
-
Hu, M. Y., Chen, A., Saphra, N., & Cho, K. (2023). Delays, Detours, and Forks in the Road: Latent State Models of Training Dynamics.
Note: This paper seems cool. It uses older, interpretable machine learning models (graphical models) to understand what is going on inside a deep neural network.
-
- Nov 2022
-
www.exponentialview.co
-
“The metaphor is that the machine understands what I’m saying and so I’m going to interpret the machine’s responses in that context.”
Interesting metaphor for why humans are happy to trust outputs from generative models
-
-
arxiv.org
-
"On the Opportunities and Risks of Foundation Models" This is a large report by the Center for Research on Foundation Models at Stanford. They are creating and promoting the use of these models and trying to coin this name for them. They are also simply called large pre-trained models. So take it with a grain of salt, but also it has a lot of information about what they are, why they work so well in some domains and how they are changing the nature of ML research and application.
-
- Jun 2021
-
www.technologyreview.com
-
The problem is, algorithms were never designed to handle such tough choices. They are built to pursue a single mathematical goal, such as maximizing the number of soldiers’ lives saved or minimizing the number of civilian deaths. When you start dealing with multiple, often competing, objectives or try to account for intangibles like “freedom” and “well-being,” a satisfactory mathematical solution doesn’t always exist.
We do better with algorithms where the utility function can be expressed mathematically. When we try to design for utility/goals that include human values, it's much more difficult.
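A minimal sketch of the point above (illustrative only; the action names and numbers are made up, not from the article): when competing objectives are collapsed into a single weighted utility, the "optimal" choice depends entirely on the weights, so no single scalarization satisfies everyone.

```python
# Hypothetical candidate actions, each with outcomes on two competing
# objectives: soldiers' lives saved vs. civilian deaths caused.
actions = {
    "strike":   {"soldiers_saved": 9, "civilian_deaths": 4},
    "blockade": {"soldiers_saved": 5, "civilian_deaths": 1},
    "wait":     {"soldiers_saved": 2, "civilian_deaths": 0},
}

def utility(outcome, w_save, w_harm):
    # Weighted-sum scalarization: one scalar goal to maximize.
    return w_save * outcome["soldiers_saved"] - w_harm * outcome["civilian_deaths"]

def best_action(w_save, w_harm):
    # The algorithm dutifully maximizes whatever scalar it is given.
    return max(actions, key=lambda a: utility(actions[a], w_save, w_harm))

print(best_action(1.0, 1.0))  # weights value saving soldiers -> "strike"
print(best_action(1.0, 5.0))  # weights heavily penalize harm  -> "wait"
```

The hard part is not the maximization; it is choosing `w_save` and `w_harm`, which is exactly where intangibles like "freedom" and "well-being" resist being expressed mathematically.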
-
many other systems that are already here or not far off will have to make all sorts of real ethical trade-offs
And the problem is that even human beings are not very good at making these trade-offs well. Because there is such diversity in human cultures, preferences, and norms, deciding whose values to prioritise is problematic.
-
- Jan 2021
-
psyarxiv.com
-
Singh, M., Richie, R., & Bhatia, S. (2020, October 7). Representing and Predicting Everyday Behavior. https://doi.org/10.31234/osf.io/kb53h
-