327 Matching Annotations
  1. Feb 2024
    1. T. Herlau, "Moral Reinforcement Learning Using Actual Causation," 2022 2nd International Conference on Computer, Control and Robotics (ICCCR), Shanghai, China, 2022, pp. 179-185, doi: 10.1109/ICCCR54399.2022.9790262. keywords: {Digital control;Ethics;Costs;Philosophical considerations;Toy manufacturing industry;Reinforcement learning;Forestry;Causality;Reinforcement learning;Actual Causation;Ethical reinforcement learning}

    1. Parks, S.A.; Dillon, G.K.; Miller, C. A New Metric for Quantifying Burn Severity: The Relativized Burn Ratio. Remote Sens. 2014, 6, 1827-1844. https://doi.org/10.3390/rs6031827

      Widely used model for #fire-severity prediction for forest wildfires in Canada and USA.

    1. Briefly, these gridded datasets were built using an observed, satellite-derived measure of fire severity (Parks et al. 2014) and statistical models in which the probability of stand-replacing fire was modeled as a function of fuel, topography, climate, and weather. For a subset of ecoregions in our study area (Colorado Plateau, AZ–NM Mountains, and Apache Highlands), Parks et al. (2018b) also produced gridded datasets representing the probability of stand-replacing fire under extreme fire weather conditions.

      prior work on predicting fire severity using a fixed model

    2. Paper using fire risk prediction model.

  2. Jan 2024
    1. Hubinger et al. "Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training". arXiv:2401.05566v3, Jan 17, 2024.

      Very disturbing and interesting results from team of researchers from Anthropic and elsewhere.

    1. You know XGBoost, but do you know NGBoost? I'd passed over this one, mentioned to me by someone wanting confidence intervals in their classification models. This could be an interesting paper to add to the ML curriculum.

  3. Nov 2023
    1. denote dimensions 0 through i − 1 of the state

      Very odd/interesting! The dimensions are independent, but we are predicting them in order?

    2. τ<t to denote a trajectory from timesteps 0 through t − 1

      τ<t is shorthand for all the previous s_{t-1}, a_{t-1}, etc.

    3. lower-diagonal attention mask

      why lower-diagonal?

    4. Transformer architectures feature a “causal” attention mask to ensure that predictions only depend on previous tokens in a sequence

      Causal is in quotes here for a good reason. It is called a causal attention mask in the LLM literature, but it has to do only with the probability of the next token/word; it isn't attached to the meaning of the words at all.
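
      A minimal sketch (my own, not from either paper) of why the mask is lower-triangular: position i may only attend to positions 0..i, so everything above the diagonal is masked out before the softmax.

      ```python
      import numpy as np

      def causal_attention(q, k, v):
          """Scaled dot-product attention with a lower-triangular ("causal") mask.
          q, k, v: arrays of shape (T, d); position i only attends to positions 0..i."""
          T, d = q.shape
          scores = q @ k.T / np.sqrt(d)                    # (T, T) query-key similarities
          mask = np.tril(np.ones((T, T), dtype=bool))      # lower-triangular: True = keep
          scores = np.where(mask, scores, -np.inf)         # hide future positions
          weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
          weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
          return weights @ v
      ```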

    5. We can use this directly as a goal-reaching method by conditioning on a desired final state sT .

      Interesting: goal-directed RL cast as a sequence of samples from conditional probabilities.

    6. If we set the predicted sequence length to be the action dimension, our approach corresponds exactly to the simplest form of behavior cloning with an autoregressive policy

      Why is that? Because the sample from the actions will be a proper sample? Why would the sequence length ever be larger, then?

    7. Pθ (· | x)

      Where does the distribution come from initially? Empirical?

    8. Uniform discretization has the advantage that it retains information about Euclidean distance in the original continuous space, which may be more reflective of the structure of a problem than the training data distribution.

      Always important to consider whether the relative magnitudes between points are important.
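
      A small toy comparison of the two tokenization choices (my own example, not from the paper): uniform bins have equal width, so the gap between token ids tracks Euclidean distance; quantile bins have equal mass, so it tracks how much training data lies between two values.

      ```python
      import numpy as np

      x = np.random.randn(10_000) * 2.0     # one continuous state dimension
      V = 100                                # vocabulary size for this dimension

      # Uniform discretization: equal-width bins over [min, max]
      uniform_edges = np.linspace(x.min(), x.max(), V + 1)
      uniform_tokens = np.clip(np.digitize(x, uniform_edges) - 1, 0, V - 1)

      # Quantile discretization: equal-mass bins (adapts to the data distribution)
      quantile_edges = np.quantile(x, np.linspace(0, 1, V + 1))
      quantile_tokens = np.clip(np.digitize(x, quantile_edges) - 1, 0, V - 1)
      ```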

    9. modeling considerations are concerned less with architecture design and more with how to represent trajectory data – potentially consisting of continuous states and actions – for processing by a discrete-token architecture

      They don't care what kind of transformer is being used; they are interested in how to get the SASASASA sequence into the right form.

      good question: what about continuous states and/or actions?

    10. Concurrently with our work, Chen et al. (2021) also proposed an RL approach centered around sequence prediction, focusing on reward conditioning as opposed to the beam-search-based planning used by the Trajectory Transformer.

      This is the Decision Transformer paper we read last week

    11. Modeling the states and actions jointly already provides a bias toward generating in-distribution actions, which avoids the need for explicit pessimism

      Pessimism is a popular method to avoid (overfitting?) of the learned dynamics to what you saw. Since transformers maintain a huge context, this isn't needed; the predictions will always be tied to the same situations as in the training data.

    12. model-based RL

      learn the dynamics, then optimize via RL

    13. estimate conditional distributions over actions

      policy as a distribution over actions

    14. While such works demonstrate the importance of such models for representing memory (Oh et al., 2016), they still rely on standard RL algorithmic advances to improve performance

      Is the sequence modeling just for learning the model, or is it deeper?

    15. The Trajectory Transformer is a substantially more reliable long-horizon predictor than conventional dynamics models

      So the TT becomes a new type of model-based RL.

    16. When decoded with a modified beam search procedure that biases trajectory samples according to their cumulative reward,

      so beam search is just a decoder of the learned dynamics that optimizes for reward?
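
      Roughly, yes, that's how I read it: beam search decodes from the learned sequence model, but candidates are ranked by cumulative predicted reward rather than by log-likelihood alone. A rough sketch of that idea (the `model.step` interface is assumed, not the paper's actual code):

      ```python
      def reward_biased_beam_search(model, prefix, horizon, beam_width):
          """Keep the beam_width partial trajectories with the highest cumulative
          predicted reward. `model.step(traj, n)` is an assumed interface returning
          n candidate (next_tokens, predicted_reward) continuations."""
          beams = [(0.0, prefix)]                       # (cumulative reward, trajectory)
          for _ in range(horizon):
              candidates = []
              for ret, traj in beams:
                  for next_tokens, reward in model.step(traj, beam_width):
                      candidates.append((ret + reward, traj + next_tokens))
              candidates.sort(key=lambda c: c[0], reverse=True)   # bias toward high return
              beams = candidates[:beam_width]
          return max(beams, key=lambda c: c[0])[1]      # best trajectory found
      ```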

    17. Reading this one on Nov 27, 2023 for the reading group.

    1. K = 50 for Pong, K = 30 for others

      **Q:** Where did these numbers come from?

    2. loss = mean (( a_preds - a )**2)

      supervised learning for RL task

    3. We feed the last K timesteps into Decision Transformer, for a total of 3K tokens (one for each modality: return-to-go, state, or action)

      Data: K timesteps, with three tokens per timestep:
      - return-to-go token
      - state token
      - action token

      Token embedding for each token: a linear (or convolutional) layer to learn, then normalize.

      Timestep embedding: an embedding of the time index itself (adjusting for the 3x?).

      Question: added or concatenated? Is the timestep embedding applied to the raw tokens or to the embedding? (Rough sketch of my reading below.)
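
      A rough sketch of my reading of the input pipeline (PyTorch-style; the layer names are mine, and I'm assuming the timestep embedding is added to each of the three token embeddings rather than concatenated):

      ```python
      import torch
      import torch.nn as nn

      class DTInputEmbedding(nn.Module):
          def __init__(self, state_dim, act_dim, hidden, max_T):
              super().__init__()
              self.embed_rtg = nn.Linear(1, hidden)            # return-to-go token
              self.embed_state = nn.Linear(state_dim, hidden)  # state token (conv stack for images)
              self.embed_action = nn.Linear(act_dim, hidden)   # action token
              self.embed_time = nn.Embedding(max_T, hidden)    # one embedding per raw time index
              self.norm = nn.LayerNorm(hidden)

          def forward(self, rtg, states, actions, timesteps):
              # rtg: (B, K, 1), states: (B, K, state_dim), actions: (B, K, act_dim), timesteps: (B, K)
              t = self.embed_time(timesteps)                   # shared across the 3 tokens of a timestep
              r = self.embed_rtg(rtg) + t
              s = self.embed_state(states) + t
              a = self.embed_action(actions) + t
              # interleave as (R_1, s_1, a_1, R_2, s_2, a_2, ...): 3K tokens in total
              x = torch.stack([r, s, a], dim=2).reshape(r.shape[0], -1, r.shape[-1])
              return self.norm(x)
      ```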

    4. This suggests that in scenarios with relatively low amounts of data, Decision Transformer can outperform %BC by using all trajectories in the dataset to improve generalization, even if those trajectories are dissimilar from the return conditioning target. Our results indicate that Decision Transformer can be more effective than simply performing imitation learning on a subset of the dataset. On the tasks we considered, Decision Transformer either outperforms or is competitive to %BC, without the confound of having to select the optimal subset

      So it seems like it isn't just behaviour cloning

    5. Does Decision Transformer perform behavior cloning on a subset of the data?

      good questions

    6. we use the GPT architecture [ 9 ], which modifies the transformer architecture with a causal self-attention mask to enable autoregressive generation, replacing the summation/softmax over the n tokens with only the previous tokens in the sequence (j ∈ [1, i]).

      This sentence is working hard.

    7. this allows the layer to assign “credit” by implicitly forming state-return associations via similarity of the query and key vectors (maximizing the dot product)

      that's a different way of thinking about what's happening in a transformer.

    1. We then use a similar QA summarization framework as Wu et al. (2023) which produces QA dialogue on game mechanics

      Q: what was the main focus of this paper?

      A: "Read and Reap the Rewards: Learning to Play Atari with the Help of Instruction Manuals"

      Our framework consists of a QA Extraction module that extracts and summarizes relevant information from the manual and a Reasoning module that evaluates object-agent interactions based on information from the manual

    2. LATEX source code

      Q: why are they using the source code and not the text output?

    3. all prior works require expert or human generated example trajectories

      Training the LLMs using generated trajectories.

    4. Wu et al. (2023) proposes a summary (Read) and reasoning (Reward) through a QA prompting framework with an open-source QA LLM Tafjord and Clark (2021). The framework demonstrates the possibility of using real-world human-written manuals to improve RL performance on popular games, despite limiting the interaction types to only “hit”. Our framework handles all 17 kinds of interactions available in the game. Moreover, our framework makes use of information on tech-tree dependencies, and suggestions on desired policies extracted from the academic paper

      Main paper they are based on.

    5. Indicate their priority out of 5

      Q: Where does "priority" even come from for the LLM for a domain like this? What prior knowledge and biases are built in here?

    6. The visual descriptor takes the last two gameplay screens as input, and outputs their descriptions in language (dt, dt−1)

      Q: so does the language it uses internally keep changing?

    7. Answer to the final question qa is mapped to environment action using sub-string matching.

      Q: is this explained in more detail anywhere?
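
      I didn't find more detail either; my guess at what the sub-string matching amounts to (purely illustrative, the action names are made up):

      ```python
      ACTIONS = ["move_up", "move_down", "collect_wood", "place_table", "eat_cow"]  # hypothetical

      def answer_to_action(answer: str, default: str = "noop") -> str:
          """Return the first environment action whose name appears in the LLM's answer."""
          text = answer.lower().replace(" ", "_")
          for action in ACTIONS:
              if action in text:
                  return action
          return default
      ```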

    8. Experimentally, we find that prompting the LLM with only the direct parents of a question greatly reduces the context length, and helps LLM to focus on the most relevant contextual information

      Interesting: What is being given up here? You need to cut or summarize context at some point for sure. But when?

    9. model-based methods like DreamerV2 Hafner et al. (2020); DreamerV3 Hafner et al. (2023)

      Summary: how do these methods work?

    10. We add the prompt “DO NOT answer in LaTeX.” to all of Qgame to prevent the LLM from outputting the list in LaTeX format

      Does GPT-3.5 understand LaTeX that well?

    11. in an environment where control tasks are less required

      Q: what do they mean by this?

    12. zero-shot LLM-based (GPT-4) policy

      What does "zero-shot" mean when it involves an LLM?

    13. ,we promote and regulate in-context chain-of-thought reasoning in LLMs to solve complex games. The reasoning module is a directed acyclic graph (DAG), with questions as nodes and dependencies as edges. For example, the question “For each action, are the requirements met?" depends on the question “What are the top 5 actions?", creating an edge from the latter to the former. For each environment step, we traverse the DAG computing LLM answers for each node in the topological order of the graph. The final node of the DAG is a question about the best action to take and the LLM answer for the question is directly translated to environment action

      seems sensible
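
      A minimal sketch of the mechanism as I understand it (the `llm` callable and the question set are placeholders, not the paper's prompts):

      ```python
      from graphlib import TopologicalSorter

      def answer_dag(questions, parents, llm, observation):
          """questions: {node: question text}; parents: {node: iterable of parent nodes}.
          Answers nodes in topological order, prompting each node with only its direct
          parents' answers (which is what keeps the per-question context short)."""
          order = list(TopologicalSorter(parents).static_order())
          answers = {}
          for node in order:
              context = "\n".join(answers[p] for p in parents.get(node, ()))
              answers[node] = llm(f"{observation}\n{context}\n{questions[node]}")
          return answers[order[-1]]   # assumes the final node asks for the action to take
      ```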

    14. deciding the paragraphs that are relevant for playing the game

      this could be very subjective

    15. the environment is OOD to them.

      Translation: the Crafter game is too new for GPT to know about

  4. Oct 2023
    1. In a nutshell, the CHT seems to disprove the scaling hypothesis. Or does it? In this work, we argue that foundation models might be exploiting a “loop hole” in the CHT. Namely, what happens if the causal assumptions (which are required, by the CHT, for causal inference) are represented in observational data itself?

      Are LLMs exploiting a loophole in Pearl's ladder?

      It's not really a loophole; it's just that the observational dataset explicitly contains answers to your interventional queries.

    2. Plato. Republic: Allegory of the cave, 375 BC

      ok, you win.

    3. Same Implication, Different Representations

      Big Question: they cover text and experiment, but what about embodied experience? What is its role? We believe in causality for very visceral (i.e. physical and unavoidable) reasons as human beings.

      E.g. we touch a hot stove and then it hurts.

    4. we expect P (YX←1 = 1) = P (Y = 1) since intervening on X will not change Y

      Q: is that correct? wouldn't you need to show the \(X\leftarrow 0\) case to demonstrate this?

    5. the probability of a high number of Nobel laureates if the given chocolate consumption were to be high.

      example of an L2 interventional query.

      Q: For this query \(P(Y_{X\leftarrow 1}=1)\), wouldn't the more correct English translation be:

      "The probability of having a high number of Nobel laureates if high chocolate consumption was made mandatory."

    6. We call these concepts ‘meta’ since they are one level above ‘regular’, simple SCM in the sense that they encode information about answering causal questions in another SCM.

      keep reading this sentence until it makes sense...or argue why it doesn't make sense

    7. More intriguingly, it does not matter where that L2 fact comes from since the formulation is independent of whether the model learns the fact and simply requires that the model knows about the fact. We state our second key insight as

      Good point to remember: we don't need to learn everything; some knowledge can be encoded directly, a priori.

    8. Example 1 serves to show how the rather abstract definition of an SCM can be made tangible to communicate what we believe about our observed data and more so the underlying data generating process.

      Does everyone agree that it's crystal clear now? (maybe not...)

    9. The Pearl’s Causal Hierarchy

      An important theoretical framework to read up on if you aren't familiar with it.

    10. It is clear how the observed correlation in this case corresponds to a direct causation according to

      We should draw these models out

    11. These models are castles in the air. They have no foundations whatsoever.” discrediting the models for lacking any identifiable notion to causality.

      discussion: Do we really need to just pick one of these options?

    12. Our explanation for this is that they are not only ‘stochastic parrots’ as already suggested by Bender et al. (2021) but sometimes also ‘causal parrots’ since they will also encounter correlations over causal facts during training in their vast oceans of textual data.

      Q: What was Bender's argument exactly?

    13. parameterized variants of SCMs such as the neural ones presented in (Xia et al., 2021

      to read: this sounds interesting

    14. y meta SCM

      Q: definition needed

    15. However, this conclusion is arguably nothing new, as most people would agree, and this is partly so because such obtained knowledge has been embedded as textual articles into encyclopedias such as Wikipedia, which are freely accessible

      Bit strange: this sounds like they are saying people know this because of Wikipedia, rather than from lived experience.

    16. IPEEE denotes the exogenous distribution

      Q: Can we get a definition of this?

    17. to our real world intuition since there is a bidirected edge X ↔ Y ∈ G(M2) with E3 being the underlying confounder

      **Intuition:** Whatever explains GDP (call it E3) also explains X and Y.

    18. The following block paragraph serves as a summary

      question: where does this paragraph come from? who wrote it?

    19. we take the former perspective pro causal AI/ML. We argue that the questions around causality can fuel research also on questions of recent debates such as how much ‘real’ progress towards AGI has been made since the advent of large scale models

      I would agree with this stance!

    20. countering opinions start to speak out against causal AI/ML (Bishop, 2021)

      Should we read this paper as well? Is there an updated paper or opinion piece from these researchers about why causal AI/ML isn't needed?

    21. Zecevic, Willig, Singh Dhami and Kersting. "Causal Parrots: Large Language Models May Talk Causality But Are Not Causal". In Transactions on Machine Learning Research, Aug, 2023.

    1. (Chen, NeurIPS, 2021) Chen, Lu, Rajeswaran, Lee, Grover, Laskin, Abbeel, Srinivas, and Mordatch. "Decision Transformer: Reinforcement Learning via Sequence Modeling". arXiv preprint arXiv:2106.01345v2, June, 2021.

      Quickly became a very influential paper, with a new idea of how to learn generative models of action prediction by training on sequences of returns, states, and actions from demonstration trajectories. No optimization of actions or rewards, but the target reward is an input.

    1. Kallus, N. (2020). DeepMatch: Balancing deep covariate representations for causal inference using adversarial training. In H. Daumé III, & A. Singh (Eds.), Proceedings of the 37th international conference on machine learning. In Proceedings of Machine Learning Research: vol. 119 (pp. 5067–5077). PMLR

    2. Using adversarial deep learning approaches to get a better correction for causal inference from observational data.

    1. "Causal Deep Learning" Authors:Jeroen Berrevoets, Krzysztof Kacprzyk, Zhaozhi Qian, Mihaela van der Schaar

      Very general and ambitious approach for representing the full continuous conceptual spectrum of Pearl's Causal Ladder, and the ability to model and learn parts of this from data.

      Discussed by Prof. van der Schaar at the ICML 2023 workshop on counterfactuals.

    1. Performing optimization in the latent space can more flexibly model underlying data distributions than mechanistic approaches in the original hypothesis space. However, extrapolative prediction in sparsely explored regions of the hypothesis space can be poor. In many scientific disciplines, hypothesis spaces can be vastly larger than what can be examined through experimentation. For instance, it is estimated that there are approximately 10^60 molecules, whereas even the largest chemical libraries contain fewer than 10^10 molecules. Therefore, there is a pressing need for methods to efficiently search through and identify high-quality candidate solutions in these largely unexplored regions.

      Question: how does this notion of hypothesis space relate to causal inference and reasoning?

    2. Wang et al. "Scientific discovery in the age of artificial intelligence", Nature, 2023.

      A paper about the current state of using AI/ML for scientific discovery, connected with the AI4Science workshops at major conferences.

      (NOTE: since Springer/Nature don't allow public pdfs to be linked without a paywall, we can't use hypothesis directly on the pdf of the paper, this link is to the website version of it which is what we'll use to guide discussion during the reading group.)

    3. Petersen, B. K. et al. Deep symbolic regression: recovering mathematical expressions from data via risk-seeking policy gradients. In International Conference on Learning Representations (2020).

      Description: Reinforcement learning uses neural networks to generate a mathematical expression sequentially by adding mathematical symbols from a predefined vocabulary and using the learned policy to decide which notation symbol to be added next. The mathematical formula is represented as a parse tree. The learned policy takes the parse tree as input to determine what leaf node to expand and what notation (from the vocabulary) to add.

    4. Reinforcement learning uses neural networks to generate a mathematical expression sequentially by adding mathematical symbols from a predefined vocabulary and using the learned policy to decide which notation symbol to be added next140. The mathematical formula is represented as a parse tree. The learned policy takes the parse tree as input to determine what leaf node to expand and what notation (from the vocabulary) to add

      very interesting approach

    5. In chemistry, models such as simplified molecular-input line-entry system (SMILES)-VAE155 can transform SMILES strings, which are molecular notations of chemical structures in the form of a discrete series of symbols that computers can easily understand, into a differentiable latent space that can be optimized using Bayesian optimization techniques (Fig. 3c).

      This could be useful for chemistry research for robotic labs.

    6. Neural operators are guaranteed to be discretization invariant, meaning that they can work on any discretization of inputs and converge to a limit upon mesh refinement. Once neural operators are trained, they can be evaluated at any resolution without the need for re-training. In contrast, the performance of standard neural networks can degrade when data resolution during deployment changes from model training.

      Look this up: anyone familiar with this? sounds complicated but very promising for domains with a large range of resolutions (medical-imaging, wildfire-management)

    7. Standard neural network models can be inadequate for scientific applications as they assume a fixed data discretization. This approach is unsuitable for many scientific datasets collected at varying resolutions and grids.

      Is discretized resolution of neural networks an issue for science?

    8. generating hypotheses

      Are any of the "generated hypotheses" more general than a molecular shape? Are they full hypothetical explanations for a problem? (yes)

    9. Applications of symbolic regression in physics use grammar VAEs150. These models represent discrete symbolic expressions as parse trees using context-free grammar and map the trees into a differentiable latent space. Bayesian optimization is then employed to optimize the latent space for symbolic laws while ensuring that the expressions are syntactically valid. In a related study, Brunton and colleagues151 introduced a method for differentiating symbolic rules by assigning trainable weights to predefined basis functions. Sparse regression was used to select a linear combination of the basis functions that accurately represented the dynamic system while maintaining compactness. Unlike equivariant neural networks, which use a predefined inductive bias to enforce symmetry, symmetry can be discovered as the characteristic behaviour of a domain. For instance, Liu and Tegmark152 described asymmetry as a smooth loss function and minimized the loss function to extract previously unknown symmetries. This approach was applied to uncover hidden symmetries in black-hole waveform datasets, revealing unexpected space–time structures that were historically challenging to find.

      This seems very important, even though I only understand half of it. My question is: can similar approaches be applied to planning in complex domains, or to meaning and truth in language?

    10. to address the difficulties that scientists care about, the development and evaluation of AI methods must be done in real-world scenarios, such as plausibly realizable synthesis paths in drug design217,218, and include well calibrated uncertainty estimators to assess the model’s reliability before transitioning it to real-world implementation

      It's important to move beyond toy models.

    11. However, current transfer-learning schemes can be ad hoc, lack theoretical guidance213 and are vulnerable to shifts in underlying distributions214. Although preliminary attempts have addressed this challenge215,216, more exploration is needed to systematically measure transferability across domains and prevent negative transfer.

      There is still a lot of work to do to know how to best use human knowledge to guide learning systems and how to reuse models in different domains.

    12. Another approach for using neural networks to solve mathematical problems is transforming a mathematical formula into a binary sequence of symbols. A neural network policy can then probabilistically and sequentially grow the sequence one binary character at a time6. By designing a reward that measures the ability to refute the conjecture, this approach can find a refutation to a mathematical conjecture without prior knowledge about the mathematical problem.

      A nice idea to learn a formula of symbols which can be evaluated logically for truth. But do they mention more general approaches, such as using SAT solvers for this task? See Vijay Ganesh's work.

    13. foresighted

      is "foresighted" a word?

    14. AI methods have become invaluable when hypotheses involve complex objects such as molecules. For instance, in protein folding, AlphaFold210 can predict the 3D atom coordinates of proteins from amino acid sequences with atomic accuracy, even for proteins whose structure is unlike any of the proteins in the training dataset.

      This is an important category, but it can't apply to all fields and will have a limit to what it can do to move science forward. It's also very dependent on vast computing resources.

    15. Transformer architectures

      Question: what is the inductive bias of Transformers for NLP? Can we define the symmetries that are implicitly leveraged in the architecture?

    16. Such pretrained models96,97,98 with a broad understanding of a scientific domain are general-purpose predictors that can be adapted for various tasks, thereby improving label efficiency and surpassing purely supervised methods8.

      Pre-trained models: these are obviously important and powerful, they almost always work better than training from scratch.

      general-purpose predictors: However, we should be suspicious of accepting this claim that they are general purpose predictors. Why?

      • Have all of the scenarios been tested?
      • Does the system have a general underlying model?
      • Is there some bias in the training and testing data?

      Example:
      • You pretrain a model on the motion of objects on a plane, such as a pool table, and learn a very good model to predict movement. Now, does it work if the table is curved, or even has bumps and imperfections?
      • Now train it on 3D Newtonian examples: will it predict relativistic effects? (No)

    17. In the analysis of scientific images, objects do not change when translated in the image, meaning that image segmentation masks are translationally equivariant as they change equivalently when input pixels are translated.

      an example of symmetry
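
      A tiny sketch of what that equivariance means operationally (my own toy check, not from the paper): segmenting a shifted image should equal shifting the segmentation.

      ```python
      import numpy as np

      def shift(img, dy, dx):
          """Translate an (H, W) image by (dy, dx), wrapping around for simplicity."""
          return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

      def is_translation_equivariant(segment, img, dy=5, dx=3):
          """Equivariance check: segment(shift(x)) == shift(segment(x))."""
          return np.allclose(segment(shift(img, dy, dx)), shift(segment(img), dy, dx))

      img = np.random.rand(32, 32)
      per_pixel_threshold = lambda x: (x > 0.5).astype(float)        # trivially equivariant
      print(is_translation_equivariant(per_pixel_threshold, img))    # True
      ```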

    18. Symmetry is a widely studied concept in geometry69. It can be described in terms of invariance and equivariance (Box 1) to represent the behaviour of a mathematical function, such as a neural feature encoder, under a group of transformations, such as the SE(3) group in rigid body dynamics.

      Symmetry is a very broad concept even beyond geometry, although that is the easiest area to think about. If you are interested, it is worth looking into category theory and symmetry more generally. If you can find a type of symmetry that no one has, for a meaningful categorical/geometric pattern that relates to a real type of data, task or domain, then you might be able to start the next new architecture revolution.

    19. Another strategy for data labelling leverages surrogate models trained on manually labelled data to annotate unlabelled samples and uses these predicted pseudo-labels to supervise downstream predictive models.

      This kind of bootstrapping of human labelling is what made ChatGPT (v3) break through the level of coherence that caused so much excitement in Nov 2022 and afterwards.

      It is also becoming a very common strategy, seemingly replacing an entire industry of full human labelling, with a more focussed process of label-learn-pseudolabel-refine-repeat.
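
      A bare-bones version of that loop, just to pin down the mechanics (scikit-learn on toy data; thresholds and models are arbitrary choices of mine):

      ```python
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      X_lab, y_lab = rng.normal(size=(200, 5)), rng.integers(0, 2, 200)   # small labelled set
      X_unlab = rng.normal(size=(5_000, 5))                               # large unlabelled pool

      # 1. Train a surrogate model on the manually labelled data
      surrogate = LogisticRegression().fit(X_lab, y_lab)

      # 2. Pseudo-label the unlabelled pool, keeping only confident predictions
      proba = surrogate.predict_proba(X_unlab)
      confident = proba.max(axis=1) > 0.6
      pseudo_y = proba.argmax(axis=1)[confident]

      # 3. Train the downstream model on labelled + pseudo-labelled data
      X_all = np.vstack([X_lab, X_unlab[confident]])
      y_all = np.concatenate([y_lab, pseudo_y])
      downstream = LogisticRegression().fit(X_all, y_all)
      ```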

    20. To identify rare events for future scientific enquiry, deep-learning methods18 replace pre-programmed hardware event triggers with algorithms that search for outlying signals to detect unforeseen or rare phenomena

      The importance of filtering out irrelevant data.

    21. Recent findings demonstrate the potential for unsupervised language AI models to capture complex scientific concepts15, such as the periodic table, and predict applications of functional materials years before their discovery, suggesting that latent knowledge regarding future discoveries may be embedded in past publications.

      This is one I often point to, and it wasn't even using the latest transformer approach to language modelling.

    22. inductive biases (Box 1), which are assumptions representing structure, symmetry, constraints and prior knowledge as compact mathematical statements. However, applying these laws can lead to equations that are too complex for humans to solve, even with traditional numerical methods9. An emerging approach is incorporating scientific knowledge into AI models by including information about fundamental equations, such as the laws of physics or principles of molecular structure and binding in protein folding. Such inductive biases can enhance AI models by reducing the number of training examples needed to achieve the same level of accuracy10 and scaling analyses to a vast space of unexplored scientific hypotheses11.

      Inductive biases: these are becoming more and more critical to understand, and are a good place for academic researchers to focus for new advances, since they don't generally depend on scale or vast amounts of data. These are fundamental insights into the symmetries and structure of a domain, task or architecture.

    23. Box 1 Glossary

      A good set of definitions of various terms.

    24. and coupled with new algorithms

      Almost an afterthought here; I would cast it differently: the new algorithms are a major part of it as well.

      Listed algorithm types:
      * geometric deep learning
      * self-supervised learning of foundation models
      * generative models
      * reinforcement learning

    25. geometric deep learning (Box 1) has proved to be helpful in integrating scientific knowledge, presented as compact mathematical statements of physical relationships, prior distributions, constraints and other complex descriptors, such as the geometry of atoms in molecules

      geometric deep learning: An interesting broad category for graph learning and other methods; is this a common way to refer to this subfield?

    1. "Causal Deep Learning" Authors: Jeroen Berrevoets, Krzysztof Kacprzyk, Zhaozhi Qian, Mihaela van der Schaar

      Very general and ambitious approach for representing the full continuous conceptual spectrum of Pearl's Causal Ladder, and the ability to model and learn parts of this from data.

      Discussed by Prof. van der Schaar at the ICML 2023 workshop on counterfactuals.

    1. (Cousineau, Verter, Murphy and Pineau, 2023) "Estimating causal effects with optimization-based methods: A review and empirical comparison"

    2. Bias-variance trade-off

      The Bias - Variance Tradeoff!

    1. To avoid such bias, a fundamental aspect in the research design of studies of causal inference is the identification strategy: a clear definition of the sources of variation in the data that can be used to estimate the causal effect of interest.

      To avoid making false conclusions, studies must identify all the sources of variation. Is this even possible in most cases?

    2. Matching: This approach seeks to replicate a balanced experimental design using observational data by finding close matches between pairs or groups of units and separating out the ones that received a specified treatment from those that did not, thus defining the control groups.

      Matching approach to dealing with sampling bias. Basically, use some intrinsic (or other) metric about the situations to cluster them so that "similar" situations will be dealt with similarly; then the analysis is carried out on those clusters. The number of clusters has to be defined, and some method, like k-means, is often used. Depends a lot on the similarity metric, the clustering approach, and other assumptions.
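
      A toy illustration of that idea (k-means on covariates, then compare treated vs. control outcomes within each cluster; this is my own sketch of the general matching idea, not the paper's estimator):

      ```python
      import numpy as np
      from sklearn.cluster import KMeans

      def clustered_matching_effect(X, treated, y, n_clusters=10, seed=0):
          """Crude matching stand-in: cluster 'similar' units by their covariates X,
          then average the treated-minus-control outcome gap within each cluster."""
          labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X)
          effects, weights = [], []
          for c in range(n_clusters):
              in_c = labels == c
              t, u = in_c & treated, in_c & ~treated
              if t.any() and u.any():                    # need both groups in the cluster
                  effects.append(y[t].mean() - y[u].mean())
                  weights.append(in_c.sum())
          return np.average(effects, weights=weights)    # clusters weighted by size
      ```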

    3. Terwiesch, 2022 - "A Review of Empirical Operations Management over the Last Two Decades" Listed as an important review of methods for addressing biases in operations management by explicitly addressing causality.

    1. Shayan Shirahmad Gale Bagi, Zahra Gharaee, Oliver Schulte, and Mark Crowley. "Generative Causal Representation Learning for Out-of-Distribution Motion Forecasting". In International Conference on Machine Learning (ICML), Honolulu, Hawaii, USA, Jul 2023.

    1. "Causal Triplet: An Open Challenge for Intervention-centric Causal Representation Learning" Yuejiang Liu1, 2,* YUEJIANG.LIU@EPFL.CH Alexandre Alahi2 ALEXANDRE.ALAHI@EPFL.CH Chris Russell1 CMRUSS@AMAZON.DE Max Horn1 HORNMAX@AMAZON.DE Dominik Zietlow1 ZIETLD@AMAZON.DE Bernhard Sch ̈olkopf1, 3 BS@TUEBINGEN.MPG.DE Francesco Locatello1 LOCATELF@AMAZON.DE

    1. Wu, Prabhumoye, Yeon Min, Bisk, Salakhutdinov, Azaria, Mitchell and Li. "SPRING: GPT-4 Out-performs RL Algorithms by Studying Papers and Reasoning". arXiv preprint arXiv:2305.15486v2, May, 2023.

    2. Quantitatively, SPRING with GPT-4 outperforms all state-of-the-art RL baselines, trained for 1M steps, without any training.

      Them's fighten' words!

      I haven't read it yet, but we're putting it on the list for this fall's reading group. Seriously, a strong result with a very strong implied claim. They are careful to say it's from their empirical results; very worth a look. I suspect the amount of implicit knowledge in the papers, text, and DAG is helping to do this.

      The Big Question: is their comparison to RL baselines fair; are they being trained from scratch? What does a fair comparison of any from-scratch model (RL or supervised) even mean when compared to an LLM approach (or any approach using a foundation model), when that model is not really from scratch?

    1. Discussion of the paper:

      Ghojogh B, Ghodsi A, Karray F, Crowley M. Theoretical Connection between Locally Linear Embedding, Factor Analysis, and Probabilistic PCA. Proceedings of the Canadian Conference on Artificial Intelligence [Internet]. 2022 May 27; Available from: https://caiac.pubpub.org/pub/7eqtuyyc

    1. "The Age of AI has begun : Artificial intelligence is as revolutionary as mobile phones and the Internet." Bill Gates, March 21, 2023. GatesNotes

    1. It should not be used as a primary decision-making tool, but instead as a complement to other methods of determining the source of a piece of text.

      This is true of any of these LLM models actually for any task.

    1. Feng, 2022. "Training-Free Structured Diffusion Guidance for Compositional Text-to-Image Synthesis"

      Shared and found via Gowthami Somepalli (@gowthami@sigmoid.social on Mastodon): "StructureDiffusion: Improve the compositional generation capabilities of text-to-image #diffusion models by modifying the text guidance by using a constituency tree or a scene graph."

    1. Training language models to follow instructions with human feedback

      Original paper for discussion of the Reinforcement Learning from Human Feedback (RLHF) algorithm.

    1. LaMDA: Language Models for Dialog Application

      "LaMDA: Language Models for Dialog Application" Meta's introduction of LaMDA v1 Large Language Model.

  5. Sep 2023
  6. Aug 2023
    1. Title: Delays, Detours, and Forks in the Road: Latent State Models of Training Dynamics. Authors: Michael Y. Hu, Angelica Chen, Naomi Saphra, Kyunghyun Cho. Note: This paper seems cool, using older interpretable machine learning models (graphical models) to understand what is going on inside a deep neural network.

      Link: https://arxiv.org/pdf/2308.09543.pdf

  7. Jul 2023
    1. “Rung 1.5” Pearl’s ladder of causation [1, 10] ranks structures in a similar way as we do, i.e., increasing a model’s causal knowledge will yield a higher place upon his ladder. Like Pearl, we have three different levels in our scale. However, they do not correspond one-to-one.

      They rescale Pearl's ladder levels downwards and define a new scale, arguing that the original definition of counterfactuals as a different level on its own actually combines together multiple types of added reasoning complexity.

    1. They think BoN moves reward mass around from low-reward samples to high-reward samples

    2. We find empirically that for best-of-n (BoN) sampling

      They found this relationship surprising, but it does seem to fit better than other functions which mimic the general shape.

      Question: is there a good reason why?

    3. d

      They use the square root since KL scales quadratically, so it gets rid of the power of 2.
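
      My reading of why the square root shows up (notation mine, hedged): if the policy's shift away from the reference is measured by some small ε, then the KL grows roughly like ε², so plotting against \(d := \sqrt{D_{\mathrm{KL}}(\pi \,\|\, \pi_{\text{ref}})}\) makes the x-axis roughly proportional to the shift itself.

      ```latex
      D_{\mathrm{KL}}\!\left(\pi \,\|\, \pi_{\text{ref}}\right) \approx c\,\epsilon^{2}
      \quad\Longrightarrow\quad
      d := \sqrt{D_{\mathrm{KL}}\!\left(\pi \,\|\, \pi_{\text{ref}}\right)} \propto \epsilon
      ```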

    4. RL

      "for ... we don't see any overoptimization, we just see the .. monotonically improves"

      For which, I don't see a linear growth here that might not bend down later.

    1. The MuZero paper, for model-based learning when the model is not directly available.

    1. Daniel Adiwardana, Minh-Thang Luong, David R. So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, Quoc V. Le. "Towards a Human-like Open-Domain Chatbot". Google Research, Brain Team.

      Defined the SSI metric for chatbots used in the LaMDA paper by Google.

    1. LaMDA pre-training as a language model.

      Does this figure really mean anything? There is no 3 in the paper at all.

    2. Safety does not seem to benefit much from model scaling without fine-tuning.

      Safety does not seem to be improved by larger models.

    3. How LaMDA handles groundedness through interactions with an external information retrieval system

      Does LaMDA always ask these questions? How far down the chain does it go?

    4. Daniel Adiwardana, Minh-Thang Luong, David R. So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, and Quoc V. Le. Towards a human-like open-domain chatbot. arXiv preprint arXiv:2001.09977, 2020

      SSI metric definitions

    5. Using one model for both generation and discrimination enables an efficient combined generate-and-discriminate procedure.

      bidirectional model benefits

    6. LaMDA Mount Everest provides facts that could not be attributed to known sources in about 30% of response

      Even with all this work, it will hallucinate about 30% of the time

    1. Because DDPG is an off-policy algorithm, the replay buffer can be large, allowing the algorithm to benefit from learning across a set of uncorrelated transitions.

      Off-policy algorithms can have a larger replay buffer.

    2. One challenge when using neural networks for reinforcement learning is that most optimization algorithms assume that the samples are independently and identically distributed. Obviously, when the samples are generated from exploring sequentially in an environment this assumption no longer holds. Additionally, to make efficient use of hardware optimizations, it is essential to learn in mini-batches, rather than online. As in DQN, we used a replay buffer to address these issues

      Motivation for mini-batches of training experiences and for the use of replay buffers for Deep RL.
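
      A minimal replay buffer, just to make the motivation concrete: uniform sampling from a large buffer breaks the temporal correlation between consecutive transitions and gives roughly i.i.d. minibatches.

      ```python
      import random
      from collections import deque

      class ReplayBuffer:
          def __init__(self, capacity=1_000_000):
              self.buffer = deque(maxlen=capacity)     # oldest transitions fall off the end

          def add(self, state, action, reward, next_state, done):
              self.buffer.append((state, action, reward, next_state, done))

          def sample(self, batch_size):
              batch = random.sample(self.buffer, batch_size)   # uncorrelated transitions
              return list(zip(*batch))                 # tuples of states, actions, rewards, ...
      ```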

    3. The DPG algorithm maintains a parameterized actor function μ(s|θ^μ) which specifies the current policy by deterministically mapping states to a specific action. The critic Q(s, a) is learned using the Bellman equation as in Q-learning. The actor is updated by applying the chain rule to the expected return from the start distribution J with respect to the actor parameters: \(\nabla_{\theta^\mu} J \approx \mathbb{E}_{s_t \sim \rho^\beta}\big[\nabla_{\theta^\mu} Q(s, a|\theta^Q)\,\big|_{s=s_t,\, a=\mu(s_t|\theta^\mu)}\big] = \mathbb{E}_{s_t \sim \rho^\beta}\big[\nabla_a Q(s, a|\theta^Q)\,\big|_{s=s_t,\, a=\mu(s_t)}\, \nabla_{\theta^\mu} \mu(s|\theta^\mu)\,\big|_{s=s_t}\big]\) (6) Silver et al. (2014) proved that this is the policy gradient, the gradient of the policy’s performance

      The original DPG algorithm (non-deep) takes the Actor-Critic idea and makes the Actor deterministic.
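
      In code, the actor update is just gradient ascent on Q evaluated at the actor's own action, with autograd doing the chain rule. A PyTorch-style sketch (the `actor`, `critic`, `actor_opt`, and `states` objects are assumed to exist; this is not the paper's code):

      ```python
      # Deterministic policy gradient step (sketch): maximize Q(s, mu(s)) w.r.t. actor parameters.
      actor_opt.zero_grad()
      actions = actor(states)                        # a = mu(s | theta_mu)
      actor_loss = -critic(states, actions).mean()   # ascend Q  <=>  descend -Q
      actor_loss.backward()                          # chain rule: dQ/da * d mu/d theta_mu
      actor_opt.step()
      ```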

    4. Interestingly, all of our experiments used substantially fewer steps of experience than was used by DQN learning to find solutions in the Atari domain.

      Training with DDPG seems to require fewer steps/examples than DQN.

    5. The original DPG paper evaluated the algorithm with toy problems using tile-coding and linear function approximators. It demonstrated data efficiency advantages for off-policy DPG over both on- and off-policy stochastic actor critic.

      (non-deep) DPG used tile-coding and linear VFAs.

    6. It can be challenging to learn accurate value estimates. Q-learning, for example, is prone to over-estimating values (Hasselt, 2010). We examined DDPG’s estimates empirically by comparing the values estimated by Q after training with the true returns seen on test episodes. Figure 3 shows that in simple tasks DDPG estimates returns accurately without systematic biases. For harder tasks the Q estimates are worse, but DDPG is still able to learn good policies.

      DDPG avoids the over-estimation problem that Q-learning has without using Double Q-learning.

    7. It is not possible to straightforwardly apply Q-learning to continuous action spaces, because in continuous spaces finding the greedy policy requires an optimization of a_t at every timestep; this optimization is too slow to be practical with large, unconstrained function approximators and nontrivial action spaces

      Why it is not possible for pure Q-learning to handle continuous action spaces.

    8. Our contribution here is to provide modifications to DPG, inspired by the success of DQN, which allow it to use neural network function approximators to learn in large state and action spaces online

      contribution of this paper.

    9. Directly implementing Q learning (equation 4) with neural networks proved to be unstable in many environments.
    10. As with Q learning, introducing non-linear function approximators means that convergence is no longer guaranteed. However, such approximators appear essential in order to learn and generalize on large state spaces.

      Why Q-learning can't have guaranteed convergence.

    11. We refer to our algorithm as Deep DPG (DDPG, Algorithm 1).
    1. IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures

      (Espeholt, ICML, 2018) "IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures"

    2. We achieve stable learning at high throughput by combining decoupled acting and learning with a novel off-policy correction method called V-trace.
    3. we aim to solve a large collection of tasks using a single reinforcement learning agent with a single set of parameters
    4. the progress has been primarily in single task performance
    5. multi-task reinforcement learning

      Task: Multi-task Reinforcement Learning

    6. IMPALA (Figure 1) uses an actor-critic setup to learn a policy π and a baseline function V^π. The process of generating experiences is decoupled from learning the parameters of π and V^π. The architecture consists of a set of actors, repeatedly generating trajectories of experience, and one or more learners that use the experiences sent from actors to learn π off-policy.
    7. an agent is trained on each task
    8. scalability
    9. separately
    10. We are interested in developing new methods capable of mastering a diverse set of tasks simultaneously as well as environments suitable for evaluating such methods.

      Task: train agents that can do more than one thing.

    11. IMPALA actors communicate trajectories of experience (sequences of states, actions, and rewards) to a centralised learner
    12. full trajectories of experience
    13. aggressively parallelising all time independent operations
    14. learning becomes off-policy
    15. IMPALA achieves exceptionally high data throughput rates of 250,000 frames per second, making it over 30 times faster than single-machine A3C
    16. With the introduction of very deep model architectures, the speed of a single GPU is often the limiting factor during training.
    17. IMPALA is also more data efficient than A3C based agents and more robust to hyperparameter values and network architectures
    18. IMPALA use synchronised parameter update which is vital to maintain data efficiency when scaling to many machines
    19. A3C
    1. Yann LeCun released his vision for the future of Artificial Intelligence research in 2022, and it sounds a lot like Reinforcement Learning.

    1. Paper that evaluated the existing Double Q-Learning algorithm on the new DQN approach and validated that it is very effective in the Deep RL realm.

    2. Q-learning (Watkins, 1989) is one of the most popular reinforcement learning algorithms, but it is known to sometimes learn unrealistically high action values because it includes a maximization step over estimated action values, which tends to prefer overestimated to underestimated values

      Q-learning tends to overestimate the value of an action.

    3. noise
    4. unify these views
    5. we can learn a parameterized value function
    6. insufficiently flexible function approximation
    7. Both the target network and the experience replay dramatically improve the performance of the algorithm
    8. The target used by DQN is then
    9. show overestimations can occur when the action values are inaccurate, irrespective of the source of approximation error

      They show overestimations occur when there is approximation error in the value function approximation for Q(s,a).
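
      For reference, the two targets (as I recall them from the paper):

      ```latex
      Y^{\text{DQN}}_t = r_{t+1} + \gamma \max_{a} Q\!\left(s_{t+1}, a;\, \theta^{-}_t\right)
      \qquad
      Y^{\text{DoubleDQN}}_t = r_{t+1} + \gamma\, Q\!\left(s_{t+1},\, \operatorname*{arg\,max}_{a} Q\!\left(s_{t+1}, a;\, \theta_t\right);\, \theta^{-}_t\right)
      ```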

    10. θt
    11. upward bias
    12. In the original Double Q-learning algorithm, two value functions are learned by assigning each experience randomly to update one of the two value functions, such that there are two sets of weights, θ and θ′
    13. θ′t
    14. while Double Q-learning is unbiased.
    15. The orange bars show the bias in a single Q-learning update when the action values are Q(s, a) = V∗(s) + ε_a and the errors {ε_a}^m_{a=1} are independent standard normal random variables. The second set of action values Q′, used for the blue bars, was generated identically and independently. All bars are the average of 100 repetitions.
    1. DDPG
    2. multiplying the rewards generated from an environment by some scalar
    3. ELU
    4. This is akin to clipping the rewards to [0, 1]
    5. network structure of

      Different activation functions tried.

    6. Hyperparameters

      hyperparameters: alpha, dropout prob, number of layers in your network, width of network layers, activation function (ReLU, ELU, tanh, ...), CNN?, RNN?, ..., epsilon (for the e-greedy policy)

      parameters: specific to the problem - parameters of Q(s,a) and the policy pi (theta, w), gamma (how important is the future?)

    7. PPO
    1. TRPO uses a hard constraint rather than a penalty because it is hard to choose a single value of β that performs well across different problems
    2. gradient estimator
    3. we only ignore the change in probability ratio when it would make the objective improve,and we include it when it makes the objective worse.