5 Matching Annotations
- Feb 2023
The code to reproduce our results can be found here.
- Jan 2023
This input embedding is the initial value of the residual stream, which all attention layers and MLPs read from and write to.
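The residual-stream idea above can be sketched in a few lines: every block reads the current stream and adds its output back, with the input embedding as the initial value. This is a toy illustration, not a real transformer; the block names and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 8  # hypothetical model width

# Stand-ins for attention / MLP blocks: each one reads the current
# residual stream and writes (adds) its output back into it.
def make_block(seed):
    W = np.random.default_rng(seed).normal(scale=0.1, size=(d_model, d_model))
    def block(x):
        return x @ W
    return block

blocks = [make_block(s) for s in range(4)]

# The input embedding is the initial value of the residual stream.
embedding = rng.normal(size=d_model)
stream = embedding.copy()

for block in blocks:
    stream = stream + block(stream)  # each layer adds its contribution
```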
- Apr 2022
Starting from random noise, we optimize an image to activate a particular neuron (layer mixed4a, unit 11).
And then we use that image as a kind of variable name to refer to the neuron, in a way that is more helpful than the layer number and the neuron index within the layer. This explanation is from one of Chris Olah's YouTube videos (https://www.youtube.com/watch?v=gXsKyZ_Y_i8)
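The optimization described above, gradient ascent on the input to maximize a neuron's activation, can be sketched with a toy "neuron". Here the neuron is just a hypothetical linear unit on a flattened image, not the real mixed4a unit, so the gradient is available in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "neuron": activation = w · x on a flattened 8x8 image.
w = rng.normal(size=64)

def neuron(x):
    return float(w @ x)

# Start from random noise and follow the activation gradient.
x = rng.normal(scale=0.1, size=64)
initial_activation = neuron(x)

lr = 0.1
for _ in range(100):
    x = x + lr * w              # gradient of (w @ x) w.r.t. x is w
    x = np.clip(x, -1.0, 1.0)   # keep "pixel" values bounded

final_activation = neuron(x)
```

In a real feature-visualization setup the gradient comes from backpropagation through the network, and regularizers (jitter, frequency penalties) keep the image natural-looking.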
- Jun 2020
Moreau, David, and Kristina Wiebels. ‘Assessing Change in Intervention Research: The Benefits of Composite Outcomes’, 2 June 2020. https://doi.org/10.31234/osf.io/t9hw3.
- pooling information
- combining assessments
- composite scores
- outcome measures
- evaluate effectiveness
- Jun 2019
To interpret a model, we require the following insights:
- the features in the model which are most important;
- for any single prediction from the model, the effect of each feature in the data on that particular prediction;
- the effect of each feature over a large number of possible predictions.
Machine learning interpretability
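The three insights above can each be computed for a simple model. The sketch below uses a hypothetical linear model, where per-prediction effects decompose exactly as weight times deviation from the mean, and permutation importance for the global ranking; none of these names come from the quoted source.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear model on three features.
X = rng.normal(size=(200, 3))
w = np.array([2.0, 0.5, 0.0])
y = X @ w

def predict(X):
    return X @ w

# (1) Most important features: permutation importance — shuffle one
# feature at a time and measure how much the squared error grows.
def permutation_importance(X, y, predict):
    base = np.mean((predict(X) - y) ** 2)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        scores.append(np.mean((predict(Xp) - y) ** 2) - base)
    return np.array(scores)

importances = permutation_importance(X, y, predict)

# (2) Effect of each feature on one particular prediction: for a linear
# model, the contribution of feature i is w_i * (x_i - mean_i).
x0 = X[0]
effects_single = w * (x0 - X.mean(axis=0))

# (3) Effect of each feature over many predictions: average the
# magnitude of the per-prediction contributions.
effects_global = np.abs(w * (X - X.mean(axis=0))).mean(axis=0)
```

Tools like SHAP generalize step (2) beyond linear models, where the decomposition is no longer exact in closed form.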