15 Matching Annotations
  1. Nov 2020
    1. There are two steps in our framework: pre-training and fine-tuning. During pre-training, the model is trained on unlabeled data over different pre-training tasks. For fine-tuning, the BERT model is first initialized with the pre-trained parameters, and all of the parameters are fine-tuned using labeled data from the downstream tasks.

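The two-step recipe quoted above (pre-train on unlabeled data, then initialize from those parameters and fine-tune on labeled data) can be sketched with a toy model. This is an illustrative stand-in, not the actual BERT implementation; the single scalar "parameter" and the squared-error objectives are assumptions made purely to show the hand-off from the unlabeled phase to the labeled phase.

```python
def pretrain(unlabeled, steps=100, lr=0.1):
    """Fit a single scalar parameter to unlabeled data
    (a toy stand-in for masked-LM pre-training)."""
    w = 0.0
    for _ in range(steps):
        for x in unlabeled:
            w -= lr * (w - x) / len(unlabeled)  # gradient of 0.5*(w - x)^2
    return w

def fine_tune(w_init, labeled, steps=100, lr=0.1):
    """Initialize from the pre-trained parameter, then fit
    labeled (x, y) pairs, tuning all parameters (here w and b)."""
    w, b = w_init, 0.0
    for _ in range(steps):
        for x, y in labeled:
            err = (w * x + b) - y
            w -= lr * err * x / len(labeled)
            b -= lr * err / len(labeled)
    return w, b

w_pre = pretrain([1.0, 2.0, 3.0])                   # unlabeled phase
w, b = fine_tune(w_pre, [(1.0, 2.0), (2.0, 4.0)])   # labeled downstream task
```

The point of the sketch is the hand-off: `fine_tune` does not start from scratch but from `w_pre`, mirroring how BERT's downstream models are initialized with the pre-trained parameters before all parameters are updated on labeled data.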

  2. May 2019
  3. Mar 2019
    1. 0.5 mV·ms⁻¹

      0.5 mV*ms^-1

    2. 160 μm

      160 um

    3. 14
    4. TRN

      BIRNLEX:1721

    5. 12.6 ± 0.4 ms

      12.6 +/- 0.4 ms

    6. 0.7 μF/cm²

      0.7 uF/cm^2

    7. 12 mV
    8. 170 Ω·cm

      170 ohm*cm

    9. rat

      BIRNLEX:160

    10. 400 μm

      400 um

    11. 7×10⁻⁵ cm/s

      7e-05 cm/s

    12. −79 mV

      -79 mV

  4. Nov 2018