18 Matching Annotations
  1. Oct 2019
  2. May 2019
    1. cost function

      But what if the cost function (reward function) is not available? Or is it, in fact, usually available in the cases of interest?

  3. Dec 2018
  4. May 2018
    1. Using research to prove something you passionately believe in can lead to confirmation bias, where you only pay attention to results that support your existing view.

      be aware of confirmation bias

    2. The natural temptation might be to set your aims as high as possible and make your project as comprehensive as you can. Such projects are easy to imagine, but much harder to implement.

      Start small when writing your aims!

    3. Another approach is to test the basic assumptions that others in the field have used

      Testing the basic assumptions will be of great value to the field.

  5. Jan 2018
  6. Dec 2017
    1. Warp divergence

      Warp divergence occurs when threads inside a warp branch onto different execution paths. Instead of all 32 threads in the warp executing the same instruction, the two sides of the branch are serialized, so on average only half of the threads are active for each issued instruction. This causes roughly a 50% performance loss.

    2. make as many consecutive threads as possible do the same thing

      An important take-home message for dealing with branch divergence.
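
      The 50% figure can be made concrete with a toy cost model (plain C rather than CUDA; the warp size of 32, the instruction count, and the two-way serialization are all modeling assumptions, not measurements):

      ```c
      #include <stdio.h>

      #define WARP_SIZE 32

      int main(void) {
          int insts_per_path = 10;  /* instructions on each side of the branch */

          /* No divergence: all WARP_SIZE threads take the same path, so each
           * instruction is issued once for the whole warp. */
          int slots_uniform = insts_per_path;

          /* Two-way divergence (e.g. branching on threadIdx.x % 2): the two
           * paths are serialized, so both must be issued for the warp. */
          int slots_divergent = 2 * insts_per_path;

          printf("uniform warp (%d threads): %d issue slots\n",
                 WARP_SIZE, slots_uniform);
          printf("divergent warp: %d issue slots\n", slots_divergent);
          printf("utilization under divergence: %.0f%%\n",
                 100.0 * slots_uniform / slots_divergent);
          return 0;
      }
      ```

      The model also shows why the take-home message below works: if consecutive threads branch the same way, no warp ever holds both paths, and no issue slots are wasted.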

    3. Warps are run concurrently in an SM

      This statement seems to conflict with the statement that only one warp is executed at a time per SM; the resolution is that many warps are resident on an SM concurrently, while instructions are issued from only one warp at a time.

    4. Each SM has multiple processors but only one instruction unit

      Q: There is only one instruction unit in an SM, and an SM has many warps. Does this imply that all warps within the same SM execute the same set of instructions?

      A: No. Each SM has a (zero-overhead) warp scheduler that time-slices among warps, giving priority to warps that are ready to issue. Take a look at the figure on page 6 of http://www.math.ncku.edu.tw/~mhchen/HPC/CUDA/GPGPU_Lecture5.pdf
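
      The scheduler's latency hiding can be sketched with a toy simulation (plain C; the 4 warps, the 4-cycle stall after each issue, and the first-ready selection rule are assumptions of the model, not real hardware parameters):

      ```c
      #include <stdio.h>

      #define NUM_WARPS 4
      #define LATENCY   4   /* cycles a warp stalls after issuing (assumed) */

      int main(void) {
          int stall_until[NUM_WARPS] = {0};  /* cycle at which warp is ready */
          int issued[NUM_WARPS] = {0};

          for (int cycle = 0; cycle < 8; cycle++) {
              int picked = -1;
              /* Single instruction unit: pick at most one ready warp. */
              for (int w = 0; w < NUM_WARPS; w++) {
                  if (cycle >= stall_until[w]) { picked = w; break; }
              }
              if (picked >= 0) {
                  issued[picked]++;
                  stall_until[picked] = cycle + LATENCY;  /* warp stalls */
                  printf("cycle %d: issue from warp %d\n", cycle, picked);
              } else {
                  printf("cycle %d: no warp ready (stall)\n", cycle);
              }
          }
          for (int w = 0; w < NUM_WARPS; w++)
              printf("warp %d issued %d instruction(s)\n", w, issued[w]);
          return 0;
      }
      ```

      With the stall latency equal to the number of warps, every cycle finds exactly one ready warp: the single instruction unit stays busy even though each individual warp is stalled 3 out of every 4 cycles.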

  7. Nov 2017
    1. MPI ping pong program
      • world_rank, partner_rank: private for each process?
      • ping_pong_count: shared?

      -> NO! Each process is a separate instance of the same program with its own memory space. Operations that a process carries out on the variables in its memory space do not affect the values of the variables in another process's memory space. To communicate the value of a variable from one process to another (e.g., to send the newly computed value of a variable X to another process), we use MPI_Send and MPI_Recv.

      **After MPI_Send, the sent data is packed into a buffer and the program continues (i.e., no receiver is needed for the sender to proceed, at least for messages small enough to be buffered).

      After MPI_Recv, however, the program blocks until it receives the data.**
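
      The "own memory space" point can be demonstrated without an MPI installation: in this plain-C fork() sketch (POSIX fork, not MPI; the variable name ping_pong_count is borrowed from the program above), the child's increment never reaches the parent, which is exactly why MPI needs MPI_Send/MPI_Recv to move values between ranks:

      ```c
      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/wait.h>
      #include <unistd.h>

      int main(void) {
          int ping_pong_count = 0;

          /* Like MPI ranks, parent and child are separate processes: fork()
           * gives the child its own copy of the entire address space. */
          pid_t pid = fork();
          if (pid < 0) { perror("fork"); return 1; }

          if (pid == 0) {                 /* child (think "rank 1") */
              ping_pong_count++;          /* modifies only the child's copy */
              printf("child:  ping_pong_count = %d\n", ping_pong_count);
              return 0;
          }

          wait(NULL);                     /* parent (think "rank 0") */
          /* The parent's copy is untouched: without explicit communication
           * (MPI_Send/MPI_Recv in MPI), the update is invisible here. */
          printf("parent: ping_pong_count = %d\n", ping_pong_count);
          return 0;
      }
      ```

      In the real ping-pong program the two ranks alternate MPI_Send and MPI_Recv calls, incrementing ping_pong_count each time the message changes hands.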

  8. Dec 2016
  9. Aug 2016
  10. Jul 2016
  11. Jun 2016
    1. understand the basic Image Classification pipeline and the data-driven approach (train/predict stages)

      This is my first note using 'Hypothesis.io'