24 Matching Annotations
  1. Dec 2019
    1. To view with pymol, the trajectory must also be converted to .pdb format.

      This is no longer true: you can just load the trajectory in .xtc format onto the structure using PyMOL's load_traj command.
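      A minimal sketch of that workflow in the PyMOL command line (file names are placeholders, not from the tutorial):

      ```
      load structure.gro        # load the (converted) structure first
      load_traj trajectory.xtc  # then map the trajectory frames onto it
      ```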

    2. Since the martinize.py script already generated position restraints (thanks to the -p flag),

      This is not entirely true; this changed again in even more recent versions of GROMACS (2018 or 2019), since now you have to specify the position restraints manually. Therefore, the command to use in this case is:

      gmx grompp -f minimization.mdp -c system-solvated.gro -p system.top -o minimization.tpr -r system-solvated.gro
      
    1. add the appropriate number of water beads (the molecule name is W) to your system topology (.top);

      This means making a new topology file that is similar to the system-vaccum.top file, but adding the following as the last line:

      W            827
      

      827 is the number reported in the output of the previous solvate command. Check yours.
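      As a concrete sketch (the file name follows the tutorial's naming; 827 is the count from the tutorial's solvate output, so use your own number):

      ```shell
      # Append the water bead count as the last line of the system topology.
      printf 'W            827\n' >> system.top
      tail -n 1 system.top   # prints: W            827
      ```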

    2. $ genbox -cp minimization-vaccum.gro -cs water-box-CG_303K-1bar.gro -vdwd 0.21 -o system-solvated.gro

      The genbox command no longer exists in recent GROMACS versions. It has to be replaced with the solvate command, as follows:

      gmx solvate -cp minimization-vaccum.gro -cs water-box-CG_303K-1bar.gro -radius 0.21 -o system-solvated.gro
      
    3. In general, this tutorial is not updated to the latest GROMACS version. As a matter of fact, it is pretty outdated (it uses GROMACS 4.x or so).

  2. Oct 2019
    1. As the loss function in force-matching is a least-squares regression problem, the form of the expected prediction error is well-known (see the SI for a short derivation) and can be written as

      So what they did here is demonstrate that minimizing the loss on the forces is equivalent to an EPE calculation via the MSE used in common statistical models' error computation... cool.

    2. conservative

      Why can we say that the forces must be conservative? Or is it just a good approximation, since dealing with dissipation is perhaps too complex?

    3. Then we can decompose expression 4 as follows (see the SI for derivation):(6)

      This just looks like the common reducible/irreducible error decomposition in statistical models, where the reducible error is further decomposed into the variance and bias of the model (the bias-variance tradeoff).
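      In the standard statistics notation, the decomposition being alluded to reads (my own sketch of the generic bias-variance split, not the paper's exact symbols), for data $y = f(x) + \varepsilon$ with noise variance $\sigma^2$ and estimator $\hat{f}$:

      ```latex
      \mathbb{E}\left[(y - \hat{f}(x))^2\right]
        = \underbrace{\left(\mathbb{E}[\hat{f}(x)] - f(x)\right)^2}_{\text{bias}^2}
        + \underbrace{\mathbb{E}\left[\left(\hat{f}(x) - \mathbb{E}[\hat{f}(x)]\right)^2\right]}_{\text{variance}}
        + \underbrace{\sigma^2}_{\text{irreducible noise}}
      ```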

  3. Jan 2019
    1. If Alice still wants to encode Image 156 at 0.000156 and Image 157 at 0.000157 she better make sure that those two images are almost identical unless they want to sacrifice a huge amount of their score by choosing a tiny uncertainty.

      How can we choose images to be almost identical and have the same code? What's the algorithm or method behind this?

    2. With this code, they ace images they have never encountered in training and completely destroy their competition.

      It seems like the analogy between VAEs and the Autoencoding Olympics is taken a bit too far here; how is it relevant that they participated and won, without explaining anything about how they did it? These unnecessary details could be avoided.

  4. Jun 2018
    1. I guess that in general I tend to agree with most of the points except the first two. There are indeed simpler, and maybe more suitable, alternatives such as TOML as the author says.

    2. True only if you stick to a small subset. The full set is complex – much more so than XML or JSON.

      Again, not so true if you use a real/decent editor to read the files. If you don't, then this holds for any long file in any markup language... arguably.

    3. Can be hard to edit, especially for large files

      I don't think this is much of an issue because any real and decent editor should have the option to collapse/expand indent blocks without any effort.

    4. Insecure by default

      This isn't really much of an issue, since there is a safe_load function, as the author notes at the end of this section. Nothing to worry about, IMHO.
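      A minimal sketch of the point, assuming PyYAML is installed (the `yaml` package on PyPI):

      ```python
      import yaml  # PyYAML

      # safe_load happily parses plain data...
      assert yaml.safe_load("a: 1\nb: [2, 3]") == {"a": 1, "b": [2, 3]}

      # ...but refuses tags that would construct arbitrary Python objects,
      # which is exactly what makes plain yaml.load "insecure by default".
      try:
          yaml.safe_load('!!python/object/apply:os.system ["echo pwned"]')
      except yaml.constructor.ConstructorError:
          print("rejected")  # prints: rejected
      ```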

  5. Apr 2018
    1. “The number of moving parts is so vast, and several of them are under the control of different groups. There’s no way you could ever pull it together into an integrated system in the same way as you can in a single commercial product with, you know, a single maniac in the middle.”

      While there are indeed many moving parts, modern version control systems take care of that properly and automatically. The idea of a benevolent dictatorship dictating the course of the software is not wrong, but it has been shown to limit the software itself: no matter how smart or good the dictator is, he/she will never be able to think about every aspect and possible application of the software. And considering this is a tool for research and experimentation, limiting the tool on purpose is not a very good idea, if you ask me. EVEN THEN, an open-source model leaves space for a benevolent dictatorship (just as Linux and Google do for many of their FOSS projects), so in the worst-case scenario that isn't really an argument against an open-source, non-commercial development model.

    2. to the point where it’s no longer called “IPython”: The project rebranded itself as “Jupyter”

      The project was not really renamed; it forked, such that the language-agnostic parts are now part of Jupyter, while the Python part is still being developed as IPython. More info at http://ipython.org/

  6. Sep 2017
    1. what you really need is some form of mentor that’s an integral part of the user interface

      I was actually thinking about something somewhat related to this today. What if you put the effort, money, and "engineering" that goes into the video game industry into creating better "real"/serious software and UIs? How did the video game industry (and not only it, you could say the TV industry as well) get so much attention and relevance in today's world in the first place?

    2. They never have

      I don't think that nobody ever thinks about it; it is in fact a bit rare, but not THAT uncommon.

    3. incredible feats of dealing with information very, very rapidly and making something out of it.

      No wonder fast-paced shows and rushing through EVERYTHING are two key traits of the current generations... I guess.

    4. rather than being in a technological version of the 11th century.

      I don't understand this. Is AK comparing the current vision of technology in schools to being in the 11th century?

    5. that we’re driven genetically to learn the culture around us.

      This does look logical; it could imply an ability to survive by adapting "easily" to the environment/habitat. Maybe this is the root of our "advanced" intelligence?

  7. Jun 2017
    1. forcing

      This is always a controversial point for me: forcing people to embrace a specific interpretation and level of freedom seems paradoxical. But if not that, what else? We know the GPL was a success in many situations, so it kind of works.

    2. The open source world is full of programmers, but it is sorely missing designers, UX designers, musicians, CSRs, product managers

      I agree, but then... what can the open-source world/people/communities do to bring those kinds of designers, musicians, etc. to it, if not forcing them? As maybe happened with programmers at the beginning (this is just a guess)?

    1. If a string of quantum bits — or qubits, the quantum analog of the bits in a classical computer — is in superposition, it can, in some sense, canvass multiple solutions to the same problem simultaneously, which is why quantum computers promise speedups.

      I've found opinions claiming that this kind of speedup in QC is misleading. Is this just because it's hard to measure the qubit ensemble without making the wavefunction collapse, or are there other things that prevent this kind of speedup?