- Jan 2023
-
www.newyorker.com
-
The coaches let the teachers choose the direction for coaching. They usually know better than anyone what their difficulties are.
Everyone has something to work on, but does everyone really know what their weaknesses are? Or rather: [[Coaching]] is about helping the coachee develop new insight into their difficulties.
-
- Apr 2022
-
www.newyorker.com
-
Not long afterward, I watched Rafael Nadal play a tournament match on the Tennis Channel. The camera flashed to his coach, and the obvious struck me as interesting: even Rafael Nadal has a coach. Nearly every élite tennis player in the world does. Professional athletes use coaches to make sure they are as good as they can be.
What a great observation! It's obvious: even the best of us could benefit from a coach.
-
Osteen watched, silent and blank-faced the entire time, taking notes. My cheeks burned; I was mortified. I wished I’d never asked him along. I tried to be rational about the situation—the patient did fine. But I had let Osteen see my judgment fail; I’d let him see that I may not be who I want to be.
Ah the shame and pain of failure. So familiar, so hard. 😬 But that's the price of becoming better.
-
Coaching aimed at improving the performance of people who are already professionals is less usual. It’s also riskier: bad coaching can make people worse.
A good coach accelerates learning, even for seasoned professionals. There is also a downside: a bad coach hinders development.
-
-
dwarkeshpatel.com
-
Instead of meditating twenty minutes every day and getting marginally better over time, spend 10 days every year at a meditation retreat making giant leaps towards enlightenment, and just live your life for the other 355.
[[Barbell strategies]]
-
- Mar 2022
-
-
Elm applications are usually written following what we call TEA that stand for The Elm Architecture. It is just a loop that waits for events (clicks, etc.) and when they happen, it sends them to us so that we can react and change the interface accordingly. This animation explains The Elm Architecture cycle: GIF
Very nice animated diagram explaining TEA.
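To make the loop concrete, here is a minimal sketch of the model→update→view cycle that TEA describes, written in Python rather than Elm; the Model/Msg names and the counter example are my own, not from the article.

```python
# A minimal, hypothetical sketch (Python, not Elm) of the TEA loop:
# the runtime waits for messages (events), hands them to `update`,
# which returns a new model; `view` then re-renders from that model.

from dataclasses import dataclass

@dataclass
class Model:
    count: int = 0

def update(msg: str, model: Model) -> Model:
    # React to an event by returning a *new* model (TEA never mutates state).
    if msg == "Increment":
        return Model(model.count + 1)
    if msg == "Decrement":
        return Model(model.count - 1)
    return model

def view(model: Model) -> str:
    # In Elm this would produce Html; here it just returns a string.
    return f"count = {model.count}"

# The "runtime loop": feed events in, render after each one.
model = Model()
for msg in ["Increment", "Increment", "Decrement"]:
    model = update(msg, model)
    print(view(model))
```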
-
- Feb 2022
-
eshapard.github.io
-
New Ease = Avg Historical Ease * log(0.85) / log(historical success rate) Where Ease is the ease factor of the card, 0.85 is the desired 85% success rate, and the historical success rate is the ratio of reviews that were answered Hard, Good, or Easy.
N.b. the ratio of reviews for this particular card.
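A minimal sketch of that formula in Python; the function name, the clamping remark, and the example numbers are mine, not from the post.

```python
import math

def new_ease(avg_historical_ease: float, success_rate: float,
             target: float = 0.85) -> float:
    # New Ease = Avg Historical Ease * log(target) / log(historical success rate).
    # success_rate is the fraction of *this card's* reviews answered Hard/Good/Easy.
    # A real implementation must also handle success_rate == 1.0 (log is 0) and 0.0.
    return avg_historical_ease * math.log(target) / math.log(success_rate)

# At 95% success the card is too easy, so its ease factor rises;
# at 70% it is too hard, so the ease factor drops.
print(round(new_ease(2.5, 0.95), 2))  # ≈ 7.92 (an add-on would likely clamp this)
print(round(new_ease(2.5, 0.70), 2))  # ≈ 1.14
```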
-
The result is that a card’s ease factor adjusts to whatever range of ease (or laziness) causes you to mark the card as Good. Unless you’re a very disciplined and consistent person, this range is likely to be pretty wide, and the boundaries are likely to move drastically with your mood.
Marking a card "Good" is a signal to Anki that the ease for this card is correct. What is the range of eases for me? The range could be wide, because of differing difficulty between cards. The issue is that the range for a particular card may fluctuate a lot, according to mood, tiredness, and other factors not related to the card's difficulty. The suggested solution is the log-based formula.
-
With this equation, it won’t matter whether you choose Easy, Good, or Hard. The success rate over time will tell us whether the ease factor is too easy (over 85% success), or too hard (under 85% success). We don’t have to think about it anymore. We can just select between Again and Good; a simple binary choice between getting a card right and getting it wrong.
Should Hard, Good, and Easy in [[Anki]] be bundled together? It would make choosing the right option easier, but what about, e.g., verb conjugations where you miss the 3rd person singular and get the other five right?
-
-
docdrop.org
-
it automatically extracts the important information on the page, from the articles you're reading to educational videos on YouTube, online courses, ebooks and more
Rumin, a closed-source (?) note-taking app, like [[Evernote]], has a web clipper that extracts metadata with the clip. E.g. for a video, we'd get the channel name, publish date, etc. Mildly interesting.
-
- Jan 2022
-
www.reddit.com
-
Now with a true representation of the input, you can build the chord detection logic by checking the notes against major scales. If we assume all chords contain their root we can build the major scale corresponding with the lowest note and quickly check to see which pieces of it are present, or declare this "not a chord" if we have a note that isn't part of the scale. Above, we'd find the C, E, G, D and realize it's a 1/3/5 chord + the 9th note of the scale and call it {base} add 9.
Ideas for figuring out chords.
The GUI for guitar chords: https://www.chordfinder.us/
Another site is https://oolimo.com/guitarchords/find
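A rough sketch of that idea in Python; the note spelling, the chord naming, and the "not a chord" fallback are my own simplifications of the comment, not the Reddit poster's code.

```python
# Build the major scale on the lowest note, then see which scale degrees
# of that scale are present among the played notes.

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of scale degrees 1..7

def detect_chord(notes):
    root = notes[0]                      # assume the lowest note is the root
    root_idx = NOTES.index(root)
    scale = {NOTES[(root_idx + step) % 12]: degree + 1
             for degree, step in enumerate(MAJOR_STEPS)}
    degrees = set()
    for n in notes:
        if n not in scale:
            return "not a chord (in this scheme)"
        degrees.add(scale[n])
    if {1, 3, 5} <= degrees:
        name = root + " major"
        if 2 in degrees:                 # degree 2 an octave up is the 9th
            name += " add9"
        return name
    return f"{root}? scale degrees present: {sorted(degrees)}"

print(detect_chord(["C", "E", "G", "D"]))  # -> C major add9
```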
-
-
www.smbc-comics.com
-
Tags: parenting, sex, science, math
"If you don't talk to your kids about quantum computing... Someone else will."
-
- Dec 2021
-
beepb00p.xyz
-
Have you tried ripgrep-all (https://github.com/phiresky/ripgrep-all)? It's a wrapper around rg for searching inside documents. It's completely omnivorous: it works with pdf/.doc/.docx, sqlite, archives, etc.
Ripgrep-all to search Jupyter notebooks as well? Much of my code is in .ipynb files.
-
The only thing that's left is restricting the search to git repositories only. Ripgrep relies on regexes, so we can't do something like Xpath queries and tell it to only search in directories, that contain .git directory. I ended up using a two step approach: first, my/code-targets returns all git repositories it can reach from my/git-repos-search-root. I'm using fd to go through the disk and collect all candidate git repositories. Even though fd is already ridiculously fast, this step still takes some time, so I'm caching the repositories. Cache is refreshed in the background every five minutes so we don't have to crawl the filesystem every time. That saves me few seconds on every search. then, my/search-code keybindings invokes ripgrep against all my directories with code, defined in my/code-targets function.
I have multiple repos and need to search them all to uncover useful pieces of code.
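The original setup is Emacs Lisp; below is a hypothetical Python sketch of the same two-step idea. The cache path, the TTL, and the exact fd flags are my assumptions and may differ between fd versions.

```python
# 1) Find all git repositories under a root with `fd`, caching the list.
# 2) Run ripgrep restricted to those directories.

import json, subprocess, time
from pathlib import Path

CACHE = Path("/tmp/code-targets.json")
CACHE_TTL = 5 * 60  # refresh the repo list every five minutes

def code_targets(root: str) -> list[str]:
    if CACHE.exists() and time.time() - CACHE.stat().st_mtime < CACHE_TTL:
        return json.loads(CACHE.read_text())
    # `fd` prints every .git directory; its parent is a repository root.
    out = subprocess.run(
        ["fd", "--no-ignore", "--hidden", "--type", "d", "--glob", ".git", root],
        capture_output=True, text=True, check=True).stdout
    repos = sorted({str(Path(p).parent) for p in out.splitlines()})
    CACHE.write_text(json.dumps(repos))
    return repos

def search_code(pattern: str, root: str = str(Path.home())) -> None:
    subprocess.run(["rg", pattern, *code_targets(root)])

# search_code("TODO")
```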
-
-
medium.com
-
I choose ripgrep, as explained in the comparison here it’s the fastest search tool available.
//Was// fastest. New ugrep (v3.3) seems to be faster: https://github.com/Genivia/ugrep
-
-
panzerglass.us
-
The PanzerGlass™ screen protector in black for iPhone X/Xs/11 Pro with antibacterial coating has the same features as the original PanzerGlass™ and will protect your screen from scratches and bumps.
Dimethyloctadecyl ammonium chloride. 3-(trimethoxysilyl)propyl https://pubchem.ncbi.nlm.nih.gov/compound/3-_Trimethoxysilyl_propyl-acrylate
-
-
exobrain.sean.fish
-
This is an ever evolving list of tools and scripts I use and recommend, or combinations of tools I use to optimize my workflow. Most of these are command line based. On a regular day, the only GUI tool I use is my browser.
Going text-only, interesting collection of tools.
-
-
tiddlymap.org
-
The motivation behind TiddlyMap is to combine the strengths of wikis and concept maps in the realms of personal knowledge management in a single application.
Build a mind map from wiki topics using Vis.js.
-
-
support.focusrite.com
-
Instrument Level is the most variable level signal which will travel through a TS (Tip, Sleeve) jack connection and will also require a preamp to be raised to Line Level. "Inst" should be selected whenever you connect an instrument, such as a guitar or bass guitar, directly to your interface.
Choose "instrument" when plugging in a guitar. Why? The impedance and max input level match that of the instrument and thus produce the most natural (correct?) sound.
The input is by default at mic level which is not preamped.
-
-
brandmark.io
-
Word vectors capture the context of their corresponding word. They're often inaccurate for extracting actual semantics of language (for example, you can't use them to find antonyms), but they do work well for identifying an overall tonal direction.
Embeddings for logos: Can an embedding be used to encode some style features about logos?
-
We can match up fonts and icons with similar visual features using their neural embedding, which generally produce more cohesive logos.
Good examples of how wrong it looks if a logo and its font are not in balance.
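A toy sketch of the matching step; the vectors below are made up, whereas in the article they come from neural embeddings of fonts and icons.

```python
# Pick the icon whose embedding is closest to the font's embedding
# by cosine similarity (nearest-neighbour matching in embedding space).

import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

font_embedding = np.array([0.9, 0.1, 0.3])            # e.g. a rounded, friendly font
icon_embeddings = {
    "rounded_blob": np.array([0.8, 0.2, 0.4]),
    "sharp_monogram": np.array([-0.5, 0.9, 0.1]),
}

best = max(icon_embeddings, key=lambda k: cosine(font_embedding, icon_embeddings[k]))
print(best)  # -> "rounded_blob"
```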
-
Logos are essentially abstract illustrations, but clearly not all illustrations make for good logos. In order to create logos systematically we need some notion of what makes a logo good in a visual sense
Good logos are legible (easy to identify) //and// unique (distinctive).
-
-
-
Annotate your photos. Tropy’s beautiful annotation tools allow you to transcribe documents, select image details, and manipulate photographs to get the clearest view of your sources.
Annotations for photos and images. But I guess this is not for pointing out details in an image, but "just" an image viewer with an associated text (and structured data?) field.
-
-
www.youtube.com
-
Hypothesis Animated Intro
Great narration in this intro video. E.g. the change of pace at 0:54.
-
-
beepb00p.xyz
-
This is an easy case, and you can just retrieve all over again every time. Example: Pinboard API, there are just a few megabytes of data you have on Pinboard and API doesn't prevent you from retrieving all of it at once.
Don't slice if you can retrieve a full data dump time after time. The service is the data storage; query from there. Risk: if the service switches from storing all-time data to "latest n years", the early data is lost. Need to re-check each service's API description periodically.
-
Now, remember I mentioned that Reddit only gives you the latest 1000 items, so I end up with overlapping data slices? To get a full view of my data, I'm simply going through individual JSON files in chronological order and merging together in accumulate method. [optional] caching layer I've got Reddit data way into past and export it every day, so merging together all these files during data access can indeed take a noticeable time. I'm overcoming this by using a @cachew annotation on the comments() method.
Really read through every exported data file, parse the contents, and merge according to e.g. timestamp. Sounds like a huge overhead, but maybe it's not, especially when using caching?
Of course, exporting often enough to make sure no data is lost will result in a lot of duplicate exported data on disk. Storage is cheap?
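A hypothetical sketch of the merge-and-deduplicate step in Python; the file layout, field names, and id-based dedup key are my assumptions (the article's actual code uses an accumulate helper plus the cachew library for caching).

```python
# Walk the overlapping daily JSON exports in chronological order (by filename)
# and accumulate items, skipping duplicates that appear in several slices.

import json
from pathlib import Path

def comments(export_dir: str):
    seen = set()
    for path in sorted(Path(export_dir).expanduser().glob("reddit_*.json")):
        for item in json.loads(path.read_text()):
            if item["id"] in seen:
                continue
            seen.add(item["id"])
            yield item

# for c in comments("~/exports/reddit"):
#     print(c["created_utc"], c["body"][:60])
```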
-
- Nov 2021
-
www.cs.princeton.edu
-
Boosting is an approach to machine learning based on the idea of creating a highly accurate prediction rule by combining many relatively weak and inaccurate rules
This definition applies to all ensemble methods, right?
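A small sketch to make the contrast concrete: boosting trains weak learners sequentially, reweighting the examples earlier learners got wrong, whereas bagging-style ensembles train them independently on bootstrap samples. The dataset and parameters below are arbitrary; `estimator=` is the scikit-learn ≥1.2 spelling (older versions use `base_estimator=`).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
stump = DecisionTreeClassifier(max_depth=1)  # a "relatively weak" rule

# Boosting: stumps are fit one after another on reweighted examples.
boosted = AdaBoostClassifier(estimator=stump, n_estimators=100).fit(X, y)
# Bagging: stumps are fit independently on bootstrap samples.
bagged = BaggingClassifier(estimator=stump, n_estimators=100).fit(X, y)

print("boosting:", boosted.score(X, y))
print("bagging :", bagged.score(X, y))
```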
-
- Oct 2021
-
-
Because Anki is a lot more efficient than traditional study methods, you can greatly decrease your time spent studying to learn vocabulary.
Based on what? (citation needed)
-
-
augmentingcognition.com
-
Put another way, in Miller's account the chunk was effectively the basic unit of working memory. And so Simon and his collaborators were studying the basic units used in the working memory of chess players. If those chunks were more complex, then that meant a player's working memory had a higher effective capacity.
Working memory as storing "chunks". The count of chunks is more or less constant, but what constitutes a chunk can vary a lot. A chunk is a concept so familiar that you can reason with it.
-
I said above that I typically spend 10 to 60 minutes Ankifying a paper, with the duration depending on my judgment of the value I'm getting from the paper. However, if I'm learning a great deal, and finding it interesting, I keep reading and Ankifying. Really good resources are worth investing time in.
How to tag and store the "really good sources" for later review? I wouldn't want to have thousands of really good sources because then I wouldn't have any bandwidth for new material. Always pop something out if something goes in?
-
First, if memorizing a fact seems worth 10 minutes of my time in the future, then I do it. [Footnote: I first saw an analysis along these lines in Gwern Branwen's review of spaced repetition: Gwern Branwen, Spaced-Repetition. His numbers are slightly more optimistic than mine – he arrives at a 5-minute rule of thumb, rather than 10 minutes – but broadly consistent. Branwen's analysis is based, in turn, on an analysis in: Piotr Wozniak, Theoretical aspects of spaced repetition in learning.] Second, and superseding the first, if a fact seems striking then into Anki it goes, regardless of whether it seems worth 10 minutes of my future time or not.
Two rules of thumb for "what to Ankify?". It's not straightforward to know (beforehand) what is actually worth memorizing. Are there any good heuristics?
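A back-of-envelope sketch of where such thresholds come from; every number below is an illustrative assumption (review time, interval growth, horizon), not Nielsen's or Branwen's figures, and failed reviews would push the total higher, toward Branwen's ~5-minute estimate.

```python
# Estimate the lifetime review cost of a single card under spaced repetition,
# assuming every review succeeds and intervals double each time.
seconds_per_review = 10       # assumed time to answer one card
interval_days = 1.0           # assumed first interval
growth = 2.0                  # assumed interval multiplier per review
horizon_days = 20 * 365       # keep reviewing the card for ~20 years

total_seconds, day = 0, 0.0
while day + interval_days <= horizon_days:
    day += interval_days
    total_seconds += seconds_per_review
    interval_days *= growth

print(f"{total_seconds / 60:.1f} minutes of review over 20 years")  # ~2 minutes
```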
-