82 Matching Annotations
  1. Sep 2024
    1. Planck's radiation law

      needed to look up the expression $$u_{\nu}(\nu, T) = \frac{8\pi h\nu^{3}}{c^{3}} \, \frac{1}{\exp\left(\frac{h\nu}{k_{\mathrm{B}}T}\right) - 1}$$
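
      To sanity-check it, a quick numerical sketch (constants from scipy; the 650 nm / 2000 K inputs are just illustrative):

        # Spectral energy density of black-body radiation, u_nu(nu, T).
        # Minimal sketch; values chosen only to sanity-check the formula.
        import numpy as np
        from scipy.constants import h, c, k  # Planck constant, speed of light, Boltzmann constant

        def planck_u_nu(nu, T):
            """Energy density per unit frequency, in J*s/m^3."""
            return (8 * np.pi * h * nu**3 / c**3) / np.expm1(h * nu / (k * T))

        nu = c / 650e-9          # frequency corresponding to 650 nm
        print(planck_u_nu(nu, 2000.0))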

    2. Since the volume of the inert gas is much larger than the volume of the sample it provides not only an isobaric environment

      the "Since" did not make sense to me. Had to ask chatgpt to explain it to me.

    3. wire sample (typically 0.5 mm diameter and 40 mm in length) is clamped between two sets of brass jaws and resistively heated in an inert-gas-filled discharge chamber while the heating current, the sample voltage drop, and the surface radiance are recorded

      procedure description

    4. The apparatus itself and the specific details on the data reduction used in these measurements have been extensively described elsewhere [10], [11], wherefore a detailed description is omitted in this paper.

      punts description to another paper. Had to search them to see what this setup was.

    5. At the subsecond thermophysics laboratory in Graz thermophysical properties determinations are performed for many years and vanadium was one of the pure metals which have been investigated recently.

      this could've been a blind review violation

    6. Table 1. Values for the melting temperature Tm of pure vanadium from different sources [2], [3], [4], [5], [6], [7], [8], [9]
       Goodfellow Cambridge Limited (supplier) [2]: 2163 K
       Desai [3]: 2202 K (IPTS 68)
       McClure and Cezairliyan [4]: 2201 K (ITS 90)
       Aesar [5]: 2183 K
       Storms and McNeal [6]: 2161 K
       Oriani and Jones [7]: 2192 K
       Hultgren et al. [8]: 2199 K
       Kocherzhinskii et al. [9]: 2223 K
       Note: The value given by McClure and Cezairliyan [4] is the value from [3] (2202 K) adapted to the ITS 90.

      lit review

    7. Pure vanadium is a bright white metal with good corrosion resistance to, i.e., alkalis and salt water and is therefore commonly used as an additive in producing rust resistance in springs, and highspeed tool steels, since it is an important carbide stabilizer in making steels

      applications

    8. Summarizing, the following results for thermophysical properties at the melting point have been obtained: radiance temperature at melting (650 nm) Tr,m = 1993 K, melting temperature Tm = 2199 K, normal spectral emissivity at melting (684.5 nm) ɛ = 0.353. An observed feature of all measured data and results is, that a much better agreement with literature references exists for the liquid phase than in the solid state, thus we have restricted the presentation to liquid vanadium.

      this is the "money shot"

    9. normal spectral emissivity

      had to ask ChatGPT

      • Normal: The emissivity is measured at an angle perpendicular (90 degrees) to the surface of the material.
      • Spectral: The emissivity is not over the entire spectrum but is specific to a certain wavelength of electromagnetic radiation.
      • Emissivity: This is the ratio of the energy radiated by the material to the energy radiated by an ideal blackbody at the same temperature and wavelength. It indicates how effectively the material emits thermal radiation at a particular wavelength.
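
      A rough sketch of how the paper's three headline numbers hang together, using the Wien approximation for pyrometry (my simplification: I use a single wavelength, although the radiance temperature is reported at 650 nm and ε at 684.5 nm):

        # Rough consistency check: true temperature from radiance temperature
        # and normal spectral emissivity, via the Wien approximation
        #   1/T = 1/T_r + (lambda / c2) * ln(eps),   c2 = h*c/k_B.
        # The numbers are the ones quoted in the paper's summary.
        import math
        from scipy.constants import h, c, k

        c2 = h * c / k            # second radiation constant, ~1.4388e-2 m*K
        lam = 650e-9              # pyrometer wavelength
        T_r = 1993.0              # radiance temperature at melting, K
        eps = 0.353               # normal spectral emissivity at melting

        T = 1.0 / (1.0 / T_r + (lam / c2) * math.log(eps))
        print(round(T))           # ~2199 K, matching the reported melting temperature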

    10. The aim was to obtain another full dataset of properties (enthalpy, heat of fusion, electrical resistivity, thermal conductivity, emissivity) of liquid vanadium to either confirm existing recommendations for certain properties or presenting newer measurements for comparison leading towards such recommendations.

      goal of the paper

    11. This recent work presents the results of thermophysical measurements on vanadium including normal spectral emissivity at 684.5 nm

      what the work is about

    12. Although vanadium is commonly used as an additive in the steel production, literature data for thermophysical properties of vanadium around the melting point are sparse and show, where available a variation over a wide range. This manifests especially in the melting temperature (variation of ±30 K), heat of fusion, or specific enthalpy.

      establishes need

  2. Jun 2024
    1. What does Microsoft Word look like with a Photoshop-like palette on the side?

      This is today's tech. It's not in product form yet, but some startups are trying to build writer tools based on this.

      A startup called me to talk about a potential "consult" regarding this, sucked up a lot of ideas and hard-earned experience, and then never got back to me or paid up. So fuck them!

      That sounded bitter, and I really am. Can you blame me? I want to support founders, and I take these calls in good faith and end up with this "brain rape". I can't wait to dance on their grave.

    2. Text is becoming something new,

      text is not becoming new, but everything around text is.

      The middlemen of text production, consumption, and distribution are either getting eliminated or replaced with AI models.

      The economic implication of this is obvious and yet profound.

      A larger truth is text has descended from the realm of the sacred to that of the profane. Now, try to map what that means to everything we do with text :)

    3. Could you dynamically change the register or tone of text depending on audience, or the reading age, or dial up the formality or subjective examples or mentions of wildlife, depending on the psychological fingerprint of the reader or listener?

      as I mentioned earlier, these don't need to be rhetorical questions.

    4. to take a chapter of a book and edit it, not by changing words, but by scrubbing it with the semantic equivalent of the burn and dodge tools in Photoshop.

      We can now do style-conditioned generation, style transfer, and style editing for text. I did this for a DARPA project. To make it cooler, we can do it even on short texts like tweets, personalized to your writing style. One day, I will explain how we can fight state-sponsored propaganda using that.
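
      If you just want the plain-vanilla version of this today (not the DARPA work, simply what any instruction-tuned model gives you), here is a minimal sketch; the model name and the prompt wording are my assumptions, not a recipe:

        # Minimal style-transfer sketch with an instruction-tuned model.
        # Model name and prompt wording are illustrative assumptions.
        from openai import OpenAI

        client = OpenAI()  # expects OPENAI_API_KEY in the environment

        def restyle(text: str, target_style: str) -> str:
            """Rewrite `text` in `target_style` while preserving its meaning."""
            resp = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[
                    {"role": "system",
                     "content": f"Rewrite the user's text in this style: {target_style}. "
                                "Keep the meaning; change only the style."},
                    {"role": "user", "content": text},
                ],
            )
            return resp.choices[0].message.content

        print(restyle("The meeting is moved to Friday.", "terse, formal legal notice"))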

    5. Again I don’t know what that means, to have associations and contextualisations always present with a text, a structuralist’s dream, but… it’s different.

      whatever brain wiring I am suffering from, I have this going on all the time. I cannot read anything without associations and contexts popping up in my head. Many times I wish I could turn them off and just read the damn words :)

    6. All text will be auto-glossed

      gloss over (v.): "try to conceal or disguise something unfavorable by treating it briefly or representing it misleadingly."

      this is my big worry. For convenience we build tools that gloss and hence gloss over.

    7. nce, for me, is that two thresholds have been crossed: speed and automation.

      true .. this means more self-service tools. See the shovel cartoon I sent earlier.

    8. Sean Graves at the Autonomy Institute has developed a tool called GERM. We used GERM to build a dataset of risks mentioned by the 266,989 UK companies who filed their accounts throughout March 2024. – Sean Graves (Autonomy Data Unit), GERM (Geopolitical & Environmental Risk Monitor) (2024)

      this looks cool.

    9. What if the difference between statements that are simply speculative and statement that mislead are as obvious as, I don’t know, the difference between a photo and a hand-drawn sketch?

      Instead of answering a rhetorical question, I will let you imagine how people read memes. They know it is a meme, yet they will believe it and engage with it instead of looking up the real stuff behind the meme. The real question here is whether people will ignore the difference (my hypothesis is yes) and whether knowing the difference matters (my bet is no).

      Further, you will have people questioning your photo/sketch classifier.

    10. What would it mean to listen to a politician speak on TV, and in real-time see a rhetorical manoeuvre that masks a persuasive bait and switch?

      This can be done in real time today. We tried versions of real-time fact-checking back in the day, and I am convinced people will believe what they want to believe.

    11. For example, here’s a sequence of episode titles that transitions smoothly from geology to history: Vulcanology → 1816, the Year Without a Summer → Climate Change → Meteorology → Voyages of James Cook → Astronomy and Empire → Longitude …and so on.

      the smooth transition is not a property of the method but of the careful sequencing of episode topics that BBC content creators managed. I would also bet that if you scroll past this hand-picked list, the transitions stop looking so smooth.

      again this "trick" has been around for a few decades, but we now have tooling for everyone to play with it, which is fun!

      note: both t-SNE and PCA are dimensionality reduction techniques, but with different objectives (t-SNE tries to preserve local neighborhoods; PCA keeps the directions of maximum variance).
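
      A small sketch to see the two projections side by side, using the episode titles quoted above (the embedding model choice is mine, not the blog's):

        # Project episode-title embeddings to 2D with PCA and with t-SNE.
        # Titles are the quoted sequence; the model choice is illustrative.
        from sentence_transformers import SentenceTransformer
        from sklearn.decomposition import PCA
        from sklearn.manifold import TSNE

        titles = ["Vulcanology", "1816, the Year Without a Summer", "Climate Change",
                  "Meteorology", "Voyages of James Cook", "Astronomy and Empire",
                  "Longitude"]

        model = SentenceTransformer("all-MiniLM-L6-v2")
        emb = model.encode(titles)                                  # shape (7, 384)

        xy_pca = PCA(n_components=2).fit_transform(emb)             # max-variance axes
        xy_tsne = TSNE(n_components=2, perplexity=3).fit_transform(emb)  # local neighborhoods

        for title, p, s in zip(titles, xy_pca, xy_tsne):
            print(f"{title:35s} PCA={p.round(2)} t-SNE={s.round(2)}")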

    12. , I feel like I’m peering into the world’s first microscope and spying bacteria, or through a blurry, early telescope, and spotting invisible dots that turn out to be the previously unknown moons of Jupiter…

      okay .. nice analogy

    13. can individual authors be fingerprinted by how they construct text?

      I even wrote a paper about it, which does a lot more than fingerprinting :)

    14. January on the PartyKit blog: Using Vectorize to build an unreasonably good search engine in 160 lines of code (2024). (That post has diagrams and source code.) But what I want to emphasise is how little code there is.

      This is a booby trap. Why? It's the sort of thing that makes people on HN post, "I can build this in a weekend". But when you build an actual search engine, you realize how messy everything is, especially when you build something for more than one person.

    15. Search for main roman god – this is also Jupiter, but a different one: this Jupiter is the king of the gods in the Roman pantheon. The top result is an episode about Rome and European civilisation, not the episode about the planet Jupiter, showing that embeddings can distinguish concepts even when similarly named.

      I feel that when non-technical folks are empowered enough to experience building with tech (because it is simple to do now), they marvel at things traditional devs/researchers have taken for granted. That's what's happening here.

    16. Search for the biggest planet – again, the episode about the planet Jupiter is at the top. There is no synonyms database here, which is how a traditional search engine would work. The phrase the biggest planet has been translated into its “coordinates” and the search engine has looked for “nearby” coordinates representing episodes.

      hmm .. not a new capability .. this existed prior to modern DL, and I am not sure what changes it can bring that devs / data scientists haven't already tried.
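
      For reference, the "coordinates and nearby coordinates" description is just nearest-neighbour search over embeddings; a minimal sketch (the episode snippets and model name are placeholders):

        # Semantic search = embed the query, rank documents by cosine similarity.
        import numpy as np
        from sentence_transformers import SentenceTransformer

        episodes = [
            "Jupiter, the largest planet in the Solar System",
            "Jupiter, king of the gods in the Roman pantheon",
            "The Voyages of James Cook",
        ]

        model = SentenceTransformer("all-MiniLM-L6-v2")
        doc_emb = model.encode(episodes, normalize_embeddings=True)

        def search(query: str):
            q = model.encode([query], normalize_embeddings=True)[0]
            scores = doc_emb @ q              # cosine similarity (unit-length vectors)
            return sorted(zip(scores, episodes), reverse=True)

        print(search("the biggest planet")[0])   # planet episode should rank first
        print(search("main roman god")[0])       # mythology episode should rank first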

  3. Jan 2024
    1. Function calling Some API providers already provide function calling APIs, but these are extremely ad-hoc, and require manual type inference to extract a e.g OpenAPI compatible schema. Similar to Vercel's automatic deployment of lambda functions specified in a api/ directory, a compelling feature here would be automatically making registered functions available to LLM calls. An additional problem to solve here is function search - which functions should you expose based on the prompt/query?

      this is LLM-specific
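
      A sketch of what "automatically making registered functions available" could look like, deriving a minimal JSON-schema-style description from type hints (everything here is hypothetical, not any provider's actual API):

        # Hypothetical sketch: register plain Python functions and derive a
        # minimal JSON-schema-style description from their type hints, the
        # kind of thing a framework could hand to an LLM function-calling API.
        import inspect
        from typing import get_type_hints

        _PY_TO_JSON = {int: "integer", float: "number", str: "string", bool: "boolean"}
        REGISTRY = {}

        def register(fn):
            hints = get_type_hints(fn)
            params = {
                name: {"type": _PY_TO_JSON.get(hints.get(name, str), "string")}
                for name in inspect.signature(fn).parameters
            }
            REGISTRY[fn.__name__] = {
                "description": (fn.__doc__ or "").strip(),
                "parameters": {"type": "object", "properties": params},
            }
            return fn

        @register
        def get_weather(city: str, celsius: bool) -> str:
            """Return the current weather for a city."""
            return f"Weather in {city} (celsius={celsius})"

        print(REGISTRY["get_weather"])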

    2. A Watermark for Large Language Models Undetectable Watermarks for Language Models On the Reliability of Watermarks for Large Language Models

      all of these are broken methods. I don't think this can be done reliably in a scalable way.

    3. This involves a more substantial investment (time and code) than calling an API, which gives Mistral an edge in retaining users,

      Is this true, though? HF is making transformer access a standard. Plus, the popularity of Claude and GPT suggests people cannot be bothered to install stuff.
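
      For what it's worth, with HF tooling the "more substantial investment" can shrink to a couple of lines (a sketch; the model id is illustrative, and a 7B model still needs a decent GPU or a lot of patience on CPU):

        # Local inference via Hugging Face transformers; treat the model id
        # as illustrative, not a recommendation or a benchmark.
        from transformers import pipeline

        generate = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")
        print(generate("Explain commoditization in one sentence.",
                       max_new_tokens=60)[0]["generated_text"])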

    4. , as well as filtering low value users, as they will likely take the easier route of API usage. Whether they can find services or new models which those users would pay for is another question.

      This is not clearly written; I cannot understand what you are saying.

    5. This

      The last three paras begin with "This". Avoid that. Coreference resolution is hard for humans. Every new para suggests a switch, and forcing the reader to resolve coreferents so frequently makes reading hard.

    6. LLM serving is completely commoditized

      the "Because" at the beginning of the sentence doesn't make sense as serving commoditized by open source efforts.

    7. Because LLM serving costs are dominated by GPU access currently,

      "LLM" is not a single-category product. A whole lot of modern LLMs are CPU-friendly.

    8. The Langage Model Serving Company

      The title does not say what the article is about. Something like:

      Traditional cloud providers will not be LLM-serving companies.

  4. Jan 2017
    1. A teenager learns to drive with tens or hundreds of miles of practice.

      A 5-year-old can master a video game in a matter of minutes. A (deep) reinforcement learning system, even on fast GPU clusters, will take a few days.

    2. Between July and December 2015, object detection in a set of difficult-to-recognize images (the KITTI vision benchmark) improved from 39% to 87% success.

      Someone needs to seriously study the effect of models overfitting to these benchmarks (based on the reported literature). That study can only happen with a new dataset.

    3. An approach called “deep learning” has been particularly important to the changes of the past five years.

      Important to note: all the reported examples of dramatic improvements are from computer vision.

    4. What we will argue here is that AI — in its modern incarnation — is all about advances in prediction

      Exactly. Let's leave "intelligence", "cognition", and "understanding" outside the door while operating under traditional gradient-pushing AI.

    5. what is this reducing the cost of?

      This is a whetstone question for examining a product thesis, and I have had several discussions of this sort. Any AI solution that merely replaces a minimum-wage worker will have a hard time proving itself in the market. For instance, a Deep Learning based face recognition system might be, at times, better than humans, but it will never get easy adoption in, say, door access control systems. The cost of compensating a security guard is far less than the installation cost plus monthly subscription fee. Plus, a security guard can do things that were never part of his job description (model), like calling 911 in an emergency.

  5. Jul 2016
  6. arxiv.org
    1. Although deep neural networks have in theory much higher representational power than shallow models, it is not clear if simple text classification problems such as sentiment analysis are the right ones to evaluate them.

      ??

    2. Unlike unsupervisedly trained word vectors from word2vec, our word features can be averaged together to form good sentence representations.

      What is "good" here? Something that leads to more accuracy?

    3. We show that by incorporating additional statistics such as using bag of n-grams, we reduce the gap in accuracy between linear and deep models, while being many orders of magnitude faster.

      Another claim in the paper. But is this really new? At this point all of this is common knowledge.

    4. At the same time, simple linear models have also shown impressive performance while being very computationally efficient

      Is this the best citation for the topic? For a main claim, shouldn't they be diligent about citing the most important works in the area?