FWHM
undefined acronym
Planck's radiation law
needed to look up the expression $$u_{\nu}(\nu, T) = \frac{8\pi h \nu^{3}}{c^{3}} \, \frac{1}{\exp\left(\frac{h\nu}{k_{\mathrm{B}} T}\right) - 1}$$
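Not from the paper, but a quick numerical sanity check made the formula click for me; a minimal Python sketch evaluating the spectral energy density (the 650 nm / 2199 K inputs are just the paper's pyrometer wavelength and melting temperature reused as example values):

```python
import math

# Physical constants in SI units
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
k_B = 1.380649e-23    # Boltzmann constant, J/K

def planck_u_nu(nu, T):
    """Spectral energy density u_nu(nu, T) of blackbody radiation."""
    return (8 * math.pi * h * nu**3 / c**3) / math.expm1(h * nu / (k_B * T))

# Example: frequency corresponding to 650 nm at the reported melting temperature of vanadium.
nu = c / 650e-9
print(planck_u_nu(nu, 2199))
```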
Since the volume of the inert gas is much larger than the volume of the sample it provides not only an isobaric environment
the "Since" did not make sense to me. Had to ask chatgpt to explain it to me.
wire sample (typically 0.5 mm diameter and 40 mm in length) is clamped between two sets of brass jaws and resistively heated in an inert-gas-filled discharge chamber while the heating current, the sample voltage drop, and the surface radiance are recorded
procedure description
The apparatus itself and the specific details on the data reduction used in these measurements have been extensively described elsewhere [10], [11], wherefore a detailed description is omitted in this paper.
punts description to another paper. Had to search them to see what this setup was.
the use of an ellipsometric approach
what is this? hail chatgpt
containerless conditions
what are they, and why are they necessary?
ohmic pulse-heating apparatus
what is this? needed google search
isobaric heat capacity
needed chatgpt help
At the subsecond thermophysics laboratory in Graz thermophysical properties determinations are performed for many years and vanadium was one of the pure metals which have been investigated recently.
this could've been a blind review violation
Table 1. Values for the melting temperature Tm of pure vanadium from different sources [2], [3], [4], [5], [6], [7], [8], [9]:
Goodfellow Cambridge Limited (supplier) [2]: 2163 K
Desai [3]: 2202 K (IPTS 68)
McClure and Cezairliyan [4]: 2201 K (ITS 90)
Aesar [5]: 2183 K
Storms and McNeal [6]: 2161 K
Oriani and Jones [7]: 2192 K
Hultgren et al. [8]: 2199 K
Kocherzhinskii et al. [9]: 2223 K
Note: The value given by McClure and Cezairliyan [4] is the value from [3] (2202 K) adapted to the ITS 90.
lit review
Pure vanadium is a bright white metal with good corrosion resistance to, i.e., alkalis and salt water and is therefore commonly used as an additive in producing rust resistance in springs, and highspeed tool steels, since it is an important carbide stabilizer in making steels
applications
Summarizing, the following results for thermophysical properties at the melting point have been obtained: radiance temperature at melting (650 nm) Tr,m = 1993 K, melting temperature Tm = 2199 K, normal spectral emissivity at melting (684.5 nm) ɛ = 0.353. An observed feature of all measured data and results is, that a much better agreement with literature references exists for the liquid phase than in the solid state, thus we have restricted the presentation to liquid vanadium.
this is the "money shot"
normal spectral emissivity
had to ask ChatGPT
• Normal: the emissivity is measured at an angle perpendicular (90 degrees) to the surface of the material.
• Spectral: the emissivity is not over the entire spectrum but is specific to a certain wavelength of electromagnetic radiation.
• Emissivity: the ratio of the energy radiated by the material to the energy radiated by an ideal blackbody at the same temperature and wavelength. It indicates how effectively the material emits thermal radiation at a particular wavelength.
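What made it click for me is how emissivity ties the radiance temperature to the true temperature. A minimal sketch using the Wien approximation of Planck's law, plugging in the paper's melting-point values (note the emissivity is quoted at 684.5 nm while the pyrometer works at 650 nm, so this is only an approximate consistency check):

```python
import math

c2 = 1.4388e-2   # second radiation constant h*c/k_B, m*K
lam = 650e-9     # pyrometer wavelength, m
T_m = 2199.0     # melting temperature from the paper, K
eps = 0.353      # normal spectral emissivity at melting (quoted at 684.5 nm)

# Graybody radiance equals blackbody radiance at the radiance temperature T_r.
# In the Wien approximation this gives: 1/T_r = 1/T - (lam / c2) * ln(eps)
T_r = 1.0 / (1.0 / T_m - (lam / c2) * math.log(eps))
print(round(T_r))  # ~1993 K, close to the reported radiance temperature at melting
```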
The aim was to obtain another full dataset of properties (enthalpy, heat of fusion, electrical resistivity, thermal conductivity, emissivity) of liquid vanadium to either confirm existing recommendations for certain properties or presenting newer measurements for comparison leading towards such recommendations.
goal of the paper
This recent work presents the results of thermophysical measurements on vanadium including normal spectral emissivity at 684.5 nm
what the work is about
Although vanadium is commonly used as an additive in the steel production, literature data for thermophysical properties of vanadium around the melting point are sparse and show, where available a variation over a wide range. This manifests especially in the melting temperature (variation of ±30 K), heat of fusion, or specific enthalpy.
establishes need
What does Microsoft Word look like with a Photoshop-like palette on the side?
This is today's tech. It's not in product form yet, but some startups are trying to build writer tools based on this.
A startup called me to talk about a potential "consult" regarding this, sucked up a lot of ideas and hard-earned experience, and then never got back to me or paid up. So fuck them!
That sounded bitter, and I really am. Can you blame me? I want to support founders, and I take these calls in good faith and end up with this "brain rape". I can't wait to dance on their grave.
Text is becoming something new,
text is not becoming new, but everything around text is.
The middlemen of text production, consumption, and distribution are either getting eliminated or replaced with AI models.
The economic implication of this is obvious and yet profound.
A larger truth is text has descended from the realm of the sacred to that of the profane. Now, try to map what that means to everything we do with text :)
So the camera doesn’t just observe and record, it changes us.
all measurements change the thing being measured when examined deeply
Could you dynamically change the register or tone of text depending on audience, or the reading age, or dial up the formality or subjective examples or mentions of wildlife, depending on the psychological fingerprint of the reader or listener?
as I mentioned earlier, these don't need to be rhetorical questions.
to take a chapter of a book and edit it, not by changing words, but by scrubbing it with the semantic equivalent of the burn and dodge tools in Photoshop.
We can now do style-conditioned generation, style transfer, and style editing for text. I did this for a DARPA project. To make it cool, we can do it even on short texts like tweets, personalized to your writing style. One day, I will explain how we can fight state-sponsored propaganda using that.
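To make that concrete, here is a minimal prompt-based style-transfer sketch. This is not the system I built, and the model name and prompt are illustrative assumptions; it is just the simplest way to play with the idea today:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def restyle(text: str, target_style: str) -> str:
    """Rewrite `text` in `target_style` while preserving its meaning."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice of model
        messages=[
            {"role": "system",
             "content": "Rewrite the user's text in the requested style. "
                        "Preserve the meaning; change only tone and register."},
            {"role": "user", "content": f"Style: {target_style}\n\nText: {text}"},
        ],
    )
    return resp.choices[0].message.content

print(restyle("The quarterly numbers were bad.", "upbeat press release"))
```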
But this is super early technology. vec2text will improve.
This part is hope.
Again I don’t know what that means, to have associations and contextualisations always present with a text, a structuralist’s dream, but… it’s different.
whatever brain wiring I am suffering with, I have this going all the time. I cannot read anything without associations and contexts popping up in my head. Many times I wish I could turn them off and just read the damn words :)
All text will be auto-glossed
gloss over (v.): "try to conceal or disguise something unfavorable by treating it briefly or representing it misleadingly."
this is my big worry. For convenience we build tools that gloss and hence gloss over.
The difference, for me, is that two thresholds have been crossed: speed and automation.
true .. this means more self-service tools. See the shovel cartoon I sent earlier.
Sean Graves at the Autonomy Institute has developed a tool called GERM. We used GERM to build a dataset of risks mentioned by the 266,989 UK companies who filed their accounts throughout March 2024. – Sean Graves (Autonomy Data Unit), GERM (Geopolitical & Environmental Risk Monitor) (2024)
this looks cool.
What if the difference between statements that are simply speculative and statement that mislead are as obvious as, I don’t know, the difference between a photo and a hand-drawn sketch?
Instead of answering a rhetorical question, I will let you imagine how people read memes. They know it is a meme, yet they will believe it and engage with it instead of looking up the real stuff behind the meme. The real question here is whether people will ignore the difference (my hypothesis is yes) and whether knowing the difference matters (my bet is no).
Further, you will have people questioning your photo/sketch classifier.
What would it mean to listen to a politician speak on TV, and in real-time see a rhetorical manoeuvre that masks a persuasive bait and switch?
This can be done in real time today. We tried versions of real-time fact-checking back in the day, and I am convinced people will believe what they want to believe.
hermeneutics
BTW, did you read Eliade?
For example, here's a sequence of episode titles that transitions smoothly from geology to history: Vulcanology; 1816, the Year Without a Summer; Climate Change; Meteorology; Voyages of James Cook; Astronomy and Empire; Longitude; …and so on.
the smooth transition is not a property of the method but of the careful sequencing of episode topics that the BBC content creators managed. I would also bet that if you scroll past the list
again this "trick" has been around for a few decades, but we now have tooling for everyone to play with it, which is fun!
note: both t-SNE and PCA are dimensionality reduction techniques, but with different objectives.
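If you want to feel the difference, a minimal sketch on stand-in embeddings (random vectors here; swap in real ones): PCA is a linear projection that preserves global variance, while t-SNE is a nonlinear map that preserves local neighborhoods and makes global distances essentially meaningless.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# Stand-in for real episode embeddings: 500 points in 384 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 384))

# PCA: linear projection onto directions of maximum variance.
xy_pca = PCA(n_components=2).fit_transform(X)

# t-SNE: nonlinear embedding tuned to preserve local neighborhoods;
# distances between far-apart clusters in the output carry little meaning.
xy_tsne = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(X)

print(xy_pca.shape, xy_tsne.shape)  # (500, 2) (500, 2)
```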
So I tried it, and yes you can.
haha .. you can, but should you, and does that make sense?
There is something there! New information to be interpreted!
I am doubtful, but hey, I am also not smart :)
I feel like I'm peering into the world's first microscope and spying bacteria, or through a blurry, early telescope, and spotting invisible dots that turn out to be the previously unknown moons of Jupiter…
okay .. nice analogy
can individual authors be fingerprinted by how they construct text?
I even wrote a paper about it, which does a lot more than fingerprinting :)
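The classic, decades-old version of that fingerprinting is character n-gram stylometry; a minimal sketch on toy texts (not the method from my paper):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpora; in practice you would use many documents per author.
author_a = ["I reckon the weather will turn, it always does this time of year."]
author_b = ["The results, as shown in Table 2, are statistically significant."]
unknown  = ["The findings, summarized in Table 3, are statistically robust."]

# Character n-grams capture habits (punctuation, function words, morphology)
# that authors rarely suppress consciously.
vec = TfidfVectorizer(analyzer="char", ngram_range=(3, 4))
X = vec.fit_transform(author_a + author_b + unknown)

print(cosine_similarity(X[2], X[:2]))  # the unknown text sits closer to author_b's profile
```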
embeddings also change our relationship with text, and what we can do with text
don't forget "what we cannot do with text"
January on the PartyKit blog: Using Vectorize to build an unreasonably good search engine in 160 lines of code (2024). (That post has diagrams and source code.) But what I want to emphasise is how little code there is.
This is a booby trap. Why? It's the sort of thing that makes people on HN post, "I can build this in a weekend". But when you build an actual search engine, you realize how messy everything is, especially when you build something for more than one person.
Search for main roman god – this is also Jupiter, but a different one: this Jupiter is the king of the gods in the Roman pantheon. The top result is an episode about Rome and European civilisation, not the episode about the planet Jupiter, showing that embeddings can distinguish concepts even when similarly named.
I feel that when non-technical folks are empowered enough to experience building with tech (because it is simple to do now), they marvel at things traditional devs/researchers have taken for granted. That's what's happening here.
Search for the biggest planet – again, the episode about the planet Jupiter is at the top. There is no synonyms database here, which is how a traditional search engine would work. The phrase the biggest planet has been translated into its “coordinates” and the search engine has looked for “nearby” coordinates representing episodes.
hmm .. not a new capability .. this existed prior to modern DL, and I am not sure what changes it can bring that devs / data scientists haven't already tried.
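For reference, the core of such an embedding search really is tiny; a minimal sketch with sentence-transformers (the model name is a common lightweight default, not necessarily what the post used):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

episodes = [
    "Jupiter: the largest planet in the solar system and its many moons.",
    "Rome and European civilisation: Jupiter, king of the Roman gods.",
    "The voyages of Captain James Cook.",
]
doc_vecs = model.encode(episodes, normalize_embeddings=True)

def search(query: str, k: int = 2):
    q = model.encode([query], normalize_embeddings=True)
    scores = (doc_vecs @ q.T).ravel()   # cosine similarity, since vectors are normalized
    order = np.argsort(-scores)[:k]
    return [(episodes[i], float(scores[i])) for i in order]

print(search("the biggest planet"))   # planet episode ranks first, no synonym list needed
print(search("main roman god"))       # Rome episode ranks first
```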
Function calling Some API providers already provide function calling APIs, but these are extremely ad-hoc, and require manual type inference to extract a e.g OpenAPI compatible schema. Similar to Vercel's automatic deployment of lambda functions specified in a api/ directory, a compelling feature here would be automatically making registered functions available to LLM calls. An additional problem to solve here is function search - which functions should you expose based on the prompt/query?
this is llm specific
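The "automatic registration" idea is easy to sketch: derive a tool schema from a typed function instead of writing it by hand. A minimal Python sketch (the schema shape follows the common JSON-Schema-style tool format; the example function and names are made up):

```python
import inspect
import typing

# Map Python annotations to JSON-Schema-ish types.
PY_TO_JSON = {int: "integer", float: "number", str: "string", bool: "boolean"}

def tool_schema(fn) -> dict:
    """Build a function-calling schema from a typed Python function."""
    sig = inspect.signature(fn)
    hints = typing.get_type_hints(fn)
    props = {name: {"type": PY_TO_JSON.get(hints.get(name, str), "string")}
             for name in sig.parameters}
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {"type": "object", "properties": props,
                       "required": list(sig.parameters)},
    }

def get_weather(city: str, celsius: bool) -> str:
    """Return the current weather for a city."""
    ...

print(tool_schema(get_weather))
```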
A Watermark for Large Language Models Undetectable Watermarks for Language Models On the Reliability of Watermarks for Large Language Models
all of these are broken methods. I don't think this can be done reliably in a scalable way.
tools which produce GBNF grammars from TypeScript interfaces,
this is cool
"wedge",
Don't just link. Explain in a summary & link.
these companies typically focus on APIs and deployment rather than developer experience.
really? examples?
This involves a more substantial investment (time and code) than calling an API, which gives Mistral an edge in retaining users,
is this true though? HF is making transformer access a standard. Plus, the popularity of Claude and GPT suggests people cannot be bothered to install stuff.
, as well as filtering low value users, as they will likely take the easier route of API usage. Whether they can find services or new models which those users would pay for is another question.
this is not clearly written. I cannot understand what you are saying.
run them themselves
ambiguous "them". Also explain why this is the case.
will proliferate,
are already proliferating
This
The last three paras begin with "This". Avoid that. Coreference resolution is hard for humans. Every new para suggests a switch. Forcing the user to resolve coreferents frequently makes reading hard.
LLMs have the simplest IO of any new technology in history, and they are also (currently) completely stateless.
I explain this in detail here: https://deliprao.substack.com/p/how-to-understand-the-post-llm-world
users discover alternatives via API aggregators, and then switch to the cheapest/best one.
with model routing services this switching doesn't even need to be manual, e.g., https://withmartian.com/
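Because the interface is just prompt in, text out, the router itself is almost trivial; a minimal sketch with made-up prices and stub providers (all names and numbers are illustrative):

```python
from typing import Callable, Dict

# Hypothetical cost table, USD per 1M tokens; real prices change constantly.
PRICE: Dict[str, float] = {"provider_a": 0.50, "provider_b": 0.30, "provider_c": 0.80}

# Every provider exposes the same trivial interface: prompt in, text out.
PROVIDERS: Dict[str, Callable[[str], str]] = {
    name: (lambda prompt, n=name: f"[{n}] response to: {prompt}") for name in PRICE
}

def route(prompt: str) -> str:
    """Pick the cheapest provider; a real router would also weigh quality and latency."""
    cheapest = min(PRICE, key=PRICE.get)
    return PROVIDERS[cheapest](prompt)

print(route("Summarize this contract."))  # served by provider_b
```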
wheras
spelling: "whereas"
LLM serving is completely commoditized
the "Because" at the beginning of the sentence doesn't make sense as serving commoditized by open source efforts.
Because LLM serving costs are dominated by GPU access currently,
"LLM" is not a single-category product. A whole lot of modern LLMs are CPU-friendly.
use
use or cost?
A comparison with Vercel
Vercel comes out of nowhere
Why will the big language model serving company not be a cloud provider?
Open with this as a statement instead of a question.
The cloud providers still get their piece of the pie.
consider rewriting this entire section for clarity.
Is LLM serving any different?
different from what? "How is LLM serving different from other cloud use cases?"
this
resolution of "this" here is ambiguous. avoid anaphora in the beginning of passages
SSR
SSR (server-side rendering) not defined/explained.
In particular,
Curiously,
value-add
value-added
and then makes
making
that specialize
specializing in
If
Why italics?
The Language Model Serving Company
The title does not say what the article is about. Something like:
Traditional cloud providers will not be LLM-serving companies.
Box 1: Human Intelligence and Prediction
Summary: Intelligence != Prediction
A teenager learns to drive with tens or hundreds of miles of practice.
A 5-year-old can master a video game in a matter of minutes. A (deep) reinforcement learning system, even on fast GPU clusters, will take a few days.
computer scientists flatly reject his emphasis on the cortex as a model for prediction machines
and neuroscientists too.
Between July and December 2015, object detection in a set of difficult-to-recognize images (the KITTI vision benchmark) improved from 39% to 87% success.
Someone needs to seriously study the effect of model overfitting (based on literature). That study can only happen with a new dataset.
An approach called “deep learning” has been particularly important to the changes of the past five years.
Important to note: all the reported examples of dramatic improvements are from computer vision.
humans can do them so easily.
even more amazing: the human brain takes only 20W of power (pretty much a light bulb).
“machine learning.
"Machine Learning" was first coined in 1959 by Arthur Samuel. He defined it as "a field of study that gives computer the ability without being explicitly programmed." http://acityofpearls.tumblr.com/post/57427420436/machine-learning-what-and-why
What we will argue here is that AI — in its modern incarnation — is all about advances in prediction
Exactly. Let's leave "intelligence", "cognition", "understanding", outside the door while operating under traditional gradient-pushing AI.
what is this reducing the cost of?
This is a whetstone question for examining a product thesis. I have had several discussions of this sort. Any AI solution that is merely replacing a minimum-wage worker will have a hard time in the market. For instance, a deep-learning-based face recognition system might, at times, be better than humans, but it will never get easy adoption in, say, door access control systems. The cost of compensating a security guard is far less than the installation cost plus the monthly subscription fee. Plus, a security guard can do things that were not part of the job description (the model), like calling 911 in an emergency.
Although deep neural networks have in theory much higher representational power than shallow models, it is not clear if simple text classification problems such as sentiment analysis are the right ones to evaluate them.
??
Unlike unsupervisedly trained word vectors from word2vec, our word features can be averaged together to form good sentence representations.
What is "good" here? Something that leads to more accuracy?
Our model is trained asynchronously on multiple CPUs.
bah
different tasks
two different classification tasks
We show that by incorporating additional statistics such as using bag of n-grams, we reduce the gap in accuracy between linear and deep models, while being many orders of magnitude faster.
Another claim in the paper. But is this really new? At this point all of this is common knowledge.
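For context, the linear baseline the paper is defending looks roughly like this in scikit-learn (bag of word n-grams feeding a linear classifier); a sketch of the idea on toy data, not fastText itself:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy data; the paper's experiments use large sentiment and tag-prediction datasets.
texts = ["great movie, loved it", "terrible plot and worse acting",
         "an absolute delight", "boring and way too long"]
labels = [1, 0, 1, 0]

# Unigrams + bigrams ("bag of n-grams") into a linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
print(clf.predict(["loved the acting", "way too boring"]))
```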
At the same time, simple linear models have also shown impressive performance while being very computationally efficient
Is this the best citation for the topic? For a main claim, shouldn't they be diligent about citing the most important works in the area?