- Oct 2024
-
x.com x.com
-
The similarity is because they are all saying roughly the same thing: Total (result) = Kinetic (cost) + Potential (benefit). Cost is either imaginary-squared or negative (space-like), benefit is real (time-like), and the result is mass-like. Just as in physics, the economically unfavourable models are the negative results. In economics, diversity of products is a strength because it allows better recovery from the failure of any one; comically, DEI of people fails miserably at this, because all people are not equal.
Here are some other examples you will know if you do physics:
E² + (ipc)² = (mc²)² (the relativistic energy–momentum relation): mass is the result, energy is the time-like part (potential), momentum the space-like part (kinetic).
(∇² − (1/c²) ∂²/∂t²)ψ = (mc/ℏ)²ψ (the Klein–Gordon equation): mass is the result, ∂²/∂t² is the potential, ∇² is the kinetic part.
Finally we have the Dirac equation, which unlike the previous two "sums of squares" is more like vector addition (first-order differentials, not second-order): iℏγ⁰∂₀ψ + iℏγⁱ∂ᵢψ = mcψ. The first part is still the time-like potential, the second part the space-like kinetic, and the mass is still the result, all the same.
This is because energy in all its forms, on a flat worksheet (free from outside influence), acts just like a triangle between potential, kinetic and resultant energies. It is always of the form k² + p² = r², and quite often kinetic is imaginary relative to potential — the (+,−,−,−) spacetime metric, quaternion mathematics. So r² can be negative, or the result imaginary, if costs outweigh benefits, or work in is greater than work out: a useless but still mathematical solution. Just as in physics, you always want the mass, or result, to be positive and real, or you're going to lose energy to the surrounding field, with negative returns. Economic net losses do not last long, just like imaginary particles in physics.
in reply to Cesar A. Hidalgo at https://x.com/realAnthonyDean/status/1844409919161684366
via Anthony Dean @realAnthonyDean
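The energy–momentum relation quoted in the note can be checked numerically. A minimal sketch with standard constants (the momentum value is arbitrary), showing that E² − (pc)², i.e. E² + (ipc)², recovers (mc²)²:

```python
import math

# Standard values: electron rest mass (kg), speed of light (m/s)
m = 9.109e-31
c = 2.998e8
p = 1e-22  # an arbitrary momentum, kg*m/s

# Total energy from the energy-momentum relation E^2 = (pc)^2 + (mc^2)^2
E = math.sqrt((p * c) ** 2 + (m * c ** 2) ** 2)

# The "result" term: subtracting the space-like part recovers the mass term
invariant = E ** 2 - (p * c) ** 2
rest_energy_sq = (m * c ** 2) ** 2
```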
-
- Mar 2024
-
www.cs.princeton.edu www.cs.princeton.edu
-
Its performance is not very different from the system versions of grep, which shows that the recursive technique is not too costly and that it's not worth trying to tune the code.
-
- Dec 2023
-
pythonspeed.com pythonspeed.com
-
Technique #1: Changing numeric representations
For most purposes having a huge amount of accuracy isn’t too important.
Instead of representing the values as floating-point numbers, we can represent them as percentages between 0 and 100. We’ll be down to two-digit accuracy, but again for many use cases that’s sufficient. Plus, if this is output from a model, those last few digits of “accuracy” are likely to be noise; they won’t actually tell us anything useful.
Whole percentages have the nice property that they can fit in a single byte, an int8—as opposed to float64, which uses eight bytes:
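A rough sketch of the idea (assuming NumPy and values already scaled to [0, 1]):

```python
import numpy as np

# One million float64 values in [0, 1), e.g. model output probabilities
values = np.random.default_rng(0).random(1_000_000)

# Represent them as whole percentages in a single signed byte
percentages = np.round(values * 100).astype(np.int8)

# float64 uses 8 bytes per value, int8 uses 1
print(values.nbytes // percentages.nbytes)  # 8
```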
-
lossy compression: drop some of your data in a way that doesn’t impact your final results too much.
If parts of your data don’t impact your analysis, no need to waste memory keeping extraneous details around.
-
- Apr 2023
-
bugs.ruby-lang.org bugs.ruby-lang.org
-
why not allow block forwarding without capturing: foo(&) foo(1, 2, &)
Tags
Annotators
URL
-
- Feb 2023
-
www.youtube.com www.youtube.com
-
optimization-procrastination trap is related to shiny object syndrome - the idea of tweaking one's system constantly
perfect tool trap - guess what? there isn't one
-
- Jan 2023
-
archive.org archive.org
-
There is no reason why a small triangle couldn't be used in printing instead of the letter "e"
-
- Jul 2022
-
www.vpri.org www.vpri.org
-
This is critical since many optimizations are accomplished by violating (hopefully safely) module boundaries; it is disastrous to incorporate optimizations into the main body of code. The separation allows the optimizations to be checked against the meanings.
See also the discussion (in the comments) about optimization-after-the-fact in http://akkartik.name/post/mu-2019-2
Tags
Annotators
URL
-
- Jun 2022
-
www.webpagetest.org www.webpagetest.org
Tags
Annotators
URL
-
- Apr 2022
-
www.cs.sfu.ca www.cs.sfu.ca
-
If a compiler cannot determine whether or not two pointers may be aliased, it must assume that either case is possible, limiting the set of possible optimizations.
How should pointer aliasing be understood as a blocker for optimization?
-
-
www.washingtonpost.com www.washingtonpost.com
-
Algospeak refers to code words or turns of phrase users have adopted in an effort to create a brand-safe lexicon that will avoid getting their posts removed or down-ranked by content moderation systems. For instance, in many online videos, it’s common to say “unalive” rather than “dead,” “SA” instead of “sexual assault,” or “spicy eggplant” instead of “vibrator.”
Definition of "Algospeak"
In order to get around algorithms that demote content in social media feeds, communities have coined new words or new meanings to existing words to communicate their sentiment.
This is affecting TikTok in particular because its algorithm is more heavy-handed in what users see. This is also causing people who want to be seen to tailor their content—their speech—to meet the algorithm's needs. It is like search engine optimization for speech.
Article discovered via Cory Doctorow at The "algospeak" dialect
-
- Jan 2022
-
scattered-thoughts.net scattered-thoughts.net
-
It's typically taken for granted that better performance must require higher complexity. But I've often had the experience that making some component of a system faster allows the system as a whole to be simpler
-
-
scattered-thoughts.net scattered-thoughts.net
-
The latest SQLite 3.8.7 alpha version is 50% faster than the 3.7.17 release from 16 months ago. [...] This is 50% faster at the low-level grunt work of moving bits on and off disk and search b-trees. We have achieved this by incorporating hundreds of micro-optimizations. Each micro-optimization might improve the performance by as little as 0.05%. If we get one that improves performance by 0.25%, that is considered a huge win. Each of these optimizations is unmeasurable on a real-world system (we have to use cachegrind to get repeatable run-times) but if you do enough of them, they add up.
-
- Sep 2021
-
s3.us-west-2.amazonaws.com s3.us-west-2.amazonaws.com
-
don’t store all the values from one row together, but store all the values from each column together instead. If each column is stored in a separate file, a query only needs to read and parse those columns that are used in that query, which can save a lot of work.
Why column storage is better optimized for queries
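A toy illustration of the layout difference (plain Python, hypothetical data):

```python
# Row-oriented: each record stored together
rows = [{"id": i, "name": f"user{i}", "age": 20 + i % 50} for i in range(1000)]

# Column-oriented: all values of each column stored together
columns = {key: [r[key] for r in rows] for key in ("id", "name", "age")}

# A query like "SELECT avg(age)" only needs to touch the age column;
# in a columnar store the id and name data would never be read from disk.
avg_age = sum(columns["age"]) / len(columns["age"])
print(avg_age)  # 44.5
```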
-
-
-
We were able to reduce calls to Infura and save on costs. And because the proxy uses an in-memory cache, we didn’t need to call the Ethereum blockchain every time.
-
- Our application continuously checks the current block and retrieves transactions from historical blocks.
- Our application was very heavy on reads and light on writes, so we thought using ‘cached reads’ would be a good approach.
- We developed a small application to act as a thin proxy between our application and Infura. The proxy application is simple and it only hits Infura/Ethereum on the initial call. All future calls for the same block or transaction are then returned from cache.
- Writes are automatically forwarded to Infura.
This was a seamless optimization. We simply had to point our application to our proxy and everything worked without any changes to our application.
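The ‘cached reads’ pattern described above can be sketched in a few lines (a hypothetical `fetch` callback stands in for the Infura call):

```python
cache = {}

def get_block(number, fetch):
    """Read-through cache: hit upstream only on the first request for a block."""
    if number not in cache:
        cache[number] = fetch(number)  # initial call goes to Infura/Ethereum
    return cache[number]              # later calls are served from memory

# Count upstream calls with a fake fetcher
calls = []
def fake_fetch(n):
    calls.append(n)
    return {"block": n, "transactions": []}

get_block(1, fake_fetch)
get_block(1, fake_fetch)  # served from cache; no second upstream call
print(len(calls))  # 1
```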
-
- Aug 2021
-
obamawhitehouse.archives.gov obamawhitehouse.archives.gov
-
Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later – in 2003 – this same model could be solved in roughly 1 minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms
-
- Mar 2021
-
trailblazer.to trailblazer.to
-
Optimization in this case is nothing crazy, just something I neglected while designing the framework.
-
-
github.com github.com
-
If a UTF8-encoded Ruby string contains unicode characters, then indexing into that string becomes O(N). This can lead to very bad performance in string_end_with_semicolon?, as it would have to scan through the whole buffer for every single file. This commit fixes it to use UTF32 if there are any non-ascii characters in the files.
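The underlying fix is to avoid per-character indexing into a variable-width encoding. A rough Python analogue of a `string_end_with_semicolon?`-style check (hypothetical helper, not the actual Sprockets code), working on raw bytes from the end of the buffer:

```python
def ends_with_semicolon(buf: bytes) -> bool:
    """Scan back over trailing ASCII whitespace only - cost proportional to
    the trailing whitespace, not to the length of the whole buffer, because
    we never index by character into a variable-width encoding."""
    i = len(buf) - 1
    while i >= 0 and buf[i] in b" \t\r\n":
        i -= 1
    return i >= 0 and buf[i] == ord(";")

print(ends_with_semicolon("var x = 1;\n".encode("utf-8")))     # True
print(ends_with_semicolon("var x = 'héllo'".encode("utf-8")))  # False
```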
-
-
github.com github.com
-
What is the point of avoiding the semicolon in concat_javascript_sources
Given how detailed and insightful his analysis was -- and it never elaborated on, or even touched on, not understanding the reason for adding the semicolon -- it certainly appeared he knew what it was for. Otherwise, the whole issue would/should have been about his not understanding that, not about how to keep adding the semicolon but do so in a faster way!
Then again, this comment from 3 months afterwards, indicates he may not think they are even necessary: https://github.com/rails/sprockets/issues/388#issuecomment-252417741
Anyway, just in case he really didn't know, the comment shortly below partly answers the question:
Since the common problem with concatenating JavaScript files is the lack of semicolons, automatically adding one (that, like Sam said, will then be removed by the minifier if it's unnecessary) seems on the surface to be a perfectly fine speed optimization.
This also alludes to the problem: https://github.com/rails/sprockets/issues/388#issuecomment-257312994
But the explicit answer/explanation to this question still remains unspoken: because if you don't add them between concatenated files -- as I discovered just today -- you will run into this error:
(intermediate value)(...) is not a function at something.source.js:1
, apparently because when it concatenated those 2 files together, it tried to evaluate it as:
({ // other.js })()
(function() { // something.js })();
It makes sense that a ; is needed.
-
And no need to walk backwards through all these strings, which is surprisingly inefficient in Ruby.
-
reducing it down to one call significantly speeds up the operation.
-
I feel like the walk in string_end_with_semicolon? is unnecessarily expensive when having an extra semicolon doesn't invalidate any JavaScript syntax.
-
-
stackoverflow.com stackoverflow.com
-
Possibly use Array.prototype.filter.call instead of allocating a new array.
-
- Feb 2021
-
advances.sciencemag.org advances.sciencemag.org
-
Matrajt, Laura, Julia Eaton, Tiffany Leung, and Elizabeth R. Brown. “Vaccine Optimization for COVID-19: Who to Vaccinate First?” Science Advances 7, no. 6 (February 2021): eabf1374. https://doi.org/10.1126/sciadv.abf1374.
-
- Jan 2021
-
stackoverflow.com stackoverflow.com
-
If you manage to make Svelte aware of what needs to be tracked, chances are that the resulting code will be more performant than if you roll your own with events or whatever. In part because it will use Svelte's runtime code that is already present in your app, in part because Svelte produces seriously optimized change tracking code, that would be hard to hand code all while keeping it human friendly. And in part because your change tracking targets will be more narrow.
-
- Dec 2020
-
www-sciencedirect-com.ezproxy.rice.edu www-sciencedirect-com.ezproxy.rice.edu
-
In this study, we have quantitated yields of low copy and single copy number plasmid DNAs after growth of cells in four widely used broths (SB, SOC, TB, and 2xYT) and compared results to those obtained with LB, the most common E. coli cell growth medium
TB (terrific broth) consistently generated the greatest amount of plasmid DNA, in agreement with its ability to produce higher cell titers. The superiority of TB was primarily due to its high levels of yeast extract (24 g/L) and was independent of glycerol, a unique component of this broth. Interestingly, simply preparing LB with similarly high levels of yeast extract (LB24 broth) resulted in plasmid yields that were equivalent to those of TB.
-
-
www.zymoresearch.com www.zymoresearch.com
-
github.com github.com
-
The template language's restrictions compared to JavaScript/JSX-built views are part of Svelte's performance story. It's able to optimize things ahead of time that are impossible with dynamic code, because of the constraints. Here are a couple of tweets from the author about that
-
- Nov 2020
-
www.thecodebuzz.com www.thecodebuzz.com
-
GET is the primary mechanism of information retrieval and the focus of almost all performance optimizations.
-
-
github.com github.com
-
It's fast. The Dart VM is highly optimized, and getting faster all the time (for the latest performance numbers, see perf.md). It's much faster than Ruby, and close to par with C++.
-
-
www.npmjs.com www.npmjs.com
-
Note that when using sass (Dart Sass), synchronous compilation is twice as fast as asynchronous compilation by default, due to the overhead of asynchronous callbacks.
If you consider using asynchronous to be an optimization, then this could be surprising.
-
- Oct 2020
-
engineering.appfolio.com engineering.appfolio.com
-
optcarrot is a headless NES emulator that the Ruby core team is using as a CPU-intensive optimization target.
-
-
www.npmjs.com www.npmjs.com
-
You might also want to check out the hyperxify browserify transform to statically compile hyperx into javascript expressions to save sending the hyperx parser down the wire.
-
-
-
JSX has the advantage of being fast, but the disadvantage that it needs to be preprocessed before working. By using template string virtual-html, we can have it work out of the box, and optimize it by writing a browserify transform. Best of both!
See also: https://github.com/choojs/nanohtml#static-optimizations
(this person later recommends this library)
-
-
github.com github.com
-
Static optimizations
-
-
medium.com medium.com
-
I understand that I could use some third party memoization tool on top of the Svelte’s comparator, but my point here is — there is no magic pill, optimizations “out of the box” often turn out to have limitations.
-
This is a very dangerous practice, as each optimization means making assumptions. If you are compressing an image you make an assumption that some payload can be cut out without seriously affecting the quality; if you are adding a cache to your backend you assume that the API will return the same results. A correct assumption allows you to spare resources. A false assumption introduces a bug in your app. That’s why optimizations should be done consciously.
-
In the vast majority of cases there’s nothing wrong with wasted renders. They take so few resources that it is simply undetectable to the human eye. In fact, comparing each component’s props to its previous props shallowly (I’m not even talking about deeply) can be more resource-intensive than simply re-rendering the entire subtree.
-
-
github.com github.com
-
While you could use a map function for loops, they aren't optimized.
-
-
github.com github.com
-
"Premature optimization is the root of all evil"; start with RPC as default and later switch to REST or GraphQL when (and only if!) the need arises.
Tags
Annotators
URL
-
-
link.aps.org link.aps.org
-
Burda, Z., Kotwica, M., & Malarz, K. (2020). Ageing of complex networks. Physical Review E, 102(4), 042302. https://doi.org/10.1103/PhysRevE.102.042302
-
- Sep 2020
-
github.com github.com
-
The more I think about this, the more I think that maybe React already has the right solution to this particular issue, and we're tying ourselves in knots trying to avoid unnecessary re-rendering. Basically, this JSX... <Foo {...a} b={1} {...c} d={2}/> ...translates to this JS: React.createElement(Foo, _extends({}, a, { b: 1 }, c, { d: 2 })); If we did the same thing (i.e. bail out of the optimisation allowed by knowing the attribute names ahead of time), our lives would get a lot simpler, and the performance characteristics would be pretty similar in all but somewhat contrived scenarios, I think. (It'll still be faster than React, anyway!)
-
-
-
Romanini, Daniele, Sune Lehmann, and Mikko Kivelä. ‘Privacy and Uniqueness of Neighborhoods in Social Networks’. ArXiv:2009.09973 [Physics], 21 September 2020. http://arxiv.org/abs/2009.09973.
-
-
developer.mozilla.org developer.mozilla.org
-
By only adding the necessary handling when the function is declared async, the JavaScript engine can optimize your program for you.
-
-
github.com github.com
-
The static analysis considerations make things like hero.enemies.map(...) a non-starter — the reason Svelte is able to beat most frameworks in benchmarks is that the compiler has a deep understanding of a component's structure, which becomes impossible when you allow constructs like that.
-
- Aug 2020
-
osf.io osf.io
-
Landgrave, M. (2020). How Do Legislators Value Constituent’s (Statistical) Lives? COVID-19, Partisanship, and Value of a Statistical Life Analysis. https://doi.org/10.31235/osf.io/n93w2
-
-
covid-19.iza.org covid-19.iza.org
-
Optimal Unemployment Benefits in the Pandemic. COVID-19 and the Labor Market. (n.d.). IZA – Institute of Labor Economics. Retrieved August 1, 2020, from https://covid-19.iza.org/publications/dp13389/
-
- Jul 2020
-
-
Martin, G., Hanna, E., & Dingwall, R. (2020). Face masks for the public during Covid-19: An appeal for caution in policy [Preprint]. SocArXiv. https://doi.org/10.31235/osf.io/uyzxe
-
-
svelte.dev svelte.dev
-
In some frameworks you may see recommendations to avoid inline event handlers for performance reasons, particularly inside loops. That advice doesn't apply to Svelte — the compiler will always do the right thing, whichever form you choose.
-
-
kentcdodds.com kentcdodds.com
-
Performance optimizations are not free. They ALWAYS come with a cost but do NOT always come with a benefit to offset that cost.
-
-
dmitripavlutin.com dmitripavlutin.com
-
Even so, the inline function is still created on every render, useCallback() just skips it.
-
Even having useCallback() returning the same function instance, it doesn’t bring any benefits because the optimization costs more than not having the optimization.
-
-
www.jclinepi.com www.jclinepi.com
-
Sperrin, M., Martin, G. P., Sisk, R., & Peek, N. (2020). Missing data should be handled differently for prediction than for description or causal explanation. Journal of Clinical Epidemiology, 0(0). https://doi.org/10.1016/j.jclinepi.2020.03.028
-
- Jun 2020
-
-
Ben-David, S. (2018). Clustering—What Both Theoreticians and Practitioners are Doing Wrong. ArXiv:1805.08838 [Cs, Stat]. http://arxiv.org/abs/1805.08838
-
- Dec 2019
-
www.2ndquadrant.com www.2ndquadrant.com
-
Practical highlights in my opinion:
- It's important to know about data padding in PG.
- Be conscious of column ordering when modelling data tables, but don't be purist about it; do it on a best-effort basis.
- Gains of up to 25% in wasted storage are impressive, but always keep in mind the scope of the system. For me, the gains are not worth it in the short term. Whenever a system grows, it is possible to migrate data to more storage-efficient tables, but mind the operational burden.
Here follow my own commands from trying the article's points. I added pg_column_size(row()) to each projection to have clear absolute sizes.

-- How does the row function work?
SELECT pg_column_size(row()) AS empty,
  pg_column_size(row(0::SMALLINT)) AS byte2,
  pg_column_size(row(0::BIGINT)) AS byte8,
  pg_column_size(row(0::SMALLINT, 0::BIGINT)) AS byte16,
  pg_column_size(row(''::TEXT)) AS text0,
  pg_column_size(row('hola'::TEXT)) AS text4,
  0 AS term;

-- My own take on that
SELECT pg_column_size(row()) AS empty,
  pg_column_size(row(uuid_generate_v4())) AS uuid_type,
  pg_column_size(row('hola mundo'::TEXT)) AS text_type,
  pg_column_size(row(uuid_generate_v4(), 'hola mundo'::TEXT)) AS uuid_text_type,
  pg_column_size(row('hola mundo'::TEXT, uuid_generate_v4())) AS text_uuid_type,
  0 AS term;

CREATE TABLE user_order (
  is_shipped BOOLEAN NOT NULL DEFAULT false,
  user_id BIGINT NOT NULL,
  order_total NUMERIC NOT NULL,
  order_dt TIMESTAMPTZ NOT NULL,
  order_type SMALLINT NOT NULL,
  ship_dt TIMESTAMPTZ,
  item_ct INT NOT NULL,
  ship_cost NUMERIC,
  receive_dt TIMESTAMPTZ,
  tracking_cd TEXT,
  id BIGSERIAL PRIMARY KEY NOT NULL
);

SELECT a.attname, t.typname, t.typalign, t.typlen
FROM pg_class c
JOIN pg_attribute a ON (a.attrelid = c.oid)
JOIN pg_type t ON (t.oid = a.atttypid)
WHERE c.relname = 'user_order' AND a.attnum >= 0
ORDER BY a.attnum;

-- What is it about the pg_class, pg_attribute and pg_type tables? For future investigation.
-- SELECT sum(t.typlen)
-- SELECT t.typlen
SELECT a.attname, t.typname, t.typalign, t.typlen
FROM pg_class c
JOIN pg_attribute a ON (a.attrelid = c.oid)
JOIN pg_type t ON (t.oid = a.atttypid)
WHERE c.relname = 'user_order' AND a.attnum >= 0
ORDER BY a.attnum;

-- Whoa! I need to master mocking data directly into the db.
INSERT INTO user_order (
  is_shipped, user_id, order_total, order_dt, order_type,
  ship_dt, item_ct, ship_cost, receive_dt, tracking_cd
)
SELECT true, 1000, 500.00, now() - INTERVAL '7 days', 3,
  now() - INTERVAL '5 days', 10, 4.99, now() - INTERVAL '3 days',
  'X5901324123479RROIENSTBKCV4'
FROM generate_series(1, 1000000);

-- New item to learn: pg_relation_size.
SELECT pg_relation_size('user_order') AS size_bytes,
  pg_size_pretty(pg_relation_size('user_order')) AS size_pretty;

SELECT * FROM user_order LIMIT 1;

SELECT pg_column_size(row(0::NUMERIC)) - pg_column_size(row()) AS zero_num,
  pg_column_size(row(1::NUMERIC)) - pg_column_size(row()) AS one_num,
  pg_column_size(row(9.9::NUMERIC)) - pg_column_size(row()) AS nine_point_nine_num,
  pg_column_size(row(1::INT2)) - pg_column_size(row()) AS int2,
  pg_column_size(row(1::INT4)) - pg_column_size(row()) AS int4,
  pg_column_size(row(1::INT2, 1::NUMERIC)) - pg_column_size(row()) AS int2_one_num,
  pg_column_size(row(1::INT4, 1::NUMERIC)) - pg_column_size(row()) AS int4_one_num,
  pg_column_size(row(1::NUMERIC, 1::INT4)) - pg_column_size(row()) AS one_num_int4,
  0 AS term;

SELECT pg_column_size(row(''::TEXT)) - pg_column_size(row()) AS empty_text,
  pg_column_size(row('a'::TEXT)) - pg_column_size(row()) AS len1_text,
  pg_column_size(row('abcd'::TEXT)) - pg_column_size(row()) AS len4_text,
  pg_column_size(row('abcde'::TEXT)) - pg_column_size(row()) AS len5_text,
  pg_column_size(row('abcdefgh'::TEXT)) - pg_column_size(row()) AS len8_text,
  pg_column_size(row('abcdefghi'::TEXT)) - pg_column_size(row()) AS len9_text,
  0 AS term;

SELECT pg_column_size(row(''::TEXT, 1::INT4)) - pg_column_size(row()) AS empty_text_int4,
  pg_column_size(row('a'::TEXT, 1::INT4)) - pg_column_size(row()) AS len1_text_int4,
  pg_column_size(row('abcd'::TEXT, 1::INT4)) - pg_column_size(row()) AS len4_text_int4,
  pg_column_size(row('abcde'::TEXT, 1::INT4)) - pg_column_size(row()) AS len5_text_int4,
  pg_column_size(row('abcdefgh'::TEXT, 1::INT4)) - pg_column_size(row()) AS len8_text_int4,
  pg_column_size(row('abcdefghi'::TEXT, 1::INT4)) - pg_column_size(row()) AS len9_text_int4,
  0 AS term;

SELECT pg_column_size(row(1::INT4, ''::TEXT)) - pg_column_size(row()) AS int4_empty_text,
  pg_column_size(row(1::INT4, 'a'::TEXT)) - pg_column_size(row()) AS int4_len1_text,
  pg_column_size(row(1::INT4, 'abcd'::TEXT)) - pg_column_size(row()) AS int4_len4_text,
  pg_column_size(row(1::INT4, 'abcde'::TEXT)) - pg_column_size(row()) AS int4_len5_text,
  pg_column_size(row(1::INT4, 'abcdefgh'::TEXT)) - pg_column_size(row()) AS int4_len8_text,
  pg_column_size(row(1::INT4, 'abcdefghi'::TEXT)) - pg_column_size(row()) AS int4_len9_text,
  0 AS term;

SELECT pg_column_size(row()) - pg_column_size(row()) AS empty_row,
  pg_column_size(row(''::TEXT)) - pg_column_size(row()) AS no_text,
  pg_column_size(row('a'::TEXT)) - pg_column_size(row()) AS min_text,
  pg_column_size(row(1::INT4, 'a'::TEXT)) - pg_column_size(row()) AS two_col,
  pg_column_size(row('a'::TEXT, 1::INT4)) - pg_column_size(row()) AS round4;

SELECT pg_column_size(row()) - pg_column_size(row()) AS empty_row,
  pg_column_size(row(1::SMALLINT)) - pg_column_size(row()) AS int2,
  pg_column_size(row(1::INT)) - pg_column_size(row()) AS int4,
  pg_column_size(row(1::BIGINT)) - pg_column_size(row()) AS int8,
  pg_column_size(row(1::SMALLINT, 1::BIGINT)) - pg_column_size(row()) AS padded,
  pg_column_size(row(1::INT, 1::INT, 1::BIGINT)) - pg_column_size(row()) AS not_padded;

SELECT a.attname, t.typname, t.typalign, t.typlen
FROM pg_class c
JOIN pg_attribute a ON (a.attrelid = c.oid)
JOIN pg_type t ON (t.oid = a.atttypid)
WHERE c.relname = 'user_order' AND a.attnum >= 0
ORDER BY t.typlen DESC;

DROP TABLE user_order;

CREATE TABLE user_order (
  id BIGSERIAL PRIMARY KEY NOT NULL,
  user_id BIGINT NOT NULL,
  order_dt TIMESTAMPTZ NOT NULL,
  ship_dt TIMESTAMPTZ,
  receive_dt TIMESTAMPTZ,
  item_ct INT NOT NULL,
  order_type SMALLINT NOT NULL,
  is_shipped BOOLEAN NOT NULL DEFAULT false,
  order_total NUMERIC NOT NULL,
  ship_cost NUMERIC,
  tracking_cd TEXT
);

-- And what about other varying-size types, such as JSONB?
SELECT pg_column_size(row('{}'::JSONB)) - pg_column_size(row()) AS empty_jsonb,
  pg_column_size(row('{}'::JSONB, 0::INT4)) - pg_column_size(row()) AS empty_jsonb_int4,
  pg_column_size(row(0::INT4, '{}'::JSONB)) - pg_column_size(row()) AS int4_empty_jsonb,
  pg_column_size(row('{"a": 1}'::JSONB)) - pg_column_size(row()) AS basic_jsonb,
  pg_column_size(row('{"a": 1}'::JSONB, 0::INT4)) - pg_column_size(row()) AS basic_jsonb_int4,
  pg_column_size(row(0::INT4, '{"a": 1}'::JSONB)) - pg_column_size(row()) AS int4_basic_jsonb,
  0 AS term;
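The same alignment-padding effect exists outside Postgres; a quick sketch with Python's struct module (native sizes are platform-dependent, typical 64-bit values shown):

```python
import struct

# A 2-byte smallint followed by an 8-byte bigint: native alignment inserts
# padding before the 8-byte field, just like the SMALLINT/BIGINT example above.
padded = struct.calcsize("hq")    # native alignment, typically 16
packed = struct.calcsize("=hq")   # no alignment, always 10

print(packed)  # 10
```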
-
- Aug 2019
-
-
- Jan 2019
-
iphysresearch.github.io iphysresearch.github.io
-
Optimization Models for Machine Learning: A Survey
I feel the only part of this paper with real value for me is probably the summary of dataset tables in the appendix at the end...
-
- Nov 2018
-
iphysresearch.github.io iphysresearch.github.io
-
Learning with Random Learning Rates
The authors propose a new optimization algorithm, Alrao, which lets each unit or feature in the network sample its own learning rate from random distributions at different scales. The algorithm has no extra computational cost and can quickly reach the performance of SGD at an ideal learning rate — great for testing DL models!
-
On the loss landscape of a class of deep neural networks with no bad local valleys
The globally-minimal training the paper claims in fact rests mainly on a rather special artificial neural network structure, with various skip connections to the output, plus several additional assumptions as theoretical guarantees.
-
Revisiting Small Batch Training for Deep Neural Networks
In short, this paper says that taking mini-batch sizes as small as possible is probably better. Glancing at the paper I'm currently writing, this made my heart tremble a little, and I thought: fine, next time I'll pick a smaller batch size... [nose-pick]
-
Don't Use Large Mini-Batches, Use Local SGD
Recently (2018/8), while listening to a talk on advances in non-convex optimization at the Academy of Mathematics and Systems Science, Dr. Li said: varying the learning rate is actually not much favoured now; instead, going step by step from SGD to MGD to GD — increasing the batch size — gives better optimization results!
-
Accelerating Natural Gradient with Higher-Order Invariance
Every time I see a paper on gradient optimization theory, I find it utterly amazing — impressive off the charts...
-
Backprop Evolution
This seems to say that the backpropagation algorithm still has a lot of room for optimization in the structure of the function itself. Building on some elementary functions and common matrix operations, the authors explored combinations of operations and found that some easily outperform classical backpropagation.
It raises the question: why couldn't we also use a network to fit the gradient-update function?
-
Gradient Descent Finds Global Minima of Deep Neural Networks
A paper-length mathematical proof: deep over-parameterized networks can be trained to zero. (Train loss only, not test loss; GD, not SGD.)
-
A Convergence Theory for Deep Learning via Over-Parameterization
Another paper-length mathematical proof, but I couldn't find what the conclusion actually is — the closest thing is the information in the remarks, and none of it is surprising. Still, it is good material for getting familiar with the mathematical description behind DNNs.
-
- Feb 2018
-
blogs.msdn.microsoft.com blogs.msdn.microsoft.com
-
edge probes
It seems that the edge probe is a special case of the value probe.
-
- Dec 2017
-
www.ise.ufl.edu www.ise.ufl.edu
-
For example, stochastic quasi-gradient algorithms [3] can be used for the minimization of function (1).
-
- Jun 2017
-
Local file Local file
-
@article{ben2002robust, title={Robust optimization--methodology and applications}, author={Ben-Tal, Aharon and Nemirovski, Arkadi}, journal={Mathematical Programming}, volume={92}, number={3}, pages={453--480}, year={2002}, publisher={Springer} }
-
- Feb 2017
-
www.digitalocean.com www.digitalocean.com
- Apr 2016
-
cssanalytics.wordpress.com cssanalytics.wordpress.com
-
While there are assets that have not been assigned to a cluster
    If only one asset remaining then
        Add a new cluster
        Only member is the remaining asset
    Else
        Find the asset with the Highest Average Correlation (HC) to all assets not yet assigned to a cluster
        Find the asset with the Lowest Average Correlation (LC) to all assets not yet assigned to a cluster
        If Correlation between HC and LC > Threshold
            Add a new cluster made of HC and LC
            Add to cluster all other unassigned assets with an Average Correlation to HC and LC > Threshold
        Else
            Add a cluster made of HC
            Add to cluster all other unassigned assets with a Correlation to HC > Threshold
            Add a cluster made of LC
            Add to cluster all other unassigned assets with a Correlation to LC > Threshold
        End if
    End if
End While
Fast Threshold Clustering Algorithm
Looking for equivalent source code to apply in smart content delivery and wireless network optimisation such as Ant Mesh via @KirkDBorne's status https://twitter.com/KirkDBorne/status/479216775410626560 http://cssanalytics.wordpress.com/2013/11/26/fast-threshold-clustering-algorithm-ftca/
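A sketch of the FTCA pseudocode above in Python/NumPy — my own reading of the steps, not the author's source code; cluster seeds are taken in HC-then-LC order, and the correlation matrix is assumed to have a unit diagonal:

```python
import numpy as np

def ftca(corr, names, threshold=0.5):
    """Fast Threshold Clustering, following the pseudocode above."""
    unassigned = list(range(len(names)))
    clusters = []
    while unassigned:
        if len(unassigned) == 1:
            clusters.append([names[unassigned.pop()]])
            continue
        sub = corr[np.ix_(unassigned, unassigned)]
        # average correlation of each unassigned asset to the other unassigned
        # ones (subtract the unit diagonal before averaging)
        avg = (sub.sum(axis=1) - 1.0) / (len(unassigned) - 1)
        hc = unassigned[int(np.argmax(avg))]
        lc = unassigned[int(np.argmin(avg))]
        if hc == lc:  # tie (e.g. two assets left): pick distinct seeds
            lc = next(a for a in unassigned if a != hc)
        if corr[hc, lc] > threshold:
            members = [hc, lc] + [a for a in unassigned if a not in (hc, lc)
                                  and (corr[a, hc] + corr[a, lc]) / 2 > threshold]
            clusters.append([names[a] for a in members])
            unassigned = [a for a in unassigned if a not in members]
        else:
            for seed in (hc, lc):
                if seed not in unassigned:
                    continue  # already absorbed into the HC cluster
                members = [seed] + [a for a in unassigned
                                    if a != seed and corr[a, seed] > threshold]
                clusters.append([names[a] for a in members])
                unassigned = [a for a in unassigned if a not in members]
    return clusters

# Two correlated pairs (A, B) and (C, D), weakly correlated across pairs
C = np.array([[1.00, 0.90, 0.10, 0.15],
              [0.90, 1.00, 0.20, 0.10],
              [0.10, 0.20, 1.00, 0.85],
              [0.15, 0.10, 0.85, 1.00]])
groups = ftca(C, ["A", "B", "C", "D"], threshold=0.5)
print(groups)  # [['B', 'A'], ['D', 'C']]
```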
-
-
cs231n.github.io cs231n.github.io
-
Effect of step size. The gradient tells us the direction in which the function has the steepest rate of increase, but it does not tell us how far along this direction we should step.
That's the reason why step size is an important factor in optimization algorithms. Too small a step can cause the algorithm to take longer to converge. Too large a step can cause us to change the parameters too much, overstepping the optima.
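A minimal sketch of this on f(x) = x², whose gradient is 2x:

```python
def descend(x0, step, iters=50):
    """Gradient descent on f(x) = x^2 (minimum at x = 0)."""
    x = x0
    for _ in range(iters):
        x -= step * 2 * x  # gradient of x^2 is 2x
    return x

print(abs(descend(10.0, 0.01)))  # ~3.64: step too small, slow convergence
print(abs(descend(10.0, 0.40)))  # ~0: converges quickly
print(abs(descend(10.0, 1.10)))  # huge: overstepped, diverges
```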
-
- Jan 2015
-
cs231n.github.io cs231n.github.io
-
There are other ways of performing the optimization (e.g. LBFGS), but Gradient Descent is currently by far the most common and established way of optimizing Neural Network loss functions.
Are there any studies that compare different pros and cons of the optimization procedures with respect to some specific NN architectures (e.g., classical LeNets)?
Tags
Annotators
URL
-