77 Matching Annotations
  1. Nov 2022
    1. it became clear that Fermat's Last Theorem could be proven as a corollary of a limited form of the modularity theorem (unproven at the time and then known as the "Taniyama–Shimura–Weil conjecture"). The modularity theorem involved elliptic curves, which was also Wiles's own specialist area.[15][16]

      Elliptic curves are also used in Ed25519, which is purportedly more robust to side-channel attacks. Could there be some useful insight from Wiles and the modularity theorem?

    1. From the Introduction to Ed25519, there are some speed benefits, and some security benefits. One of the more interesting security benefits is that it is immune to several side channel attacks: No secret array indices. The software never reads or writes data from secret addresses in RAM; the pattern of addresses is completely predictable. The software is therefore immune to cache-timing attacks, hyperthreading attacks, and other side-channel attacks that rely on leakage of addresses through the CPU cache. No secret branch conditions. The software never performs conditional branches based on secret data; the pattern of jumps is completely predictable. The software is therefore immune to side-channel attacks that rely on leakage of information through the branch-prediction unit. For comparison, there have been several real-world cache-timing attacks demonstrated on various algorithms. http://en.wikipedia.org/wiki/Timing_attack

      Further arguments that Ed25519 is less vulnerable to:

      • cache-timing attacks
      • hyperthreading attacks
      • other side-channel attacks that rely on leakage of addresses through the CPU cache

      It also boasts no secret branch conditions: no conditional branches based on secret data, since the pattern of jumps is completely predictable.

      Predictable because the underlying process that generated it isn't a black box?

      Could ML (esp. NNs and CNNs) be a parallel? Powerful in applications, but a huge risk given the uncertainty of the underlying mechanism?

      Need to read papers on this
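
      The "no secret branch conditions" property can be illustrated with branchless selection. This is a minimal Python sketch of the idea only, not the Ed25519 implementation; real constant-time code is written at the C/assembly level, since Python makes no timing guarantees.

```python
def ct_select(bit: int, a: int, b: int) -> int:
    """Return a if bit == 1, else b, without a data-dependent branch.

    mask is conceptually all-ones when bit == 1 and all-zeros when
    bit == 0, so the same arithmetic runs regardless of the secret bit.
    """
    mask = -bit  # 0 -> ...000, 1 -> ...111 (two's complement)
    return (a & mask) | (b & ~mask)

# Both calls execute the identical sequence of operations:
assert ct_select(1, 0xAAAA, 0x5555) == 0xAAAA
assert ct_select(0, 0xAAAA, 0x5555) == 0x5555
```

      An `if bit: return a else: return b` would take different paths for different secret bits, which is exactly the leakage the Ed25519 design rules out.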

    2. More "sales pitch" comes from this IETF draft: While the NIST curves are advertised as being chosen verifiably at random, there is no explanation for the seeds used to generate them. In contrast, the process used to pick these curves is fully documented and rigid enough so that independent verification has been done. This is widely seen as a security advantage, since it prevents the generating party from maliciously manipulating the parameters. – ATo Aug 21, 2016 at 7:25

      An argument for why the Ed25519 signature algorithm and the Curve25519 key-exchange algorithm are more secure and less vulnerable to side-channel attacks: the process that generates them has purportedly been independently verified and extensively documented.

  2. Oct 2022
    1. An assessment method for algorithms. In a session this was mentioned, together with IAMA, as a method for assessment.

  3. Aug 2022
    1. The Medianizer algorithm

      MakerDAO is not open to a Synthetix-like attack; the latter had only two price-discovery sources.
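
      The Medianizer idea can be sketched in a few lines. This is a hypothetical Python illustration (the numbers are made up), not MakerDAO's actual contract code: with a median over many feeds, one manipulated source barely moves the reported price, whereas a two-source design has no such majority protection.

```python
from statistics import median

def medianized_price(feeds):
    """Median of independent price feeds: moving the reported price
    requires corrupting a majority of the sources."""
    return median(feeds)

honest = [1500.0, 1502.0, 1498.0, 1501.0, 1499.0]
attacked = honest[:]
attacked[0] = 10.0  # one manipulated feed

# The median barely moves (1500.0 -> 1499.0), while a naive average
# over the same feeds would be dragged far off by the bad source.
```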

  4. Jul 2022
    1. “Algorithms are animated by data, data comes from people, people make up society, and society is unequal,” the paper reads. “Algorithms thus arc towards existing patterns of power and privilege, marginalization, and disadvantage.”
    1. “replace ‘algorithm’ with ‘audience.'” Instead of positioning videos to perform well for an algorithm, how can they perform best with an audience?
  5. May 2022
  6. Apr 2022
    1. Before 2009, Facebook had given users a simple timeline––a never-ending stream of content generated by their friends and connections, with the newest posts at the top and the oldest ones at the bottom. This was often overwhelming in its volume, but it was an accurate reflection of what others were posting. That began to change in 2009, when Facebook offered users a way to publicly “like” posts with the click of a button. That same year, Twitter introduced something even more powerful: the “Retweet” button, which allowed users to publicly endorse a post while also sharing it with all of their followers. Facebook soon copied that innovation with its own “Share” button, which became available to smartphone users in 2012. “Like” and “Share” buttons quickly became standard features of most other platforms.Shortly after its “Like” button began to produce data about what best “engaged” its users, Facebook developed algorithms to bring each user the content most likely to generate a “like” or some other interaction, eventually including the “share” as well. Later research showed that posts that trigger emotions––especially anger at out-groups––are the most likely to be shared.

      The Firehose versus the Algorithmic Feed

      See related from The Internet Is Not What You Think It Is: A History, A Philosophy, A Warning, except with more depth here.

    1. Algorithms in themselves are neither good nor bad. And they can be implemented even where you don’t have any technology to implement them. That is to say, you can run an algorithm on paper, and people have been doing this for many centuries. It can be an effective way of solving problems. So the “crisis moment” comes when the intrinsically neither-good-nor-bad algorithm comes to be applied for the resolution of problems, for logistical solutions, and so on in many new domains of human social life, and jumps the fence that contained it as focusing on relatively narrow questions to now structuring our social life together as a whole. That’s when the crisis starts.

      Algorithms are agnostic

      As we know them now, algorithms—and [[machine learning]] in general—do well when confined to the domains in which they started. They come apart when dealing with unbounded domains.

  7. Mar 2022
    1. computers might therefore easily outperform humans at facial recognition and do so in a much less biased way than humans. And at this point, government agencies will be morally obliged to use facial recognition software since it will make fewer mistakes than humans do.

      Banning it now because it isn't as good as humans leaves little room for a time when the technology is better than humans, when the algorithm's calculations are less biased than human perception and interpretation. So we need rigorous methodologies for testing and documenting algorithmic models, as well as psychological studies, to know when the machine-better-than-human boundary is crossed.

    1. In less than 6 hours after starting on our in-house server, our model generated 40,000 molecules that scored within our desired threshold. In the process, the AI designed not only VX, but also many other known chemical warfare agents that we identified through visual confirmation with structures in public chemistry databases. Many new molecules were also designed that looked equally plausible.

      Although the model was driven "towards compounds such as the nerve agent VX", it designed not only VX but also many other known chemical warfare agents, plus many new molecules "that looked equally plausible."

      AI is the tool. The parameters by which it is set up makes something "good" or "bad".

  8. Jan 2022
  9. Dec 2021
    1. Standard algorithms as a reliable engine in SaaS https://en.itpedia.nl/2021/12/06/standaard-algoritmen-als-betrouwbaar-motorblok-in-saas/ The term "Algorithm" has gotten a bad rap in recent years. This is because large tech companies such as Facebook and Google are often accused of threatening our privacy. However, algorithms are an integral part of every application. As is known, SaaS is standard software, which makes use of algorithms just like other software.

      • But what are algorithms anyway?
      • How can we use standard algorithms?
      • How do standard algorithms end up in our software?
      • When is software not an algorithm?
  10. Nov 2021
    1. Now that we've made peace with the concepts of projections (matrix multiplications)

      Projections are matrix multiplications. Why didn't you say so? See the spatial and channel projections in the gated gMLP.

    2. Computers are especially good at matrix multiplications. There is an entire industry around building computer hardware specifically for fast matrix multiplications. Any computation that can be expressed as a matrix multiplication can be made shockingly efficient.
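
      As a small illustration of re-expressing a computation as a matrix multiplication, a 3-point moving average can be written either as a loop or as one matrix-vector product. This is a pure-Python sketch; in practice the product is handed to BLAS/GPU kernels, which is where the speed comes from.

```python
def matmul(A, x):
    """Plain matrix-vector product; real workloads hand this off to
    optimized BLAS/GPU kernels."""
    return [sum(a * v for a, v in zip(row, x)) for row in A]

# A 3-point moving average written as a loop...
x = [1.0, 2.0, 4.0, 8.0, 16.0]
loop_out = [(x[i - 1] + x[i] + x[i + 1]) / 3 for i in range(1, len(x) - 1)]

# ...and the same computation expressed as a matrix multiplication:
# each output row of A picks out a window of three inputs.
A = [[0.0] * len(x) for _ in range(3)]
for i in range(3):
    A[i][i] = A[i][i + 1] = A[i][i + 2] = 1 / 3

assert all(abs(a - b) < 1e-9 for a, b in zip(matmul(A, x), loop_out))
```
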
  11. Aug 2021
    1. A friend of mine recently took his teenage daughter on vacation to San Francisco, where he'd once lived but she'd never been. As they drove to the tourist mecca of Fisherman's Wharf, he made a few detours, taking in some of the old sights to brighten his fading memories. Every time he departed from the route Google Maps offered, though, he noticed that his daughter grew anxious. He pondered her reactions and realized then that when they were driving in a strange place, she normally saw her parents dutifully following the directions offered up by the app. Disobeying it in what were to her unfamiliar surroundings clearly made her uncomfortable.
    1. there’s no spec for a search engine, since you can’t write code for “deliver links to the 10 web pages that best match the customer’s intent”
  12. Jun 2021
  13. Mar 2021
  14. Feb 2021
  15. Jan 2021
  16. Dec 2020
    1. ; e-commerce sites have an economic incentive to use personalization to induce users into spending more money

      Personalization algorithms can make people spend more money, which benefits both the website and the merchant.

  17. Nov 2020
  18. Sep 2020
  19. Aug 2020
  20. Jul 2020
  21. Jun 2020
  22. May 2020
  23. Apr 2020
  24. Jan 2020
  25. Oct 2019
    1. An algorithm is a step by step list of instructions that if followed exactly will solve the problem under consideration

      It is a precise, step-by-step list of instructions used to solve a problem.
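
      Euclid's GCD procedure is a classic instance of this definition, expressed here as a short Python sketch:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a step-by-step procedure that, followed
    exactly, terminates with the greatest common divisor."""
    while b != 0:
        a, b = b, a % b  # replace the pair until the remainder is zero
    return a

assert gcd(48, 18) == 6
```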

  26. May 2019
    1. engine that is the problem but, rather, the users of search engines who are. It suggests that what is most popular is simply what rises to the top of the search pile
      • I wanted to highlight the previous sentence as well, but for some reason it wouldn't let me

      I understand why the author is troubled by the campaign's stance that "it's not the search engine's fault." It makes it seem as if nothing could be done to stop promoting those ideas, and that if something is popular it simply has to be the result at the top.

      This can be problematic, as people who were not initially searching that specific phrase may click through to read racist, sexist, homophobic, or biased information (to just name a few) that perpetuates inaccuracies and negative stereotypes. It provides easier access into dangerous thinking built on the foundations of racism, sexism, etc.

      If the algorithms are changed or monitored to remove those negative searches, the people exposed to those ideas would decrease, which could help tear down the extreme communities that can build up from them.

      While I do understand this view, I also think the system can be helpful too. All the search engine does is reflect the most popular searches, and if negative ideals are what people are searching for, then we can become aware and direct their paths to more educational and unbiased sources. It could be interesting to see what would happen if someone clicked on a link that said "Women belong in the kitchen" and it led them to results about equality and feminism.

  27. Mar 2019
    1. When Instagram found out that users missed out on 70% of the posts on their feed, they announced a new algorithm. An algorithm that promised to let you see the posts that you care about the most. But the workings of the algorithm remained a burning question until recently when the Facebook-owned company revealed how does the Instagram algorithm work. When the Instagram feed was tweaked from chronological to algorithmic, people anticipated a drop in engagement levels. However, people have now started seeing 90% of their feed since the algorithm was instituted. Read on to find out everything you need to know about the Instagram feed algorithm.

  28. Jan 2019
    1. Contrary to mainstream thinking that this new technology is unregulated, it’s really quite the opposite. These systems apply the strictest of rules under highly deterministic and predictable models that are regulated through mathematics. In the future, industry will be regulated not just by institutions and committees but by algorithms and mathematics. The new technology will gradually out-regulate the regulators and, in many cases, make them obsolete because the new system offers more certainty. Antonopoulos explains that “the opposite of authoritarianism is not chaos, but autonomy.”

      <big>Comment:</big><br/><br/>In 1933 the Bauhaus school of design in Germany was shut down by the Nazis; most of its teachers and students emigrated to the United States, bringing their architectural style with them. Although people sensed a coldness in its strict geometric forms, Bauhaus was devoted to reconciling fine art with industrial society, striving for a new unity of art and technology and prompting the public to ask how one becomes a more complete person. This indirectly shaped the American character we know today.<br/><br/>Will blockchain ultimately move beyond "rule by people" and reach a state of "algorithmic self-governance"? Similar debates echo through the field of artificial intelligence. "Absolute rationality" stands opposite the complete personality; that cold quality marks humanity's retreat after its encounter with machines. Skeptics used to worry that humans were pulling the strings behind the algorithms, but with the emergence of algorithm-generated algorithms, even algorithms "inherited across generations," such worries have grown pale. We now face a larger anxiety: will "blockchain-based authoritarianism" emerge?

  29. Nov 2018
    1. how does misrepresentative information make it to the top of the search result pile—and what is missing in the current culture of software design and programming that got us here?

      Two core questions in one? As to "how" bad info bubbles to the top of our search results, we know that the algorithms are proprietary—but the humans who design them bring their biases. As to "what is missing," Safiya Noble suggests here and elsewhere that the engineers in Silicon Valley could use a good dose of the humanities and social sciences in their decision-making. Is she right?

  30. Jul 2018
    1. The new organs process this enormous amount of information to break you down into categories, which are sometimes innocuous like, “Listens to Spotify” or “Trendy Moms”, but can also be more sensitive, identifying ethnicity and religious affiliation, or invasively personal, like “Lives away from family”. More than this, the new organs are being integrated with increasingly sophisticated algorithms, so they can generate predictive portraits of you, which they then sell to advertisers who can target products that you don’t even know you want yet. 
    1. Perelman says his Babel Generator also proves how easy it is to game the system. While students are not going to walk into a standardized test with a Babel Generator in their back pocket, he says, they will quickly learn they can fool the algorithm by using lots of big words, complex sentences, and some key phrases - that make some English teachers cringe. "For example, you will get a higher score just by [writing] "in conclusion,'" he says.
    2. "The idea is bananas, as far as I'm concerned," says Kelly Henderson, an English teacher at Newton South High School just outside Boston. "An art form, a form of expression being evaluated by an algorithm is patently ridiculous."
  31. Apr 2018
    1. This fall, my colleagues and I released gobo.social, a customizable news aggregator. Gobo presents you with posts from your friends, but also gives you a set of sliders that govern what news you see and what’s hidden from you. Want more serious news, less humor? Move a slider. Need to hear more female voices? Adjust the gender slider, or press the “mute all men” button for a much quieter internet. Gobo currently includes half a dozen ways to tune your news feed, with more to come.

      Gobo, a proof of concept.

  32. Mar 2018
  33. Mar 2017
    1. for not very large numbers

      Would an approach using the Sieve of Eratosthenes work better for very large numbers? Or would the best shot be a probabilistic primality test?
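
      For reference, a minimal Sieve of Eratosthenes in Python. It enumerates all primes up to a bound efficiently, but its memory use grows with the bound, which is why a probabilistic test such as Miller-Rabin is the usual choice for testing a single very large candidate:

```python
def sieve(n: int) -> list[int]:
    """Sieve of Eratosthenes: return all primes <= n.

    Good for enumerating many small primes; O(n) memory makes it
    unsuitable for one very large number, where a probabilistic
    test (e.g. Miller-Rabin) is preferred.
    """
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # cross out every multiple of p starting from p*p
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]

assert sieve(30) == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```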

  34. Dec 2016
  35. Oct 2016
  36. Aug 2016
  37. Apr 2016
    1. While there are assets that have not been assigned to a cluster
           If only one asset remaining then
               Add a new cluster
               Only member is the remaining asset
           Else
               Find the asset with the Highest Average Correlation (HC) to all assets not yet assigned to a cluster
               Find the asset with the Lowest Average Correlation (LC) to all assets not yet assigned to a cluster
               If correlation between HC and LC > Threshold
                   Add a new cluster made of HC and LC
                   Add to cluster all other assets that have not yet been assigned to a cluster and have an average correlation to HC and LC > Threshold
               Else
                   Add a cluster made of HC
                   Add to cluster all other assets that have not yet been assigned to a cluster and have a correlation to HC > Threshold
                   Add a cluster made of LC
                   Add to cluster all other assets that have not yet been assigned to a cluster and have a correlation to LC > Threshold
               End if
           End if
       End While

      Fast Threshold Clustering Algorithm

      Looking for equivalent source code to apply in smart content delivery and wireless network optimisation such as Ant Mesh via @KirkDBorne's status https://twitter.com/KirkDBorne/status/479216775410626560 http://cssanalytics.wordpress.com/2013/11/26/fast-threshold-clustering-algorithm-ftca/
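
      The pseudocode above can be sketched in Python. This is an illustrative reading of the quoted steps (the asset names and toy correlation matrix are made up), not the author's reference implementation:

```python
def ftca(corr, threshold=0.5):
    """Sketch of the Fast Threshold Clustering Algorithm quoted above.

    corr: symmetric dict-of-dicts correlation matrix, corr[a][b] in [-1, 1].
    Returns a list of clusters (lists of asset names).
    """
    unassigned = set(corr)
    clusters = []
    while unassigned:
        if len(unassigned) == 1:
            clusters.append([unassigned.pop()])
            break

        def avg_corr(a):
            others = unassigned - {a}
            return sum(corr[a][b] for b in others) / len(others)

        hc = max(unassigned, key=avg_corr)  # highest average correlation
        lc = min(unassigned, key=avg_corr)  # lowest average correlation
        if hc == lc:  # all remaining assets tie; give it its own cluster
            clusters.append([hc])
            unassigned.remove(hc)
            continue
        pool = unassigned - {hc, lc}
        if corr[hc][lc] > threshold:
            members = [hc, lc] + [a for a in pool
                                  if (corr[a][hc] + corr[a][lc]) / 2 > threshold]
            clusters.append(members)
        else:
            hc_members = [hc] + [a for a in pool if corr[a][hc] > threshold]
            rest = pool - set(hc_members)
            lc_members = [lc] + [a for a in rest if corr[a][lc] > threshold]
            clusters.append(hc_members)
            clusters.append(lc_members)
            members = hc_members + lc_members
        unassigned -= set(members)
    return clusters

# Two tight groups (A-B and C-D) with weak cross-correlation:
pairs = {("A", "B"): 0.95, ("C", "D"): 0.85, ("A", "C"): 0.10,
         ("A", "D"): 0.10, ("B", "C"): 0.10, ("B", "D"): 0.10}
corr = {a: {} for a in "ABCD"}
for (a, b), r in pairs.items():
    corr[a][b] = r
    corr[b][a] = r
clusters = ftca(corr, threshold=0.5)
assert {frozenset(c) for c in clusters} == {frozenset("AB"), frozenset("CD")}
```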

  38. Nov 2015
    1. I don't totally agree that the creation of language is the goal of a writer; I think language is just a means, the "algorithm" that "plays" with words/semantics, as any machine can do