132 Matching Annotations
  1. Jan 2024
    1. Lenstra’s elliptic curve algorithm for factoring large integers. This is one of the recent applications of elliptic curves to the “real world,” to wit, the attempt to break certain widely used public key ciphers.
  2. Nov 2023
    1. Actor-critic is a temporal difference algorithm used in reinforcement learning. It consists of two networks: the actor, which decides which action to take, and the critic, which evaluates the action produced by the actor by computing the value function and informs the actor how good the action was and how it should adjust. In simple terms, the actor-critic is a temporal difference version of policy gradient. The learning of the actor is based on a policy gradient approach.

      Actor-critic
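The annotation above describes the two networks in prose; a minimal tabular sketch (no neural networks, a toy two-state chain MDP, all names and hyperparameters illustrative) shows the TD mechanics: the critic's TD error both updates the value estimate and scales the actor's policy-gradient step.

```python
import math
import random

# Toy two-state chain MDP: states 0 and 1; actions 0 ("left") and 1 ("right").
# Choosing "right" in state 1 yields reward 1 and resets to state 0.
random.seed(0)
n_states, n_actions = 2, 2
theta = [[0.0] * n_actions for _ in range(n_states)]  # actor: policy logits
value = [0.0] * n_states                              # critic: state values
alpha_actor, alpha_critic, gamma = 0.1, 0.2, 0.9

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def step(state, action):
    """Toy dynamics: 'right' advances; reward 1 for 'right' in state 1."""
    if action == 1:
        if state == 1:
            return 0, 1.0  # collect reward and reset
        return 1, 0.0
    return 0, 0.0

state = 0
for _ in range(5000):
    probs = softmax(theta[state])
    action = random.choices(range(n_actions), weights=probs)[0]
    next_state, reward = step(state, action)
    # TD error: the critic's evaluation of the action the actor just took
    td_error = reward + gamma * value[next_state] - value[state]
    value[state] += alpha_critic * td_error
    # Policy-gradient update on the actor, scaled by the critic's TD error
    for a in range(n_actions):
        grad = (1.0 if a == action else 0.0) - probs[a]
        theta[state][a] += alpha_actor * td_error * grad
    state = next_state

# After training, the policy should prefer action 1 ("right") in both states.
```

The same loop structure carries over to the neural-network version: the tables `theta` and `value` become the actor and critic networks, and the two update lines become gradient steps.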

  3. Sep 2023
    1. Leaky Bucket is implemented similarly to Token Bucket where OVER_LIMIT is returned when the bucket is full. However tokens leak from the bucket at a consistent rate which is calculated as duration / limit. This algorithm is useful for metering, as the bucket leaks allowing traffic to continue without the need to wait for the configured rate limit duration to reset the bucket to zero.
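A minimal sketch of the leaky-bucket variant described above, assuming the `duration / limit` leak behavior from the quote (i.e. tokens drain at `limit / duration` per second, so traffic resumes without waiting for a full window reset). The class and method names are illustrative, not any specific library's API.

```python
import time

class LeakyBucket:
    """Sketch: each request adds a token; tokens leak out at a
    constant rate, and a full bucket means OVER_LIMIT."""

    def __init__(self, limit: int, duration: float):
        self.limit = limit                 # bucket capacity
        self.leak_rate = limit / duration  # tokens leaked per second
        self.level = 0.0
        self.last = time.monotonic()

    def _leak(self) -> None:
        now = time.monotonic()
        self.level = max(0.0, self.level - (now - self.last) * self.leak_rate)
        self.last = now

    def allow(self) -> bool:
        """Admit one request, or refuse (OVER_LIMIT) when the bucket is full."""
        self._leak()
        if self.level + 1 > self.limit:
            return False  # OVER_LIMIT
        self.level += 1
        return True
```

Because the bucket drains continuously rather than resetting all at once, a steady trickle of requests at or below the leak rate is never refused, which is what makes this shape useful for metering.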
  4. Aug 2023
    1. News reached us through a combination of newspapers, magazines, and radio or TV. Print media had to make do with limited physical space, because paper costs money, weighs something, and more paper is also more expensive to transport. For radio and TV stations it was no different, given limited airtime, a limited number of channels, and very high costs. So an editorial staff made a limited selection for us: a filter. Exchanging information among ourselves also went by mail, and that too was laborious and certainly not free. Something similar applied to virtually every form of information that reached us. And for some time now those filters have been becoming less important, or disappearing completely. Mass distribution of (fake) news and other information now costs nothing, so anyone can bombard anyone with extreme quantities of information, without limit. In many cases there is no longer any filter at all on that influx of information. And even where a filter exists, you have to figure out how to configure it yourself. Or, worse still, the filter is there but does not operate in your interest, and is therefore untrustworthy.

      info filters are no longer built into the system; we now have to create them ourselves, or else they get made for us by others (see, for example, algorithms and the like)

    1. N+7 algorithm used by the Oulipo writers. This algorithm replaces every noun—every person, place, or thing—in Hacking the Academy with the person, place, or thing—mostly things—that comes seven nouns later in the dictionary. The results of N+7 would seem absolutely nonsensical, if not for the disruptive juxtapositions, startling evocations, and unexpected revelations that ruthless application of the algorithm draws out from the original work. Consider the opening substitution of Hacking the Academy, sustained throughout the entire book: every instance of the word academy is literally an accident.

      How might one use quirky algorithms in interestingly destructive or even generative ways to combinatorially create new things?
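A toy sketch of the N+7 procedure described above, under loud assumptions: a tiny hand-built lexicon stands in for a full dictionary, and every input word is simply looked up (a real run would also need part-of-speech tagging to find the nouns). The lexicon here is arranged so that "academy" lands on "accident", matching the example in the quote.

```python
import bisect

# Illustrative mini-dictionary; a real N+7 run would use a full noun list.
lexicon = sorted([
    "academy", "accelerando", "accelerator", "accent", "acceptance",
    "access", "accessory", "accident", "acclaim", "accolade",
    "accord", "account",
])

def n_plus_7(word: str, n: int = 7) -> str:
    """Replace a word with the entry n places later in the lexicon,
    wrapping around at the end."""
    i = bisect.bisect_left(lexicon, word.lower())
    return lexicon[(i + n) % len(lexicon)]

print(n_plus_7("academy"))  # -> "accident" in this toy lexicon
```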

  5. Jul 2023
    1. As Threads "soars", Bluesky and Mastodon are adopting algorithmic feeds. (Tech Crunch) You will eat the bugs. You will live in the pod. You will read what we tell you. You will own nothing and we don't much care if you are happy.

      Applying the WEF meme about pods and bugs to Threads inspiring Bluesky and one Mastodon app to push algorithmic feeds.

  6. Apr 2023
    1. Instrumental to this era of speed cubing was Dr. Jessica Fridrich, who in 1997 developed a method for solving the cube faster than ever. Most of the fastest cube solvers nowadays use some version of the Fridrich method.
    2. no algorithm can flip a single edge in place
    3. There aren’t any algorithms that can swap only a pair of corners, nor only a pair of edges.
    4. swapping a pair of edges and a pair of corners cancel each other out, since there’s an algorithm to undo that.
    1. We'll start from the bottom and work our way up to modern load balancing algorithms.

      load balancing

  7. Feb 2023
    1. TikTok offers an online resource center for creators seeking to learn more about its recommendation systems, and has opened multiple transparency and accountability centers where guests can learn how the app’s algorithm operates.

      There seem to be a number of issues with the positive and negative feedback systems these social media companies are trying to create. What are they really measuring? They either aren't measuring well or aren't designing well (or both?)...

    2. “The reality is that tech companies have been using automated tools to moderate content for a really long time and while it’s touted as this sophisticated machine learning, it’s often just a list of words they think are problematic,” said Ángel Díaz, a lecturer at the UCLA School of Law who studies technology and racial discrimination.
  8. Jan 2023
    1. https://www.complexityexplorer.org/courses/162-foundations-applications-of-humanities-analytics/segments/15630

      https://www.youtube.com/watch?v=HwkRfN-7UWI


      Seven Principles of Data Feminism

      • Examine power
      • Challenge power
      • Rethink binaries and hierarchies
      • Elevate emotion and embodiment
      • Embrace pluralism
      • Consider context
      • Make labor visible

      Abolitionist movement

      There are some interesting analogies to be drawn between the abolitionist movement in the 1800s and modern day movements like abolition of police and racial justice, etc.


      Topic modeling - What would topic modeling look like for corpuses of commonplace books? Over time?


      wrt article: Soni, Sandeep, Lauren F. Klein, and Jacob Eisenstein. “Abolitionist Networks: Modeling Language Change in Nineteenth-Century Activist Newspapers.” Journal of Cultural Analytics 6, no. 1 (January 18, 2021). https://doi.org/10.22148/001c.18841. - Brings to mind the difference in power and invisible labor between literate societies and oral societies. It's easier to erase oral cultures with the overwhelm available to literate cultures because the former are harder to see.

      How to find unbiased datasets to study these?


      aspirational abolitionism driven by African Americans in the 1800s over and above (basic) abolitionism

    1. The claim that an individual's learning may depend on the behavior of others highlights the importance of viewing the learning environment as a system involving multiple interacting participants.
    1. Naive Recursive Approach O(2^n):

       ```python
       def fib(n):
           if n == 1 or n == 2:  # base case
               return 1
           else:
               return fib(n - 1) + fib(n - 2)
       ```

       Dynamic Programming Solution O(n):

       ```python
       def fib(n):  # assuming n > 2
           seq = [0] * n
           seq[0] = seq[1] = 1
           for i in range(2, n):
               seq[i] = seq[i - 1] + seq[i - 2]
           return seq[n - 1]
       ```

  9. www.cs.princeton.edu www.cs.princeton.edu
    1. "Finding Optimal Solutions to Rubik's Cube Using Pattern Databases" by Richard E. Korf, AAAI 1997.

      The famous "Korf Algorithm" for finding the optimal solution to any Rubik's Cube state.

  10. Dec 2022
  11. Oct 2022
  12. Sep 2022
  13. Jul 2022
    1. Recommendation media is the new standard for content distribution. Here’s why friend graphs can‘t compete in an algorithmic world.
  14. www.peoplevsalgorithms.com www.peoplevsalgorithms.com
    1. Of course, the product itself is a small part of the equation. It’s the community of creators that really matters. People like the influencer family are the fuel that makes these engines run. The next generation of social platforms cares less about whether your personal network is signed up and more about the vibrancy of an indentured creator class that can be endlessly funneled into the video feed. Like media, social platforms are generational. Social is shifting from connections to entertainment. The next gen wants their MTV.
    2. Media is a game of intent and attention. The most valuable platforms dominate one or the other. Few win at both. On the internet, our intent is funneled into commercial action.

      people vs. algorithms

  15. Jun 2022
  16. May 2022
  17. Apr 2022
    1. Before 2009, Facebook had given users a simple timeline––a never-ending stream of content generated by their friends and connections, with the newest posts at the top and the oldest ones at the bottom. This was often overwhelming in its volume, but it was an accurate reflection of what others were posting. That began to change in 2009, when Facebook offered users a way to publicly “like” posts with the click of a button. That same year, Twitter introduced something even more powerful: the “Retweet” button, which allowed users to publicly endorse a post while also sharing it with all of their followers. Facebook soon copied that innovation with its own “Share” button, which became available to smartphone users in 2012. “Like” and “Share” buttons quickly became standard features of most other platforms.Shortly after its “Like” button began to produce data about what best “engaged” its users, Facebook developed algorithms to bring each user the content most likely to generate a “like” or some other interaction, eventually including the “share” as well. Later research showed that posts that trigger emotions––especially anger at out-groups––are the most likely to be shared.

      The Firehose versus the Algorithmic Feed

      See related from The Internet Is Not What You Think It Is: A History, A Philosophy, A Warning, except with more depth here.

    2. Babel is a metaphor for what some forms of social media have done to nearly all of the groups and institutions most important to the country’s future—and to us as a people.

      Algorithms creating the divide

    1. very specific choices made by the companies that have come to dominate the internet generally and social media platforms in particular

      The move to blame specific corporate social media algorithms.

    2. More importantly, these companies are still way too guarded about how exactly their standards operate, or how their engagement ranking systems influence what goes viral and what doesn’t.
    3. There are some bugs in the software.

      A solution that suggests some fixes to the existing structures will suffice.

    4. Russians could study and manipulate patterns in the engagement ranking system on a Facebook or YouTube.

      This suggests there is enough transparency in social media algorithms to game them.

    1. Algospeak refers to code words or turns of phrase users have adopted in an effort to create a brand-safe lexicon that will avoid getting their posts removed or down-ranked by content moderation systems. For instance, in many online videos, it’s common to say “unalive” rather than “dead,” “SA” instead of “sexual assault,” or “spicy eggplant” instead of “vibrator.”

      Definition of "Algospeak"

      In order to get around algorithms that demote content in social media feeds, communities have coined new words or new meanings to existing words to communicate their sentiment.

      This is affecting TikTok in particular because its algorithm is more heavy-handed in what users see. This is also causing people who want to be seen to tailor their content—their speech—to meet the algorithm's needs. It is like search engine optimization for speech.

      Article discovered via Cory Doctorow at The "algospeak" dialect

  18. Feb 2022
    1. has the operator return its first defined argument, then pass over the next defined one in case of a dead-end, in a depth-first selection algorithm.
    2. The evaluation may result in the discovery of dead ends, in which case it must "switch" to a previous branching point and start over with a different alternative.
    3. Ambiguous functions
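A hedged sketch of the depth-first selection those annotations describe: an evaluator offers alternative values, tries them depth-first, and backtracks past dead ends until a constraint is satisfied. The helper name `amb_solve` and the toy constraint are illustrative, not the source's own code.

```python
from itertools import product

def amb_solve(choices, constraint):
    """Try combinations of alternatives depth-first; a candidate that
    fails the constraint is a dead end, and the search backtracks to
    the next alternative. Returns the first viable result, or None."""
    for candidate in product(*choices):  # depth-first enumeration
        if constraint(*candidate):
            return candidate             # first defined (viable) argument
    return None                          # every branch was a dead end

# Find x, y with x * y == 12 and x < y
result = amb_solve([[1, 2, 3, 4], [3, 4, 6, 12]],
                   lambda x, y: x * y == 12 and x < y)
# result == (1, 12)
```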
    1. In preparing these instructions, Gaspard-Michel LeBlond, one of their authors, urges the use of uniform media for registering titles, suggesting that “catalog materials are not difficult to assemble; it is sufficient to use playing cards [. . .] Whether one writes lengthwise or across the backs of cards, one should pick one way and stick with it to preserve uniformity.” 110 Presumably LeBlond was familiar with the work of Abbé Rozier fifteen years earlier; it is unknown whether precisely cut cards had been used before Rozier. The activity of cutting up pages is often mentioned in prior descriptions.

      In published instructions issued on May 8, 1791 in France, Gaspard-Michel LeBlond by way of standardization for library catalogs suggests using playing cards either vertically or horizontally but admonishing catalogers to pick one orientation and stick with it. He was likely familiar with the use of playing cards for this purpose by Abbé Rozier fifteen years earlier.

    2. Because of the constantly growing number of volumes, and to minimize coordination issues, Gottfried van Swieten emphasizes a set of instructions for registering all the books of the court library. Written instructions are by no means common prior to the end of the eighteenth century. Until then, cataloging takes place under the supervision of a librarian who instructs scriptors orally, pointing out problems and corrections as everyone goes along.

      Unlike prior (oral) efforts, Gottfried van Swieten created a written set of instructions for cataloging texts at the Austrian National Library. This helped to minimize coordination issues as well as the time needed to teach and perfect the system.


      Written rules, laws, and algorithms help to create self-organization. This is done by saving time and energy that would have gone into the work of directed building of a system instead. The saved work can then be directed towards something else potentially more productive or regenerative.

  19. Jan 2022
    1. Checks are usually done in this order:

       - 404 if resource is public and does not exist, or 3xx redirection
       - otherwise:
         - 401 if not logged in or session expired
         - 403 if user does not have permission to access the resource (file, json, ...)
         - 404 if resource does not exist or we are not willing to reveal anything, or 3xx redirection
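The check order above can be sketched as a single decision function; all parameter names are illustrative, not any particular framework's API.

```python
from http import HTTPStatus

def access_status(resource_exists: bool, is_public: bool,
                  logged_in: bool, has_permission: bool,
                  hide_existence: bool = False) -> HTTPStatus:
    """Encode the check order: 404 for missing public resources,
    then 401, then 403 (or 404 to hide existence), then 404."""
    # A public resource that doesn't exist: 404 right away
    if is_public and not resource_exists:
        return HTTPStatus.NOT_FOUND
    # Otherwise authenticate first
    if not logged_in:
        return HTTPStatus.UNAUTHORIZED      # 401: no valid session
    if not has_permission:
        if hide_existence:
            return HTTPStatus.NOT_FOUND     # don't reveal the resource exists
        return HTTPStatus.FORBIDDEN         # 403: authenticated but not allowed
    if not resource_exists:
        return HTTPStatus.NOT_FOUND         # 404: nothing there
    return HTTPStatus.OK
```

The ordering matters: checking authentication before existence for private resources avoids leaking which URLs are real to anonymous users.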
  20. Apr 2021
  21. Mar 2021
    1. What Fukuyama and a team of thinkers at Stanford have proposed instead is a means of introducing competition into the system through “middleware,” software that allows people to choose an algorithm that, say, prioritizes content from news sites with high editorial standards.

      This is the second reference I've seen recently (Jack Dorsey mentioning a version was the first) of there being a marketplace for algorithms.

      Does this help introduce enough noise into the system to confound the drive to the extremes for the average person? What should we suppose from the perspective of probability theory?

    1. In an internal presentation from that year, reviewed by the Wall Street Journal, a company researcher, Monica Lee, found that Facebook was not only hosting a large number of extremist groups but also promoting them to its users: “64% of all extremist group joins are due to our recommendation tools,” the presentation said, predominantly thanks to the models behind the “Groups You Should Join” and “Discover” features.
    2. via Joan Donovan, PhD in "This is just some of the best back story I’ve ever read. Facebooks web of influence unravels when @_KarenHao pulls the wrong thread. Sike!! (Only the Boston folks will get that.)" / Twitter (03/14/2021 12:10:09)

    1. Using chemicals to improve our economy of attention and become emotionally "fitter" is an option that penetrated public consciousness some time ago.

      Same is true of reinforcement learning algorithms.

    2. They have become more significant because social interaction is governed by social convention to a much lesser extent than it was fifty years ago.

      Probably because everything is now algorithmically mediated.

    3. The possibility of pharmacological intervention thus expands the subjective autonomy of people to act in their own best interests or to their own detriment. This in turn is accompanied by a new form of self-reflection, which encompasses both structural images of the brain and the ability to imagine the neuro-chemical activity that goes on there. What is alarming is that many of the neuroscientific findings that have triggered a transformation in our perception of ourselves are linked with commercial interests.

      The same can be said about reinforcement learning algorithms. Just replace "pharmacological intervention" with "algorithmic mediation of social interactions".

    1. Given the racist algorithmic codes of the internet (Noble, 2018)

      I was thinking about Noble's work too because she begins with the distressing story of searching for "black girls" online and how this started her deeper inquiry.

      Here are a few words from Noble about this book where she surfaces some of the key questions here: https://www.youtube.com/watch?v=6KLTpoTpkXo

      What are the implications for us as educators? For youth women like Malia and Tamika?

    2. Algorithms of Oppression: How Search EnginesReinforce Racism


  22. Feb 2021
    1. What I want to suggest to you is that, in some improbable way, this page is as much of an heir to the structure of a commonplace book as the most avant-garde textual collage. Who is the “author” of this page? There are, in all likelihood, thousands of them. It has been constructed, algorithmically, by remixing small snippets of text from diverse sources, with diverse goals, and transformed into something categorically different and genuinely valuable. In the center column, we have short snippets of text written by ten individuals or groups, though of course, Google reports that it has 32 million more snippets to survey if we want to keep clicking. The selection of these initial ten links is itself dependent on millions of other snippets of text that link to these and other journalism-related pages on the Web.

      Google search is just an algorithmic search version of John Locke's commonplace book index iterated across millions of individual commonplace books.

    2. In a certain sense, this is a search algorithm, a defined series of steps that allows the user to index the text in a way that makes it easier to query.

      Indices are simply a physical manifestation of metadata upon which we built a rudimentary search algorithm.

    1. Koo's discovery makes it possible to peek inside the black box and identify some key features that lead to the computer's decision-making process.

      Moving towards "explainable AI".

    2. Neural nets learn and make decisions independently of their human programmers. Researchers refer to this hidden process as a "black box." It is hard to trust the machine's outputs if we don't know what is happening in the box.

      Counter-argument: Why do we trust a human being's decisions if we don't know what is happening inside their brain? Yes, we can question the human being but we then have to trust that what they tell us about their rationale is true.

    1. We've developed scientific methods to study black boxes for hundreds of years now, but these methods have primarily been applied to [living beings] up to this point

      It's called psychology.

    1. A 2018 MIT study co-authored by the company’s own former chief media scientist found that false news stories on Twitter are 70 percent more likely to be retweeted than true stories and spread across the network at least six times faster.
  23. Jan 2021
    1. Documents examined by the Wall Street Journal last May show Facebook’s internal research found 64 percent of new members in extremist groups joined because of the social network’s “Groups you should join” and “Discover” algorithms.
  24. Dec 2020
    1. Recent patent filings show that Microsoft has been exploring additional ideas to monitor workers in the interest of organizational productivity. One filing describes a “meeting insight computing system” that would generate a quality score for a meeting using data such as body language, facial expressions, room temperature, time of day, and number of people in the meeting.

      So this will require that you have video turned on. How will they sell this to employees? "You need to turn your video on so that the algorithm can generate an accurate meeting quality score using your body language and facial expression."

      Sounds perfect. Absolutely no concerns about privacy violations, etc. in this product.

  25. Oct 2020
    1. the aggregation algorithm is roughly equivalent to the following pseudo code
    1. A more active stance by librarians, journalists, educators, and others who convey truth-seeking habits is essential.

      In some sense these people can also be viewed as aggregators and curators of sorts. How can their work be aggregated and be used to compete with the poor algorithms of social media?

    1. When Wojcicki took over, in 2014, YouTube was a third of the way to the goal, she recalled in investor John Doerr’s 2018 book Measure What Matters.“They thought it would break the internet! But it seemed to me that such a clear and measurable objective would energize people, and I cheered them on,” Wojcicki told Doerr. “The billion hours of daily watch time gave our tech people a North Star.” By October, 2016, YouTube hit its goal.

      Obviously they took the easy route. You may need to measure what matters, but getting to that goal by any means necessary or using indefensible shortcuts is the fallacy here. They could have had that North Star, but it's the means they used by which to reach it that were wrong.

      This is another great example of tech ignoring basic ethics to get to a monetary goal. (Another good one is Mark Zuckerberg's "connecting people" mantra, when what it should be is "connecting people for good" or "creating positive connections".)

    2. The conundrum isn’t just that videos questioning the moon landing or the efficacy of vaccines are on YouTube. The massive “library,” generated by users with little editorial oversight, is bound to have untrue nonsense. Instead, YouTube’s problem is that it allows the nonsense to flourish. And, in some cases, through its powerful artificial intelligence system, it even provides the fuel that lets it spread.

      This is a great summation of the issue.

    1. Safiya Noble, Algorithms of Oppression (New York: New York University Press, 2018). See also Mozilla’s 2019 Internet Health Report at https://internethealthreport.org/2019/lets-ask-more-of-ai/.
    1. You know Goethe's (or hell, Disney's) story of The Sorcerer's Apprentice? Look it up. It'll help. Because Mark Zuckerberg is both the sorcerer and the apprentice. The difference with Zuck is that he doesn't have all the mastery that's in the sorcerer's job description. He can't control the spirits released by machines designed to violate personal privacy, produce echo chambers, and to rationalize both by pointing at how popular it all is with the billions who serve as human targets for messages (while saying as little as possible about the $billions that bad acting makes for the company).

      This is something I worry about with the IndieWeb movement sometimes. What will be the ultimate effect of everyone having their own site instead of relying on social media? In some sense it may have a one-to-one map to personal people (presuming there aren't armies of bot-sites) interacting. The other big portion of the puzzle that I often leave out is the black box algorithms that social silos run which have a significant influence on their users. Foreseeably one wouldn't choose to run such a black box algorithm on their own site and by doing so they take a much more measured and human approach to what they consume and spread out, in part because I hope they'll take more ownership of their own site.

    1. Many of the book’s essayists defend freedom of expression over freedom from obscenity. Says Rabbi Arthur Lelyveld (father of Joseph, who would become executive editor of The New York Times): “Freedom of expression, if it is to be meaningful at all, must include freedom for ‘that which we loathe,’ for it is obvious that it is no great virtue and presents no great difficulty for one to accord freedom to what we approve or to that to which we are indifferent.” I hear too few voices today defending speech of which they disapprove.

      I might take issue with this statement and possibly a piece of Jarvis' argument here. I agree that it's moral panic that there could be such a thing as "too much speech" because humans have a hard limit for how much they can individually consume.

      The issue I see is that while anyone can say almost anything, the problem becomes when a handful of monopolistic players like Facebook or YouTube can use algorithms to programmatically entice people to click on and consume fringe content in mass quantities, and that subtly but assuredly nudges the populace and electorate in an unnatural direction. Most of the history of human society and interaction has long tended toward a centralizing consensus in which we can manage to cohere. The large-scale effects of algorithmic-based companies putting a heavy hand on the scales are sure to create unintended consequences, and they're able to do it at scales that the Johnson and Nixon administrations only wish they had access to.

      If we look at this as an analogy to the evolution of weaponry, I might suggest we've just passed the border of single-shot handguns and into the era of machine guns. What is society to do when the next evolution occurs, into the era of social media atomic weapons?

  26. Sep 2020
  27. Jul 2020
    1. Beware online "filter bubbles"

      - Relevance right in front of you
      - The Internet means different things to different people
      - Algorithms edit the web based on what you have looked at in the past
      - "There is no standard Google anymore"
      - Personalizing news and search results to each user
      - "The Internet is showing us what it thinks we need to see, not necessarily what we need to see"
      - "Filter Bubble": the information you live in online; you don't decide what gets in, but you definitely don't see what gets left out
      - Mainly looking at what you click on first
      - Information junk food instead of an information balanced diet
      - Gatekeepers found a new way to gatekeep through algorithms
      - What does this do to democracy?
      - What sort of internet/web ethics need to be developed to get us through to the next thing?
      - Algorithms need to be transparent and to give us some control; need a sort of civic responsibility
      - The Internet needs to be a tool of democracy and access for ALL

  28. Apr 2020
  29. Dec 2019
    1. So what's our total time cost? O(n log₂ n). The log₂ n comes from the number of times we have to cut n in half to get down to sublists of just 1 element (our base case). The additional n comes from the time cost of merging all n items together each time we merge two sorted sublists.
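That cost analysis describes merge sort; a compact sketch makes both terms visible: the recursion depth contributes the log₂ n factor, and the merge loop contributes the n factor at each level.

```python
def merge_sort(items):
    """Classic O(n log n) merge sort: halve until single elements,
    then merge sorted sublists back together."""
    if len(items) <= 1:
        return items  # base case: a 0- or 1-element list is sorted
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # log2(n) levels of halving...
    right = merge_sort(items[mid:])
    # ...and an O(n) merge of the two sorted halves at each level
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5]))  # [1, 2, 5, 5, 9]
```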
    1. Alexander Samuel reflects on tagging and its origins as a backbone to the social web. Along with RSS, tags allowed users to connect and collate content using such tools as feed readers. This all changed with the advent of social media and the algorithmically curated news feed.

      Tags were used for discovery of specific types of content. Who needs that now that our new overlords of artificial intelligence and algorithmic feeds can tell us what we want to see?!

      Of course we still need tags!!! How are you going to know serendipitously that you need more poetry in your life until you run into the tag on a service like IndieWeb.xyz? An algorithmic feed is unlikely to notice--or at least in my decade of living with them I've yet to run into poetry in one.

  30. Nov 2019
    1. BFS looks at each adjacent node and doesn't consider the children of those adjacent nodes. DFS looks at an adjacent node and all of that node's children before moving on to the next adjacent node (and, in turn, that one's children).

      Difference between BFS (Breadth First Search) & DFS (Depth First Search)
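A small sketch of the two traversals on a toy adjacency list (the graph and names are illustrative): BFS uses a FIFO queue, so all neighbors come out before any of their children; DFS uses a LIFO stack, so one neighbor's subtree is exhausted before the next neighbor is visited.

```python
from collections import deque

graph = {  # illustrative adjacency list
    "A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": [],
}

def bfs(start):
    order, seen, queue = [], {start}, deque([start])
    while queue:
        node = queue.popleft()          # FIFO: breadth first
        order.append(node)
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order

def dfs(start):
    order, seen, stack = [], {start}, [start]
    while stack:
        node = stack.pop()              # LIFO: depth first
        order.append(node)
        for nxt in reversed(graph[node]):  # reversed so "B" is tried first
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return order

print(bfs("A"))  # ['A', 'B', 'C', 'D', 'E']  (level by level)
print(dfs("A"))  # ['A', 'B', 'D', 'C', 'E']  (B's subtree before C)
```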

  31. Jul 2019
  32. Jun 2019
    1. Search does not merely present pages but structures knowledge, and the results retrieved in a commercial search engine create their own particular material reality. Ranking is itself information that also reflects the political, social, and cultural values of the society that search engine companies operate within, a notion that is often obscured in traditional information science studies. Noble said that Google representatives usually say either that it’s the computer’s fault or that it’s an anomaly they can’t control.

    2. Noble describes entering the term “beautiful,” and shows a screen of pictures of white people. She entered “ugly”, and the results were a racial mix.

    3. She searched for “three black teenagers” in 2010 and got mug shots as the result. A search for “black girls” in that same year brought the viewer to porn sites.

    4. Noble focuses on degrading stereotypes of women of African descent as a prime example of these prejudices, which translate to overt racism.


  33. May 2019
    1. concepts that we have never yet imagined

      Has this been achieved by people, or have algorithms taken on this task and automated this process beyond our ability to directly interact with these concepts?

  34. Apr 2019
    1. One reason is that products are often designed in ways that make us act impulsively and against our better judgment. For example, suppose you have a big meeting at work tomorrow. Ideally, you want to spend some time preparing for it in the evening and then get a good night’s rest. But before you can do either, a notification pops up on your phone indicating that a friend tagged you on Facebook. “This will take a minute,” you tell yourself as you click on it. But after logging in, you discover a long feed of posts by friends. A few clicks later, you find yourself watching a YouTube video that one of them shared. As soon as the video ends, YouTube suggests other related and interesting videos. Before you know it, it’s 1:00 a.m., and it’s clear that you will need an all-nighter to get ready for the following morning’s meeting. This has happened to most of us.

      This makes me think about the question of social and moral responsibility: I understand that YouTube and Facebook didn't develop these algorithms with nefarious intent, but it is a very drug-like experience, and I know I'm not the only one who can relate to it.

  35. Mar 2019
    1. Roth, now 67, gravitated to matching markets, where parties must choose one another, through applications, courtship and other means. In 1995, he wrote a mathematical algorithm that greatly improved the efficiency of the system for matching medical school graduates to hospitals for their residencies. That work led him to improve the matching models for law clerkships, the hiring of newly minted economists, Internet auctions and sororities. “I’m a market designer,” he says. “Currently, I’m focused on kidneys. We’re trying to match more donor kidneys to people globally.”

      Interesting for many, many fields.
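      The residency-matching work described above builds on deferred acceptance (Gale-Shapley), the family of mechanisms Roth refined. A minimal one-to-one sketch, assuming illustrative names; Roth's actual systems handle many-to-one matching and further constraints:

```python
def stable_match(proposer_prefs, reviewer_prefs):
    """Gale-Shapley deferred acceptance (simplified one-to-one sketch).

    proposer_prefs / reviewer_prefs: dicts mapping each participant to
    their preference list over the other side. Returns proposer -> reviewer.
    """
    free = list(proposer_prefs)                  # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs} # index of next reviewer to try
    engaged = {}                                 # reviewer -> current proposer
    # precompute each reviewer's ranking of proposers for O(1) comparisons
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]    # best reviewer not yet tried
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p                       # reviewer tentatively accepts
        elif rank[r][p] < rank[r][engaged[r]]:   # reviewer prefers new proposer
            free.append(engaged[r])              # previous match becomes free
            engaged[r] = p
        else:
            free.append(p)                       # rejected, tries next choice
    return {p: r for r, p in engaged.items()}
```

      The resulting matching is stable: no proposer-reviewer pair would both prefer each other over their assigned partners.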

    1. If you do not like the price you’re being offered when you shop, do not take it personally: many of the prices we see online are being set by algorithms that respond to demand and may also try to guess your personal willingness to pay. What’s next? A logical next step is that computers will start conspiring against us. That may sound paranoid, but a new study by four economists at the University of Bologna shows how this can happen.
    1. We have developed quite a few concepts and methods for using the computer system to help us plan and supervise sophisticated courses of action, to monitor and evaluate what we do, and to use this information as direct feedback for modifying our planning techniques in the future.

      This reminds me of "personalized learning."

  36. Feb 2019
    1. I think it could be a big mistake to have the population at large play around with algorithms.

      Interesting that a trader, the person who'd most likely be on the winning side of inexperienced people playing with algorithmic finance, would be hesitant to release it on the world at large.

    1. In other words, when YouTube fine-tunes its algorithms, is it trying to end compulsive viewing, or is it merely trying to make people compulsively watch nicer things?

      YouTube's business interests are clearly rewarded by compulsive viewing. If it is even possible to distinguish "nicer" things, YouTube might have to go against its business interests if less-nice things DO lead to more compulsive viewing. Go even deeper, as Rob suggests below, and ask if viewing itself can shape both how (compulsive?) and what (nice or not-nice?) we view?

    1. Algorithms will privilege some forms of ‘knowing’ over others, and the person writing that algorithm is going to get to decide what it means to know… not precisely, like in the former example, but through their values. If they value knowledge that is popular, then knowledge slowly drifts towards knowledge that is popular.

      I'm so glad I read Dave's post after having just read Rob Horning's great post, "The Sea Was Not a Mask", also addressing algorithms and YouTube.

  37. Jan 2019
    1. Do we want technology to keep giving more people a voice, or will traditional gatekeepers control what ideas can be expressed?

      Part of the unstated problem here is that Facebook has supplanted the "traditional gatekeepers" and their black box feed algorithm is now the gatekeeper which decides what people in the network either see or don't see. Things that crazy people used to decry to a non-listening crowd in the town commons are now blasted from the rooftops, spread far and wide by Facebook's algorithm, and can potentially sway major elections.

      I hope they talk about this.

  38. Oct 2018
    1. Once products and, more important, people are coded as having certain preferences and tendencies, the feedback loops of algorithmic systems will work to reinforce these often flawed and discriminatory assumptions. The presupposed problem of difference will become even more entrenched, the chasms between people will widen.
    1. We want to make our model temporally-aware, as furtherinsights can be gathered by analyzing the temporal dy-namics of the user interactions.

      sounds exciting

    2. Reproducibility: We ran our experiment on a single computer, running a 3.2 GHz Intel Core i7 CPU, using PyTorch version 0.2.0.45. We ran the optimization on an NVIDIA GTX 670 GPU. We trained our model with the following parameters: = 0.04, = 0.01, K = 120. All code will be made available at publication time.

      reproducibility

  39. Sep 2018
  40. Aug 2018
    1. interest in understanding how web pages are ranked is foiled: in particular, users cannot know whether or not a high ranking is the result of payment – and again, such secrecy reduces trust and thereby the usability and accessibility of important information
    2. The basic dilemma is simple. If the algorithms are open – then webmasters (and anyone else) interested in having their websites appear at the top of a search result will be able to manipulate their sites so as to achieve that result: but such results would then be misleading in terms of genuine popularity, potential relevance to a searcher’s interests, etc., thereby reducing users’ trust in the search engine results and hence reducing the usability and accessibility of important information. On the other hand, if the algorithms are secret, then the legitimate public
  41. Jul 2018
    1. Leading thinkers in China argue that putting government in charge of technology has one big advantage: the state can distribute the fruits of AI, which would otherwise go to the owners of algorithms.
  42. Jun 2018
    1. use algorithms to decide on what individual users most wanted to see. Depending on our friendships and actions, the system might deliver old news, biased news, or news which had already been disproven.
    2. 2016 was the year of politicians telling us what we should believe, but it was also the year of machines telling us what we should want.
  43. Apr 2018
    1. ConvexHull

      In mathematics, the convex hull or convex envelope of a set X of points in the Euclidean plane or in a Euclidean space (or, more generally, in an affine space over the reals) is the smallest convex set that contains X. For instance, when X is a bounded subset of the plane, the convex hull may be visualized as the shape enclosed by a rubber band stretched around X. -Wikipedia
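      For a concrete picture of how a planar convex hull is computed, here is Andrew's monotone chain algorithm, an O(n log n) approach; this sketch is illustrative and not code from the annotated page:

```python
def convex_hull(points):
    """Return the convex hull vertices in counter-clockwise order
    (Andrew's monotone chain algorithm)."""
    pts = sorted(set(points))          # sort lexicographically, drop duplicates
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn at o
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                      # build lower hull left-to-right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right-to-left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints are shared, drop duplicates

print(convex_hull([(0, 0), (1, 1), (2, 2), (2, 0), (0, 2), (1, 0.5)]))
# -> [(0, 0), (2, 0), (2, 2), (0, 2)]  (the "rubber band" around the points)
```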

  44. Mar 2018
  45. May 2017
    1. How do we reassert humanity’s moral compass over these alien algorithms? We may need to develop a version of Isaac Asimov’s “Three Laws of Robotics” for algorithms.

      A proposed solution to bad effects of info algorithms.

  46. Apr 2017
  47. Mar 2017
    1. “Design it so that Google is crucial to creating a response rather than finding one,”

      With "Google" becoming generic for "search" today, it is critical that students understand that Google, a commercial entity, will present different results in search to different people based on previous searches. Eli Pariser's work on the filter bubble is helpful for demonstrating this.

  48. Feb 2017
    1. Algorithms are aimed at optimizing everything. They can save lives, make things easier and conquer chaos. Still, experts worry they can also put too much control in the hands of corporations and governments, perpetuate bias, create filter bubbles, cut choices, creativity and serendipity, and could result in greater unemployment
  49. Aug 2016
    1. A team at Facebook reviewed thousands of headlines using these criteria, validating each other’s work to identify a large set of clickbait headlines. From there, we built a system that looks at the set of clickbait headlines to determine what phrases are commonly used in clickbait headlines that are not used in other headlines. This is similar to how many email spam filters work.

      Though details are scarce, the very idea that Facebook would tackle this problem with both humans and algorithms is reassuring. The common argument about human filtering is that it doesn’t scale. The common argument about algorithmic filtering is that it requires good signal (though some transhumanists keep saying that things are getting better). So it’s useful to know that Facebook used so hybrid an approach. Of course, even algo-obsessed Google has used human filtering. Or, at least, human judgment to tweak their filtering algorithms. (Can’t remember who was in charge of this. Was a semi-frequent guest on This Week in Google… Update: Matt Cutts) But this very simple “we sat down and carefully identified stuff we think qualifies as clickbait before we fed the algorithm” is refreshingly clear.
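      The phrase-frequency idea in the quote can be sketched as follows. This is a toy illustration, assuming made-up function names and thresholds; Facebook's actual system is not public:

```python
from collections import Counter

def clickbait_phrases(clickbait, normal, n=2, min_ratio=2.0):
    """Find n-grams that appear far more often in labeled clickbait
    headlines than in ordinary headlines (hypothetical sketch)."""
    def ngram_counts(headlines):
        counts = Counter()
        for h in headlines:
            words = h.lower().split()
            counts.update(tuple(words[i:i + n])
                          for i in range(len(words) - n + 1))
        return counts

    cb = ngram_counts(clickbait)
    nm = ngram_counts(normal)
    # +1 smoothing avoids division by zero for phrases never seen in normal headlines
    return {g: cb[g] / (nm[g] + 1) for g in cb
            if cb[g] / (nm[g] + 1) >= min_ratio}
```

      The human-curated step in the quote corresponds to building the labeled `clickbait` set; the algorithmic step is the phrase-ratio scoring, much like a naive spam filter.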

  50. Jun 2016
  51. Apr 2016
    1. While there are assets that have not been assigned to a cluster
         If only one asset remains
           Add a new cluster; its only member is the remaining asset
         Else
           Find the asset with the Highest Average Correlation (HC) to all assets not yet assigned to a cluster
           Find the asset with the Lowest Average Correlation (LC) to all assets not yet assigned to a cluster
           If the correlation between HC and LC > Threshold
             Add a new cluster made of HC and LC
             Add to the cluster all other unassigned assets whose average correlation to HC and LC > Threshold
           Else
             Add a cluster made of HC
             Add to it all other unassigned assets whose correlation to HC > Threshold
             Add a cluster made of LC
             Add to it all other unassigned assets whose correlation to LC > Threshold
           End If
         End If
       End While

      Fast Threshold Clustering Algorithm

      Looking for equivalent source code to apply in smart content delivery and wireless network optimisation such as Ant Mesh via @KirkDBorne's status https://twitter.com/KirkDBorne/status/479216775410626560 http://cssanalytics.wordpress.com/2013/11/26/fast-threshold-clustering-algorithm-ftca/
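      The quoted pseudocode can be rendered in Python roughly as follows. This is a hypothetical sketch (the function name, tie-breaking, and averaging details are assumptions), not the reference implementation from the linked post:

```python
import numpy as np

def fast_threshold_clustering(corr, threshold):
    """Cluster assets from an n x n correlation matrix `corr`,
    following the FTCA pseudocode; returns lists of asset indices."""
    unassigned = list(range(corr.shape[0]))
    clusters = []
    while unassigned:
        if len(unassigned) == 1:
            # only one asset remaining: it forms its own cluster
            clusters.append([unassigned.pop()])
            continue
        # average correlation of each unassigned asset to the other unassigned assets
        sub = corr[np.ix_(unassigned, unassigned)]
        avg = (sub.sum(axis=1) - 1.0) / (len(unassigned) - 1)
        order = np.argsort(avg)
        hc = unassigned[int(order[-1])]  # highest average correlation (HC)
        lc = unassigned[int(order[0])]   # lowest average correlation (LC)
        if corr[hc, lc] > threshold:
            # HC and LC seed one cluster together
            cluster = [hc, lc]
            cluster += [a for a in unassigned if a not in cluster
                        and (corr[a, hc] + corr[a, lc]) / 2 > threshold]
            clusters.append(cluster)
            unassigned = [a for a in unassigned if a not in cluster]
        else:
            # HC and LC each seed their own cluster
            for seed in (hc, lc):
                cluster = [seed]
                cluster += [a for a in unassigned
                            if a != seed and corr[a, seed] > threshold]
                clusters.append(cluster)
                unassigned = [a for a in unassigned if a not in cluster]
    return clusters
```

      On a correlation matrix with two tight blocks and weak cross-correlation, the function recovers the two blocks as clusters.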

  52. Jan 2016
  53. May 2015
    1. Financial algorithms execute trades based on many variables, sometimes performing autonomously. And they move faster than human thought. Since the markets operate on uncertainties and probabilities, the algorithms presumably responded to the uncertainties and probabilities implied by the false tweet, but Karppi says it's impossible to know the specific genetics of these algorithms.