427 Matching Annotations
  1. Jan 2018
    1. in the face of the naturalized destructiveness that has accompanied the Anthropocene, and of the emergence of the artificial as the ineluctable mode of human life, we need to respond by cultivating qualitatively new modes of becoming through the futurizing potential offered by the artificial. In this case 'possibility' means “negotiation with reality and not an escalation of what is”
  2. Dec 2017
    1. Most of the recent advances in AI depend on deep learning, which is the use of backpropagation to train neural nets with multiple layers ("deep" neural nets).

      Neural nets consist of layers of nodes, with edges from each node to the nodes in the next layer. The first and last layers are input and output. The output layer might only have two nodes, representing true or false. Each node holds a value representing how excited it is. Each edge has a value representing strength of connection, which determines how much of the excitement passes through.

      The edges in an untrained neural net start with random values. The training data consists of a series of samples that are already labeled. Each sample is fed forward through the net and the output is compared with its label; if the output is wrong, the edges are adjusted according to how much they contributed to the error. It's called backpropagation because the adjustment starts with the output nodes and works back toward the input nodes.

      Deep neural nets can be effective, but only for single specific tasks. And they need huge sets of training data. They can also be tricked rather easily. Worse, someone who has access to the net can discover ways of adding noise to images that will make the net "see" things that obviously aren't there.
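
      A minimal numpy sketch of that training loop, learning XOR (the two-layer architecture, learning rate, and data are illustrative choices of mine, not from the source):

      ```python
      # Tiny 2-layer net trained by backpropagation on XOR (illustrative sketch).
      import numpy as np

      rng = np.random.default_rng(0)

      # Labeled training samples.
      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
      y = np.array([[0], [1], [1], [0]], dtype=float)

      # Edges start with random values; biases start at zero.
      W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
      W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      for step in range(10000):
          # Forward pass: each node's "excitement" flows along the edges.
          h = sigmoid(X @ W1 + b1)
          out = sigmoid(h @ W2 + b2)

          # Backward pass: start at the output and work toward the input,
          # adjusting each edge by how much it contributed to the error.
          d_out = (out - y) * out * (1 - out)
          d_h = (d_out @ W2.T) * h * (1 - h)
          W2 -= 0.5 * h.T @ d_out
          b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
          W1 -= 0.5 * X.T @ d_h
          b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

      print(out.round(2))  # should approach [[0], [1], [1], [0]]
      ```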

  3. Aug 2017
    1. So this transforms how we do design. The human engineer now says what the design should achieve, and the machine says, "Here are the possibilities." The engineer's job is now to pick the one that best meets the goals of the design, which she, as a human, knows better than anyone else, using human judgment and expertise.

      A post on the Keras blog was talking about eventually using AI to generate computer programs to match certain specifications. Gruber is saying something very similar.

  4. Jun 2017
  5. Apr 2017
  6. Mar 2017
    1. Great overview and commentary. However, I would have liked some more insight into the ethical ramifications and potential destructiveness of an ASI-system as demonstrated in the movie.

  7. Feb 2017
  8. Jan 2017
    1. According to a 2015 report by Incapsula, 48.5% of all web traffic is from bots.

      ...

      The majority of bots are "bad bots" - scrapers that are harvesting emails and looking for content to steal, DDoS bots, hacking tools that are scanning websites for security vulnerabilities, spammers trying to sell the latest diet pill, ad bots that are clicking on your advertisements, etc.

      ...

      Content on websites such as dev.to are reposted elsewhere, word-for-word, by scrapers programmed by Black Hat SEO specialists.

      ...

      However, a new breed of scrapers exists - intelligent scrapers. They can search websites for sentences containing certain keywords, and then rewrite those sentences using "article spinning" techniques.
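
      A toy sketch of what such a scraper might do, assuming the naive approach of keyword filtering plus synonym substitution (real spinning tools are far more sophisticated; the word list here is invented):

      ```python
      # Naive "article spinning": harvest sentences by keyword, then
      # swap in synonyms to evade duplicate-content detection.
      import re

      SYNONYMS = {"fast": "quick", "build": "construct", "simple": "easy"}

      def spin(sentence):
          # Replace each known word with its synonym.
          pattern = r"\b(" + "|".join(SYNONYMS) + r")\b"
          return re.sub(pattern, lambda m: SYNONYMS[m.group(1)], sentence)

      def harvest(text, keyword):
          # Keep only sentences containing the keyword, rewritten.
          sentences = re.split(r"(?<=[.!?])\s+", text)
          return [spin(s) for s in sentences if keyword in s.lower()]

      print(harvest("Python makes it simple to build fast tools.", "python"))
      # ['Python makes it easy to construct quick tools.']
      ```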

  9. Dec 2016
    1. The team on Google Translate has developed a neural network that can translate language pairs for which it has not been directly trained. "For example, if the neural network has been taught to translate between English and Japanese, and English and Korean, it can also translate between Japanese and Korean without first going through English."

  10. Sep 2016
  11. Jun 2016
  12. May 2016
  13. Apr 2016
    1. We should have control of the algorithms and data that guide our experiences online, and increasingly offline. Under our guidance, they can be powerful personal assistants.

      Big business has been very militant about protecting their "intellectual property". Yet they regard every detail of our personal lives as theirs to collect and sell at whim. What a bunch of little darlings they are.

  14. Jan 2016
  15. Dec 2015
    1. OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.
    2. Big Sur is our newest Open Rack-compatible hardware designed for AI computing at a large scale. In collaboration with partners, we've built Big Sur to incorporate eight high-performance GPUs
  16. Nov 2015
    1. TPOT is a Python tool that automatically creates and optimizes machine learning pipelines using genetic programming. Think of TPOT as your “Data Science Assistant”: TPOT will automate the most tedious part of machine learning by intelligently exploring thousands of possible pipelines, then recommending the pipelines that work best for your data.

      TPOT (Tree-based Pipeline Optimization Tool): https://github.com/rhiever/tpot. Built on numpy, scipy, pandas, scikit-learn, and deap.
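
      A minimal usage sketch based on the project's README (the dataset and parameter values are illustrative, and the exact API may differ between TPOT versions):

      ```python
      # Let TPOT search for a good scikit-learn pipeline, then export it.
      from sklearn.datasets import load_digits
      from sklearn.model_selection import train_test_split
      from tpot import TPOTClassifier

      digits = load_digits()
      X_train, X_test, y_train, y_test = train_test_split(
          digits.data, digits.target, train_size=0.75, test_size=0.25)

      # Genetic programming explores many candidate pipelines.
      tpot = TPOTClassifier(generations=5, population_size=20, verbosity=2)
      tpot.fit(X_train, y_train)
      print(tpot.score(X_test, y_test))

      # Export the best pipeline found as plain scikit-learn code.
      tpot.export('tpot_digits_pipeline.py')
      ```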

  17. Jul 2015
  18. May 2015
    1. In this work, Lee and Brunskill fit a separate Knowledge Tracing model to each student’s data. This involved fitting four parameters: initial probability of mastery, probability of transitioning from unmastered to mastered, probability of giving an incorrect answer if the student has mastered the skill, and probability of giving a correct answer if the student has not mastered the skill. Each student’s model is fit using Expectation Maximization (EM) combined with a brute-force search.

      First comment
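
      A sketch of the standard Bayesian Knowledge Tracing update built from the four parameters the quote names (function and parameter names are mine; the paper's notation may differ):

      ```python
      # One BKT step: update P(mastered) after observing a single answer.
      def bkt_update(p_mastery, correct, p_transit, p_slip, p_guess):
          if correct:
              num = p_mastery * (1 - p_slip)             # mastered and didn't slip
              den = num + (1 - p_mastery) * p_guess      # or unmastered but guessed
          else:
              num = p_mastery * p_slip                   # mastered but slipped
              den = num + (1 - p_mastery) * (1 - p_guess)
          posterior = num / den
          # The student may also transition from unmastered to mastered.
          return posterior + (1 - posterior) * p_transit

      # Example: initial mastery 0.2, and the student answers correctly.
      print(round(bkt_update(0.2, True, p_transit=0.1,
                             p_slip=0.1, p_guess=0.2), 3))
      ```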

  19. Nov 2014
    1. The Most Terrifying Thought Experiment of All Time

      TLDR: A thought experiment such that, merely by knowing about it, you are contributing to humanity's enslavement to an all-powerful AI.

  20. Feb 2014
    1. Point 3 is almost certainly the one that still bugs Doug. All sorts of mechanisms and utilities are around and used (source code control, registries, WWW search engines, and on and on), but the problem of indexing and finding relevant information is tougher today than ever before, even on one's own hard disk, let alone the WWW.

      I would agree that "the problem of indexing and finding relevant information is tougher today than ever before" ... and especially "on one's own hard disk".

      Vannevar Bush recognized the problem of artificial systems of indexing long before McIlroy pulled this page from his typewriter in 1964, and here we are 50 years later using the same kind of filesystem indexing systems and wondering why it's harder than ever to find information on our own hard drives.

    2. The real heart of the matter of selection, however, goes deeper than a lag in the adoption of mechanisms by libraries, or a lack of development of devices for their use. Our ineptitude in getting at the record is largely caused by the artificiality of systems of indexing. When data of any sort are placed in storage, they are filed alphabetically or numerically, and information is found (when it is) by tracing it down from subclass to subclass. It can be in only one place, unless duplicates are used; one has to have rules as to which path will locate it, and the rules are cumbersome. Having found one item, moreover, one has to emerge from the system and re-enter on a new path. The human mind does not work that way. It operates by association. With one item in its grasp, it snaps instantly to the next that is suggested by the association of thoughts, in accordance with some intricate web of trails carried by the cells of the brain. It has other characteristics, of course; trails that are not frequently followed are prone to fade, items are not fully permanent, memory is transitory. Yet the speed of action, the intricacy of trails, the detail of mental pictures, is awe-inspiring beyond all else in nature.

      With the advent of Google Docs we're finally moving away from the archaic indexing mentioned here. The filesystem metaphor was simple and dominated how everyone managed their data, which extended into how we developed web content as well.

      The declaration that Hierarchical File Systems are Dead has led to better systems of tagging and search, but we're still far from where we need to be since there is still a heavy focus on the document as a whole instead of also the content within the document.

      The linearity of printed books is even more treacherously entrenched in our minds than the classification systems used by libraries to store those books.

      One day maybe we'll liberate every piece of content from every layer of its concentric cages: artificial systems of indexing, books, web pages, paragraphs, even sentences and words themselves. Only then will we be able to re-dress those thoughts automatically into those familiar and comforting forms that keep our thoughts caged.