6 Matching Annotations
  1. Last 7 days
    1. for - consciousness, AI, Alex Gomez-Marin, neuroscience, hard problem of consciousness, nonmaterialism, materialism - progress trap - transhumanism - AI - war on consciousness

      Summary
      - Alex advocates a nonmaterialist perspective on consciousness and argues that there is an urgency to educate the public on this perspective, due to a transhumanist agenda that could threaten the future of humanity.
      - He argues that the question of whether consciousness is best explained by materialism is central to resolving the threat posed by the direction AI takes.
      - In this regard, he interprets the very words David Chalmers chose to articulate the Hard Problem of Consciousness as revealing the assumption of a materialist reference frame.
      - He used a legal metaphor to illustrate his point: when a lawyer poses the question "How did you kill that person?", the question entraps the accused; it already contains the assumption of guilt.
      - I would characterize his role as that of a scientist who is an authentic seeker of wisdom: one who will learn from a young child if they have something valuable to teach, and will help educate a senior if they have something to learn.
      - The efficacy of timebinding depends on authenticity and is harmed by dogma.

  2. Jun 2024
  3. Jun 2023
  4. Nov 2019
    1. In 2001, AI founder Marvin Minsky asked "So the question is why didn't we get HAL in 2001?"[167] Minsky believed that the answer is that the central problems, like commonsense reasoning, were being neglected, while most researchers pursued things like commercial applications of neural nets or genetic algorithms. John McCarthy, on the other hand, still blamed the qualification problem.[168] For Ray Kurzweil, the issue is computer power and, using Moore's Law, he predicted that machines with human-level intelligence will appear by 2029.[169] Jeff Hawkins argued that neural net research ignores the essential properties of the human cortex, preferring simple models that have been successful at solving simple problems.[170] There were many other explanations and for each there was a corresponding research program underway.
    2. Eventually the earliest successful expert systems, such as XCON, proved too expensive to maintain. They were difficult to update, they could not learn, they were "brittle" (i.e., they could make grotesque mistakes when given unusual inputs), and they fell prey to problems (such as the qualification problem) that had been identified years earlier. Expert systems proved useful, but only in a few special contexts.
    3. In 1979, Russell Noftsker, being convinced that Lisp machines had a bright commercial future due to the strength of the Lisp language and the enabling factor of hardware acceleration, proposed to Greenblatt that they commercialize the technology.[citation needed] In a counter-intuitive move for an AI Lab hacker, Greenblatt acquiesced, hoping perhaps that he could recreate the informal and productive atmosphere of the Lab in a real business. These ideas and goals were considerably different from those of Noftsker. The two negotiated at length, but neither would compromise. As the proposed firm could succeed only with the full and undivided assistance of the AI Lab hackers as a group, Noftsker and Greenblatt decided that the fate of the enterprise was up to them, and so the choice should be left to the hackers. The ensuing discussions of the choice divided the lab into two factions. In February 1979, matters came to a head. The hackers sided with Noftsker, believing that a commercial venture fund-backed firm had a better chance of surviving and commercializing Lisp machines than Greenblatt's proposed self-sustaining start-up. Greenblatt lost the battle.