- Dec 2024
-
www.dreamsongs.com
-
https://web.archive.org/web/20241201071240/https://www.dreamsongs.com/WorseIsBetter.html
Richard P. Gabriel documents the history behind 'worse is better', a talk he gave in Cambridge in 1989. The role of LISP in the AI wave of that time stands out to me, as does the emergence of C++ and OOP on Unix. I remember doing a study project (~1991) with Andre and Martin in C++ v2 because we realised that with OOP it would be easier to solve, and the teacher thought it would be harder for us to use a different language.
Via Chris Aldrich in h., to Christian Tietze, https://forum.zettelkasten.de/discussion/comment/22075/#Comment_22075, to Christine Lemmer-Webber, https://dustycloud.org/blog/how-decentralized-is-bluesky/, to here.
- [ ] find an overview of AI history waves and what tech / languages drove them at the time
-
- Jan 2024
-
www.imdb.com
-
More, essentially all research in self-reference for decades has been in artificial intelligence, which is the device around which this plot turns. The language of AI is LISP, the name of the archvillain. In the heyday of LISP machines, the leading system was Flavors LISP Object Oriented Programming or: you guessed it -- Floop. I myself worked on a defense AI program that included the notion of a 'third brain,' that is, an observer living in a world different than (1) that of the world's creator, and (2) of the characters.
-
- May 2023
-
en.wikiquote.org
-
The object of the present volume is to point out the effects and the advantages which arise from the use of tools and machines; to endeavour to classify their modes of action; and to trace both the causes and the consequences of applying machinery to supersede the skill and power of the human arm.
[28] AI - precedents...
-
- Dec 2019
-
en.wikipedia.org
-
Hans Moravec argued in 1976 that computers were still millions of times too weak to exhibit intelligence. He suggested an analogy: artificial intelligence requires computer power in the same way that aircraft require horsepower. Below a certain threshold, it's impossible, but, as power increases, eventually it could become easy.[79] With regard to computer vision, Moravec estimated that simply matching the edge and motion detection capabilities of the human retina in real time would require a general-purpose computer capable of 10^9 operations/second (1000 MIPS).[80] As of 2011, practical computer vision applications require 10,000 to 1,000,000 MIPS. By comparison, the fastest supercomputer in 1976, the Cray-1 (retailing at $5 million to $8 million), was only capable of around 80 to 130 MIPS, and a typical desktop computer at the time achieved less than 1 MIPS.
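A quick back-of-envelope check of those figures (a sketch in Python; every number below is taken from the quoted passage, nothing else is assumed):

```python
# Back-of-envelope check of the MIPS figures quoted above.

retina_mips = 1_000                           # Moravec's estimate for real-time retina-level vision
cray1_mips_range = (80, 130)                  # Cray-1, fastest supercomputer of 1976
desktop_1976_mips = 1                         # upper bound for a typical 1976 desktop
vision_2011_mips_range = (10_000, 1_000_000)  # practical computer vision as of 2011

# Even the Cray-1 fell short of the retina threshold by roughly an order of magnitude:
print(retina_mips / cray1_mips_range[1],
      retina_mips / cray1_mips_range[0])      # ~7.7x to 12.5x short

# A 1976 desktop sat four to six orders of magnitude below what practical
# vision turned out to need, in line with "millions of times too weak":
print(vision_2011_mips_range[0] / desktop_1976_mips,
      vision_2011_mips_range[1] / desktop_1976_mips)  # 10,000x to 1,000,000x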
-
- Nov 2019
-
en.wikipedia.org
-
The neats: logic and symbolic reasoning

Logic was introduced into AI research as early as 1958, by John McCarthy in his Advice Taker proposal.[100] In 1963, J. Alan Robinson had discovered a simple method to implement deduction on computers: the resolution and unification algorithm. However, straightforward implementations, like those attempted by McCarthy and his students in the late 1960s, were especially intractable: the programs required astronomical numbers of steps to prove simple theorems.[101] A more fruitful approach to logic was developed in the 1970s by Robert Kowalski at the University of Edinburgh, and soon this led to the collaboration with French researchers Alain Colmerauer and Philippe Roussel, who created the successful logic programming language Prolog.[102] Prolog uses a subset of logic (Horn clauses, closely related to "rules" and "production rules") that permits tractable computation. Rules would continue to be influential, providing a foundation for Edward Feigenbaum's expert systems and the continuing work by Allen Newell and Herbert A. Simon that would lead to Soar and their unified theories of cognition.[103]

Critics of the logical approach noted, as Dreyfus had, that human beings rarely used logic when they solved problems. Experiments by psychologists like Peter Wason, Eleanor Rosch, Amos Tversky, Daniel Kahneman and others provided proof.[104] McCarthy responded that what people do is irrelevant. He argued that what is really needed are machines that can solve problems, not machines that think as people do.[105]

The scruffies: frames and scripts

Among the critics of McCarthy's approach were his colleagues across the country at MIT. Marvin Minsky, Seymour Papert and Roger Schank were trying to solve problems like "story understanding" and "object recognition" that required a machine to think like a person. In order to use ordinary concepts like "chair" or "restaurant" they had to make all the same illogical assumptions that people normally made. Unfortunately, imprecise concepts like these are hard to represent in logic. Gerald Sussman observed that "using precise language to describe essentially imprecise concepts doesn't make them any more precise."[106] Schank described their "anti-logic" approaches as "scruffy", as opposed to the "neat" paradigms used by McCarthy, Kowalski, Feigenbaum, Newell and Simon.[107]

In 1975, in a seminal paper, Minsky noted that many of his fellow "scruffy" researchers were using the same kind of tool: a framework that captures all our common sense assumptions about something. For example, if we use the concept of a bird, there is a constellation of facts that immediately come to mind: we might assume that it flies, eats worms and so on. We know these facts are not always true and that deductions using these facts will not be "logical", but these structured sets of assumptions are part of the context of everything we say and think. He called these structures "frames". Schank used a version of frames he called "scripts" to successfully answer questions about short stories in English.[108] Many years later object-oriented programming would adopt the essential idea of "inheritance" from AI research on frames.
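The frames-to-inheritance lineage is easy to see in code. Below is a minimal sketch in Python (a modern stand-in, not the LISP of the era; the class names and defaults are illustrative, not taken from any historical system) of Minsky-style frames: bundles of default assumptions that a more specific frame can override, which is essentially what object-oriented inheritance later formalized.

```python
# Minsky-style "frames" sketched as Python classes (illustrative only).
# A frame bundles the default assumptions that "immediately come to mind".

class Bird:
    flies = True    # a default assumption, not a logical certainty
    eats = "worms"

class Penguin(Bird):
    # A more specific frame overrides an inherited default; this
    # defeasible-default behaviour is what OOP later called inheritance.
    flies = False

generic = Bird()
opus = Penguin()
print(generic.flies, generic.eats)  # True worms  <- defaults apply
print(opus.flies, opus.eats)        # False worms <- one default overridden, one inherited
```

Note how the override makes conclusions drawn from the defaults revisable, exactly the "not logical" deduction the quoted passage describes.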
-
-
en.wikipedia.org
-
Bolt, Beranek and Newman (BBN) developed its own Lisp machine, named Jericho,[7] which ran a version of Interlisp. It was never marketed. Frustrated, the whole AI group resigned, and were hired mostly by Xerox. So, Xerox Palo Alto Research Center had, simultaneously with Greenblatt's own development at MIT, developed their own Lisp machines which were designed to run InterLisp (and later Common Lisp). The same hardware was used with different software also as Smalltalk machines and as the Xerox Star office system.
-
In 1979, Russell Noftsker, being convinced that Lisp machines had a bright commercial future due to the strength of the Lisp language and the enabling factor of hardware acceleration, proposed to Greenblatt that they commercialize the technology. In a counter-intuitive move for an AI Lab hacker, Greenblatt acquiesced, hoping perhaps that he could recreate the informal and productive atmosphere of the Lab in a real business. These ideas and goals were considerably different from those of Noftsker. The two negotiated at length, but neither would compromise. As the proposed firm could succeed only with the full and undivided assistance of the AI Lab hackers as a group, Noftsker and Greenblatt decided that the fate of the enterprise was up to them, and so the choice should be left to the hackers. The ensuing discussions of the choice divided the lab into two factions. In February 1979, matters came to a head. The hackers sided with Noftsker, believing that a commercial, venture-fund-backed firm had a better chance of surviving and commercializing Lisp machines than Greenblatt's proposed self-sustaining start-up. Greenblatt lost the battle.
-