580 Matching Annotations
  1. Nov 2023
    1. As an ex-Viv (w/ Siri team) eng, let me help ease everyone's future trauma as well with the Fundamentals of Assisted Intelligence. Make no mistake, OpenAI is building a new kind of computer, beyond just an LLM for a middleware / frontend. Key parts they'll need to pull it off:… https://t.co/uIbMChqRF9

      — Rob Phillips 🤖🦾 (@iwasrobbed) October 29, 2023
  2. Oct 2023
    1. Wang et al. "Scientific discovery in the age of artificial intelligence", Nature, 2023.

      A paper about the current state of using AI/ML for scientific discovery, connected with the AI4Science workshops at major conferences.

      (NOTE: since Springer/Nature doesn't allow public PDFs to be linked without a paywall, we can't use Hypothesis directly on the PDF of the paper. This link is to the website version, which is what we'll use to guide discussion during the reading group.)

    1. Envisioning the next wave of emergent AI

      Are we stretching too far by saying that AI is currently emergent? Isn't this like saying that the card indexes of the early 20th century were computers? In reality they were data storage, and the "computing" took place when humans did the actual data processing/thinking to come up with new results.

      Emergence would actually seem to be the point at which the AI takes its own output and continues (successfully) processing it.

  3. Sep 2023
    1. R.U.R.: Rossum’s Universal Robots, drama in three acts by Karel Čapek, published in 1920 and performed in 1921. This cautionary play, for which Čapek invented the word robot (derived from the Czech word for forced labour), involves a scientist named Rossum who discovers the secret of creating humanlike machines. He establishes a factory to produce and distribute these mechanisms worldwide. Another scientist decides to make the robots more human, which he does by gradually adding such traits as the capacity to feel pain. Years later, the robots, who were created to serve humans, have come to dominate them completely.

      https://www.britannica.com/topic/RUR

    1. What do you do then? You can take the book to someone else who, you think, can read better than you, and have him explain the parts that trouble you. ("He" may be a living person or another book-a commentary or textbook. )

      This may be an interesting use case for artificial intelligence tools like ChatGPT which can provide the reader of complex material with simplified synopses to allow better penetration of the material (potentially by removing jargon, argot, etc.)

    2. Active Reading

      He then pushes a button and "plays back" the opinion whenever it seems appropriate to do so. He has performed acceptably without having had to think.

      This seems to be a reasonable argument to make for those who ask, why read? why take notes? especially when we can use search and artificial intelligence to do the work for us. Can we really?

  4. Aug 2023
  5. Jul 2023
    1. Epstein, Ziv, Hertzmann, Aaron, Herman, Laura, Mahari, Robert, Frank, Morgan R., Groh, Matthew, Schroeder, Hope et al. "Art and the science of generative AI: A deeper dive." ArXiv, (2023). Accessed July 21, 2023. https://doi.org/10.1126/science.adh4451.

      Abstract

      A new class of tools, colloquially called generative AI, can produce high-quality artistic media for visual arts, concept art, music, fiction, literature, video, and animation. The generative capabilities of these tools are likely to fundamentally alter the creative processes by which creators formulate ideas and put them into production. As creativity is reimagined, so too may be many sectors of society. Understanding the impact of generative AI - and making policy decisions around it - requires new interdisciplinary scientific inquiry into culture, economics, law, algorithms, and the interaction of technology and creativity. We argue that generative AI is not the harbinger of art's demise, but rather is a new medium with its own distinct affordances. In this vein, we consider the impacts of this new medium on creators across four themes: aesthetics and culture, legal questions of ownership and credit, the future of creative work, and impacts on the contemporary media ecosystem. Across these themes, we highlight key research questions and directions to inform policy and beneficial uses of the technology.

    1. Inserting a maincards with lack of memory Lihmann's system of inserting a maincard is fundamentally based on a person's ability to remember there are other maincards already inserted that would be related to the card you want to insert. What if you have very poor memory like many people do, what is your process of inserting maincards? In my Antinet I handled it in an enhanced method from what I did in my 27 yrs of research notebooks which is very different then Lihmann's method.

      reply to u/drogers8 at https://www.reddit.com/r/antinet/comments/14ot4na/inserting_a_maincards_with_lack_of_memory/

      I would submit that your first sentence is wildly false.

      What topic(s) cover your newly made cards? Look those up in your index and find where those potentially related cards are (whether you remember them or not). Go to that top level card listed in your index and see what's there or in the section of cards that come after it. Find the best card in that branch and file your new card(s) as appropriate. If necessary, cross-index them with sub-topics in your index to make them more findable in the future. If you don't find one or more of those topics in your index, then create a new branch and start an index entry for one or more of those terms. (You'll find yourself making lots of index entries to start, but it will eventually slow down—though it shouldn't stop—as your collection grows.)

      Ideally, with regular use, you'll likely remember more and more, especially for active areas you're really interested in. However, take comfort that the system is designed to let you forget everything! This forgetting will actually help create future surprise as well as serendipity that will actually be beneficial for potentially generating new ideas as you use (and review) your notes.

      And if you don't believe me, consider that Alberto Cevolini edited an entire book broadly about these techniques, including an entire chapter on Luhmann, which he aptly named Forgetting Machines!

  6. Jun 2023
  7. learn-us-east-1-prod-fleet01-xythos.content.blackboardcdn.com
    1. The problem with that presumption is that people are all too willing to lower standards in order to make the purported newcomer appear smart. Just as people are willing to bend over backwards and make themselves stupid in order to make an AI interface appear smart

      AI has recently become such a big thing in our lives. For a while I was seeing ChatGPT and Snapchat AI all over the media. I feel like people ask these sites stupid questions that they already know the answer to because they don't want to take a few minutes to think about the answer. I found a website stating how many people use AI, and not surprisingly, it shows that 27% of Americans say they use it several times a day. I can't imagine how many people use it per year.

    1. there is a scenario, possibly a likely scenario, where we live in a utopia where we really never have to worry again, where we stop messing up our planet, because intelligence is not a bad commodity; more intelligence is good. The problems in our planet today are not because of our intelligence, they are because of our limited intelligence
      • limited (machine) intelligence

        • cannot help but exist
        • if the original (human) authors of the AI code are themselves limited in their intelligence
      • comment

        • this limitation is essentially what will result in AI progress traps
        • Indeed, progress and its shadow artefacts, progress traps, are the proper framework to analyze the existential dilemma posed by AI
    1. I’ve also found that Tailwind works extremely well as an extension of my memory. I’ve uploaded my “spark file” of personal notes that date back almost twenty years, and using that as a source, I can ask remarkably open-ended questions—“did I ever write anything about 19th-century urban planning” or “what was the deal with that story about Houdini and Conan Doyle?”—and Tailwind will give me a cogent summary weaving together information from multiple notes. And it’s all accompanied by citations if I want to refer to the original direct quotes for whatever reason.

      This sounds like the sort of personalized AI tool I've been wishing for since the early ChatGPT models, if not from even earlier dreams that predate them....

  8. May 2023
    1. Deep Learning (DL): A Technique for Implementing Machine Learning. Subfield of ML that uses specialized techniques involving multi-layer (2+) artificial neural networks. Layering allows cascaded learning and abstraction levels (e.g. line -> shape -> object -> scene). Computationally intensive, enabled by clouds, GPUs, and specialized HW such as FPGAs, TPUs, etc.

      [29] AI - Deep Learning
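
      As a minimal sketch of the "layering" idea above (a toy example of my own, not from the slide): each layer transforms the previous layer's output, so later layers can operate on progressively higher-level abstractions.

      ```python
      # Toy multi-layer forward pass; the layer sizes and the
      # "pixels -> edges -> shapes -> objects" mapping are illustrative only.
      import numpy as np

      rng = np.random.default_rng(0)

      def relu(x):
          return np.maximum(0.0, x)

      layer_sizes = [64, 32, 16, 4]  # e.g. pixels -> edges -> shapes -> object scores
      weights = [rng.normal(scale=0.1, size=(m, n))
                 for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
      biases = [np.zeros(n) for n in layer_sizes[1:]]

      def forward(x):
          activation = x
          for W, b in zip(weights, biases):
              activation = relu(activation @ W + b)  # each layer feeds the next
          return activation

      print(forward(rng.normal(size=64)))  # four "object" scores for a toy 64-pixel input
      ```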

    1. The object of the present volume is to point out the effects and the advantages which arise from the use of tools and machines ;—to endeavour to classify their modes of action ;—and to trace both the causes and the consequences of applying machinery to supersede the skill and power of the human arm.

      [28] AI - precedents...

    1. An AI model taught to view racist language as normal is obviously bad. The researchers, though, point out a couple of more subtle problems. One is that shifts in language play an important role in social change; the MeToo and Black Lives Matter movements, for example, have tried to establish a new anti-sexist and anti-racist vocabulary. An AI model trained on vast swaths of the internet won’t be attuned to the nuances of this vocabulary and won’t produce or interpret language in line with these new cultural norms. It will also fail to capture the language and the norms of countries and peoples that have less access to the internet and thus a smaller linguistic footprint online. The result is that AI-generated language will be homogenized, reflecting the practices of the richest countries and communities.

      [21] AI Nuances

    1. Tagging and linking with AI (Napkin.one) by Nicole van der Hoeven

      https://www.youtube.com/watch?v=p2E3gRXiLYY

      Nicole underlines the value of a good user interface for traversing one's notes. She'd had issues with tagging things in Obsidian using their #tag functionality, but never with their [[WikiLink]] functionality. Something about the autotagging done by Napkin's artificial intelligence makes the process easier for her. Some of this may be down to how their user interface makes it easier/more intuitive as well as how it changes and presents related notes in succession.

      Most interesting however is the visual presentation of notes and tags in conjunction with an outliner for taking one's notes and composing a draft using drag and drop.

      Napkin as a visual layer over tooling like Obsidian, Logseq, et al. would be a much more compelling choice for me in terms of taking my pre-existing data and doing something useful with it rather than just creating yet another digital copy of all my things (and potentially needing sync to keep them up to date).

      What is Napkin doing with all of their users' data?

  9. Apr 2023
    1. Abstract

      Recent innovations in artificial intelligence (AI) are raising new questions about how copyright law principles such as authorship, infringement, and fair use will apply to content created or used by AI. So-called “generative AI” computer programs—such as Open AI’s DALL-E 2 and ChatGPT programs, Stability AI’s Stable Diffusion program, and Midjourney’s self-titled program—are able to generate new images, texts, and other content (or “outputs”) in response to a user’s textual prompts (or “inputs”). These generative AI programs are “trained” to generate such works partly by exposing them to large quantities of existing works such as writings, photos, paintings, and other artworks. This Legal Sidebar explores questions that courts and the U.S. Copyright Office have begun to confront regarding whether the outputs of generative AI programs are entitled to copyright protection as well as how training and using these programs might infringe copyrights in other works.

    1. The result of working with this technique for a long time is a kind of second memory, an alter ego with which you can always communicate. It has, similar to our own memory, no pre-planned comprehensive order, no hierarchy, and surely no linear structure like a book. And by that very fact, it is alive independently of its author. The entire note collection can only be described as a mess, but at least it is a mess with a non-arbitrary internal structure.

      Luhmann attributes (an independent) life to his zettelkasten. It is effectuated by internal branching, opportunities for links or connections, and a register as well as lack of pre-planned comprehensive order, lack of hierarchy, and lack of linear structure.

      Which of these is necessary for other types of "life"? Can any be removed? Compare with other systems.

  10. Mar 2023
    1. Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23. FAccT ’21. New York, NY, USA: Association for Computing Machinery, 2021. https://doi.org/10.1145/3442188.3445922.

      Would the argument here for stochastic parrots also potentially apply to or could it be abstracted to Markov monkeys?

    1. A.I. Is Mastering Language. Should We Trust What It Says? by Steven Johnson, art by Nikita Iziev

      Johnson does a good job of looking at the basic state of artificial intelligence and the history of large language models and specifically ChatGPT and asks some interesting ethical questions, but in a way which may not prompt any actual change.


      When we write about technology and the benefits and wealth it might bring, do we do too much ethics washing, papering over the problems and allowing the bad things to come to pass too easily?

    2. We know from modern neuroscience that prediction is a core property of human intelligence. Perhaps the game of predict-the-next-word is what children unconsciously play when they are acquiring language themselves: listening to what initially seems to be a random stream of phonemes from the adults around them, gradually detecting patterns in that stream and testing those hypotheses by anticipating words as they are spoken. Perhaps that game is the initial scaffolding beneath all the complex forms of thinking that language makes possible.

      Is language acquisition a very complex method of pattern recognition?

    3. Another way to widen the pool of stakeholders is for government regulators to get into the game, indirectly representing the will of a larger electorate through their interventions.

      This is certainly "a way", but history has shown, particularly in the United States, that government regulators are unlikely to get involved until it's far too late, if at all. Typically they regulate only after an industry has matured, and only when massive failure may cause issues for the wealthy, and then the "regulation" is to bail them out.

      Suggesting this here is so pie-in-the-sky that it only creates a false hope (hope washing?) for the powerless. Is this sort of hope washing a recurring part of

    4. Whose values do we put through the A.G.I.? Who decides what it will do and not do? These will be some of the highest-stakes decisions that we’ve had to make collectively as a society.’’

      A similar set of questions might be asked of our political system. At present, the oligopolistic nature of our electoral system is heavily biasing our direction as a country.

      We're heavily underrepresented on a huge number of axes.

      How would we change our voting and representation systems to better represent us?

    1. the apocalypse they refer to is not some kind of sci-fi takeover like Skynet, or whatever those researchers thought had a 10 percent chance of happening. They’re not predicting sentient evil robots. Instead, they warn of a world where the use of AI in a zillion different ways will cause chaos by allowing automated misinformation, throwing people out of work, and giving vast power to virtually anyone who wants to abuse it. The sin of the companies developing AI pell-mell is that they’re recklessly disseminating this mighty force.

      Not Skynet, but social disruption

  11. Feb 2023
    1. Sam Matla talks about the collector's fallacy in a negative light, and for many/most, he might be right. But for some, collecting examples and evidence of particular things is crucially important. The key is to have some idea of what you're collecting and why.

      Historians collecting small facts over time may seem this way, but out of their collection can emerge patterns which otherwise would never have been seen.

      cf: Keith Thomas article

      concrete examples of this to show the opposite?

      Relationship to the idea of AI coming up with black box solutions via their own method of diffuse thinking

    1. Certainly, computerization might seem to resolve some of the limitations of systems like Deutsch’s, allowing for full-text search or multiple tagging of individual data points, but an exchange of cards for bits only changes the method of recording, leaving behind the reality that one must still determine what to catalogue, how to relate it to the whole, and the overarching system.

      Despite the affordances of recording, searching, and tagging made by computerized note-taking systems, the problem still remains of determining what to search for or collect and how to relate the smaller parts to the whole.


      customer relationship management vs. personal knowledge management (or perhaps more importantly knowledge relationship management, the relationship of individual facts to the overall whole) suggested by autocomplete on "knowl..."

    2. One might then say that Deutsch’s index developed at the height of the pursuit of historical objectivity and constituted a tool of historical research not particularly innovative or limited to him alone, given that the use of notecards was encouraged by so many figures, and it crystallized a positivistic methodology on its way out.

      Can zettelkasten be used for other than positivistic methodologies?

    1. In his 1976 book, Computer Power and Human Reason: From Judgment to Calculation, the computer scientist Joseph Weizenbaum observed some interesting tendencies in his fellow humans. In one now-famous anecdote, he described his secretary’s early interactions with his program ELIZA, a proto-chatbot he created in 1966.

      Description of Joseph Weizenbaum's ELIZA program

      When rule-based artificial intelligence was the state-of-the-art.

    1. https://www.cyberneticforests.com/ai-images

      Critical Topics: AI Images is an undergraduate class delivered for Bradley University in Spring 2023. It is meant to provide an overview of the context of AI art making tools and connects media studies, new media art, and data ethics with current events and debates in AI and generative art. Students will learn to think critically about these tools by using them: understand what they are by making work that reflects the context and histories of the tools.

    1. Writers struggled with the fickle nature of the system. They often spent a great deal of time wading through Wordcraft's suggestions before finding anything interesting enough to be useful. Even when writers struck gold, it proved challenging to consistently reproduce the behavior. Not surprisingly, writers who had spent time studying the technical underpinnings of large language models or who had worked with them before were better able to get the tool to do what they wanted.

      Because one may need to spend an inordinate amount of time filtering through potentially bad suggestions of artificial intelligence, the time and energy spent keeping a commonplace book or zettelkasten may pay off magnificently in the long run.

    2. Many authors noted that generations tended to fall into clichés, especially when the system was confronted with scenarios less likely to be found in the model's training data. For example, Nelly Garcia noted the difficulty in writing about a lesbian romance — the model kept suggesting that she insert a male character or that she have the female protagonists talk about friendship. Yudhanjaya Wijeratne attempted to deviate from standard fantasy tropes (e.g. heroes as cartographers and builders, not warriors), but Wordcraft insisted on pushing the story toward the well-worn trope of a warrior hero fighting back enemy invaders.

      Examples of artificial intelligence pushing toward pre-existing biases based on training data sets.

    3. “...it can be very useful for coming up with ideas out of thin air, essentially. All you need is a little bit of seed text, maybe some notes on a story you've been thinking about or random bits of inspiration and you can hit a button that gives you nearly infinite story ideas.”- Eugenia Triantafyllou

      Eugenia Triantafyllou is talking about crutches for creativity and inspiration, but seems to miss the value of collecting interesting tidbits along the road of life that one can use later. Instead, the emphasis here becomes one of relying on an artificial intelligence doing it for you at the "hit of a button". If this is the case, then why not just let the artificial intelligence do all the work for you?

      This is the area where the cultural loss of mnemonics used in orality or even the simple commonplace book will make us easier prey for (over-)reliance on technology.


      Is serendipity really serendipity if it's programmed for you?

    4. Wordcraft shined the most as a brainstorming partner and source of inspiration. Writers found it particularly useful for coming up with novel ideas and elaborating on them. AI-powered creative tools seem particularly well suited to sparking creativity and addressing the dreaded writer's block.

      Just as using a text for writing generative annotations (having a conversation with a text) is a useful exercise for writers and thinkers, creative writers can stand to have similar textual creativity prompts.

      Compare Wordcraft affordances with tools like Nabokov's card index (zettelkasten) method, Twyla Tharp's boxes, MadLibs, cadavre exquis, et al.

      The key is to have some sort of creativity catalyst so that one isn't working in a vacuum or facing the dreaded blank page.

    5. We like to describe Wordcraft as a "magic text editor". It's a familiar web-based word processor, but under the hood it has a number of LaMDA-powered writing features that reveal themselves depending on the user's activity.

      The engineers behind Wordcraft refer to it as a "magic text editor". This is a cop-out for many versus a more concrete description of what is actually happening under the hood of the machine.

      It's also similar to, though subtly different from, the idea of the "magic of note taking", by which writers are talking about the ideas of emergent creativity and combinatorial creativity which occur in that space.

    6. The application is powered by LaMDA, one of the latest generation of large language models. At its core, LaMDA is a simple machine — it's trained to predict the most likely next word given a textual prompt. But because the model is so large and has been trained on a massive amount of text, it's able to learn higher-level concepts.

      Is LaMDA really able to "learn higher-level concepts" or is it just a large, straightforward information-theoretic prediction engine?
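
      To make the "predict the most likely next word" framing concrete, here is a deliberately tiny sketch of my own (a bigram frequency model; LaMDA is a huge neural network, not a count table):

      ```python
      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat the cat ate the fish".split()

      # Count how often each word follows each other word.
      following = defaultdict(Counter)
      for current_word, next_word in zip(corpus, corpus[1:]):
          following[current_word][next_word] += 1

      def predict_next(word):
          """Return the most likely next word, or None if the word was never seen."""
          counts = following.get(word)
          return counts.most_common(1)[0][0] if counts else None

      print(predict_next("the"))  # 'cat' is the most frequent continuation in this toy corpus
      ```

      Scaling that same next-word objective up to billions of parameters and a web-sized corpus is what produces the appearance of "higher-level concepts", which is exactly what the question above probes.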

    1. I have to report that the AI did not make a useful or pleasant writing partner. Even a state-of-the-art language model cannot presently “understand” what a fiction writer is trying to accomplish in an evolving draft. That’s not unreasonable; often, the writer doesn’t know exactly what they’re trying to accomplish! Often, they are writing to find out.
    1. A Luhmann web article from 2001-06-30!

      Berzbach, Frank. “Künstliche Intelligenz aus Holz.” Online magazine. Magazin für junge Forschung, June 30, 2001. https://sciencegarden.net/kunstliche-intelligenz-aus-holz/.


      Interesting to see the stark contrast in zettelkasten method here in an article about Luhmann versus the discussions within the blogosphere, social media, and other online spaces circa 2018-2022.


      ᔥ[[Daniel Lüdecke]] in Arbeiten mit (elektronischen) Zettelkästen at 2013-08-30 (accessed:: 2023-02-10 06:15:58)

    1. The breakthroughs are all underpinned by a new class of AI models that are more flexible and powerful than anything that has come before. Because they were first used for language tasks like answering questions and writing essays, they’re often known as large language models (LLMs). OpenAI’s GPT3, Google’s BERT, and so on are all LLMs. But these models are extremely flexible and adaptable. The same mathematical structures have been so useful in computer vision, biology, and more that some researchers have taken to calling them "foundation models" to better articulate their role in modern AI.

      Foundation Models in AI

      Large language models are, more generally, “foundation models”. They got the “large language” name because language tasks are where they were first applied.

  12. Jan 2023
    1. To start with, a human must enter a prompt into a generative model in order to have it create content. Generally speaking, creative prompts yield creative outputs. “Prompt engineer” is likely to become an established profession, at least until the next generation of even smarter AI emerges.

      Generative AI requires prompt engineering, likely a new profession

      What domain experience does a prompt engineer need? How might this relate to specialty in librarianship?

    1. We appreciate this is a long span of time, and were concerned why any specific artificial memory system should last for so long.

      I suspect that artificial memory systems, particularly those that make some sort of logical sense, will indeed be long lasting ones.

      Given the long, unchanging history of the Acheulean hand axe, as an example, these sorts of ideas and practices were handed down from generation to generation.

      Given their ties to human survival, they're even more likely to persist.

      Indigenous memory systems in Aboriginal settings date to 65,000 years and also provide an example of long-lived systems.

    2. These may occur on rock walls, but were commonly engraved onto robust bones since at least the beginning of the European Upper Palaeolithic and African Late Stone Age, where it is obvious they served as artificial memory systems (AMS) or external memory systems (EMS) to coin the terms used in Palaeolithic archaeology and cognitive science respectively, exosomatic devices in which number sense is clearly evident (for definitions see d'Errico 1989; 1995a,b; d'Errico & Cacho 1994; d'Errico et al. 2017; Hayden 2021).

      Abstract marks have appeared on rock walls and engraved into robust bones as artificial memory systems (AMS) and external memory systems (EMS).

    1. Friedberg Judeo-Arabic Project, accessible at http://fjms.genizah.org. This project maintains a digital corpus of Judeo-Arabic texts that can be searched and analyzed.

      The Friedberg Judeo-Arabic Project contains a large corpus of Judeo-Arabic text which can be manually searched to help improve translations of texts, but it might also be profitably mined using information theoretic and corpus linguistic methods to provide larger group textual translations and suggestions at a grander scale.

    2. More recent additions to the website include a “jigsaw puzzle” screen that lets users view several items while playing with them to check whether they are “joins.” Another useful feature permits the user to split the screen into several panels and, thus, examine several items simultaneously (useful, e.g., when comparing handwriting in several documents). Finally, the “join suggestions” screen provides the results of a technologically groundbreaking computerized analysis of paleographic and codicological features that suggests possible joins or items written by the same scribe or belonging to the same codex.

      Computer means can potentially be used to check or suggest potential "joins" of fragments of historical documents.

      An example of some of this work can be seen in the Friedberg Genizah Project and their digital tools.

  13. Dec 2022
    1. The History of Zettelkasten The Zettelkasten method is a note-taking system developed by German sociologist and philosopher Niklas Luhmann. It involves creating a network of interconnected notes on index cards or in a digital database, allowing for flexible organization and easy access to information. The method has been widely used in academia and can help individuals better organize their thoughts and ideas.

      https://meso.tzyl.nl/2022/12/05/the-history-of-zettelkasten/

      If generated, it almost perfectly reflects the public consensus, but does a miserable job of reflecting deeper realities.

  14. Nov 2022
    1. Title: Artificial Intelligence and Democratic Values: Next Steps for the United States Content: AI appeared as a science at Dartmouth University, yet the USA still lacks a national AI policy, in contrast to Europe, where the Council of Europe is developing the first international AI convention and the EU earlier launched its data privacy law, the General Data Protection Regulation.

      In addition, China aims to become the “world leader in AI by 2030” and is developing digital infrastructure matched with its One Belt One Road project. The USA did not contribute to the UNESCO AI Recommendations; however, it works to promote democratic values and human rights and to integrate them into the governance of artificial intelligence.

      The USA and EU are facing challenges with transatlantic data flows, and with the Ukrainian crisis the situation has become more difficult. In order to reinstate leadership in AI policy, the United States should advance the policy initiative launched last year by the Office of Science and Technology Policy (OSTP) and strengthen efforts to support an AI Bill of Rights.

      EXCERPT: The USA believes that fostering public trust and confidence in AI technologies and protecting civil liberties, privacy, and American values in their application can establish responsible AI in the USA. Link: https://www.cfr.org/blog/artificial-intelligence-and-democratic-values-next-steps-united-states Topic: AI and Democratic Values Country: United States of America

  15. Oct 2022
    1. https://www.explainpaper.com/

      Another in a growing line of research tools for processing and making sense of research literature including Research Rabbit, Connected Papers, Semantic Scholar, etc.

      Functionality includes the ability to highlight sections of research papers and have natural language processing explain what those sections mean. There's also a "chat" that allows you to ask questions about the paper and attempts to return reasonable answers, an artificial intelligence sort of means of having a "conversation with the text".

      cc: @dwhly @remikalir @jeremydean
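
      As a rough sketch of how such a "conversation with the text" feature might retrieve relevant passages before drafting an answer (my own assumption about the general pattern, not Explainpaper's actual implementation):

      ```python
      # Retrieve the passage most similar to a question; a language model would then
      # be asked to explain or answer using that passage as context.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      passages = [
          "We trained a transformer model on a large corpus of scientific abstracts.",
          "The evaluation compares our method against three baselines on two datasets.",
          "Limitations include a small sample size and a narrow domain of application.",
      ]
      question = "What are the limitations of this study?"

      vectorizer = TfidfVectorizer()
      matrix = vectorizer.fit_transform(passages + [question])

      # Similarity of the question (last row) to each passage.
      scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
      best = scores.argmax()
      print(f"Most relevant passage ({scores[best]:.2f}): {passages[best]}")
      ```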

    1. I would put creativity into three buckets. If we define creativity as coming up with something novel or new for a purpose, then I think what AI systems are quite good at the moment is interpolation and extrapolation.

      Demis Hassabis, the founder of DeepMind, classifies creativity in three ways: interpolation, extrapolation, and "true invention". He defines the first two traditionally, but gives a more vague description of the third. What exactly is "true invention"?

      How can one invent without any catalyst at all? How can one invent outside of a problem's solution space? Outside of the adjacent possible? Does this truly exist? Or is it ruled out by definition?

  16. Sep 2022
  17. Aug 2022
    1. The term "stigmergy" was introduced by French biologist Pierre-Paul Grassé in 1959 to refer to termite behavior. He defined it as: "Stimulation of workers by the performance they have achieved." It is derived from the Greek words στίγμα stigma "mark, sign" and ἔργον ergon "work, action", and captures the notion that an agent’s actions leave signs in the environment, signs that it and other agents sense and that determine and incite their subsequent actions.[4][5]

      Theraulaz, Guy (1999). "A Brief History of Stigmergy". Artificial Life. 5 (2): 97–116. doi:10.1162/106454699568700. PMID 10633572. S2CID 27679536.

    1. For the sake of simplicity, go to Graph Analysis Settings and disable everything but Co-Citations, Jaccard, Adamic Adar, and Label Propagation. I won't spend my time explaining each because you can find those in the net, but these are essentially algorithms that find connections for you. Co-Citations, for example, uses second order links or links of links, which could generate ideas or help you create indexes. It essentially automates looking through the backlinks and local graphs as it generates possible relations for you.

      comment on: https://www.youtube.com/watch?v=9OUn2-h6oVc
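
      For the curious, a small sketch of my own (not the plugin's code) showing how two of these link-based measures can be computed over a toy note graph, where each note maps to the set of notes it links to:

      ```python
      import math

      links = {
          "note-a": {"hub", "topic-x", "topic-y"},
          "note-b": {"hub", "topic-y", "topic-z"},
          "hub":    {"topic-x", "topic-y", "topic-z", "note-a", "note-b"},
      }

      def neighbors(note):
          """Undirected neighbourhood: notes linked from or linking to `note`."""
          out = set(links.get(note, set()))
          out |= {other for other, targets in links.items() if note in targets}
          return out

      def jaccard(a, b):
          na, nb = neighbors(a), neighbors(b)
          return len(na & nb) / len(na | nb) if na | nb else 0.0

      def adamic_adar(a, b):
          # Shared neighbours count for more when they have few links of their own.
          shared = neighbors(a) & neighbors(b)
          return sum(1.0 / math.log(len(neighbors(n))) for n in shared
                     if len(neighbors(n)) > 1)

      print(jaccard("note-a", "note-b"))      # 0.5 on this toy graph
      print(adamic_adar("note-a", "note-b"))  # ~1.53: "hub" contributes less than "topic-y"
      ```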

  18. Jul 2022
    1. In this paper, we propose and analyse a potential power triangle between three kinds of mutually dependent, mutually threatening and co-evolving cognitive systems—the human being, the social system and the emerging synthetic intelligence. The question we address is what configuration between these powers would enable humans to start governing the global socio-econo-political system
      • Optimization problem - human beings, their social system and AI - what is optimal configuration?
  19. Jun 2022
    1. Dall-E delivers ten images for each request, and when you see results that contain sensitive or biased content, you can flag them to OpenAI for review. The question then becomes whether OpenAI wants Dall-E's results to reflect society's approximate reality or some idealized version. If an occupation is majority male or female, for instance, and you ask Dall-E to illustrate someone doing that job, the results can either reflect the actual proportion in society, or some even split between genders. They can also account for race, weight, and other factors. So far, OpenAI is still researching how exactly to structure these results. But as it learns, it knows it has choices to make.

      Philosophical questions for AI-generated artwork

      As if we needed more technology to dissolve a shared, cohesive view of reality, we need to consider how it is possible to tune the AI parameters to reflect some version of what is versus some version of how we want it to be.

    1. Harness collective intelligence augmented by digital technology, and unlock exponential innovation. Beyond old hierarchical structures and archaic tools.

      https://twitter.com/augmented_CI

      The words "beyond", "hierarchical", and "archaic" are all designed to marginalize prior thought and tools which all work, and are likely upon which this broader idea is built. This is a potentially toxic means of creating "power over" this prior art rather than a more open spirit of "power with".

  20. May 2022
    1. Bret Victor shared this post to make the point that we shouldn't be worrying about sentient AI right now; that the melting ice caps are way more of a threat than AGI. He linked to this article, saying that corporations act like a non-human, intelligent entity, that has real impacts in the world today, that may be way more consequential than AI.

    1. Ben Williamson shared this post on Twitter, saying that it's a good idea to remove the words 'artificial intelligence' and 'AI' from policy statements, etc. as a way of talking about specific details of a technology. We can see loads of examples of companies using 'AI' to obfuscate what they are really going.

    1. The bulk of Vumacam’s subscribers have thus far been private security companies like AI Surveillance, which supply anything from armed guards to monitoring for a wide range of clients, including schools, businesses, and residential neighborhoods. This was always the plan: Vumacam CEO Croock started AI Surveillance with Nichol shortly after founding Vumacam and then stepped away to avoid conflicts with other Vumacam customers.

      AI-driven Surveillance-as-a-Service

      Vumacam provides the platform, AI-driven target selection, and human review. Others subscribe to that service and add their own layers of services to customers.

  21. Apr 2022
    1. Since most of our feeds rely on either machine algorithms or human curation, there is very little control over what we actually want to see.

      While algorithmic feeds and "artificial intelligences" might control large swaths of what we see in our passive acquisition modes, we can and certainly should spend more of our time in active search modes which don't employ these tools or methods.

      How might we better blend our passive and active modes of search and discovery while still having and maintaining the value of serendipity in our workflows?

      Consider the loss of library stacks in our research workflows. We've lost some of the serendipity of seeing the book titles on the shelf that are adjacent to the one we're looking for. What about the books just above and below it? How do we replicate that sort of serendipity in our digital world?

      How do we help prevent the shiny object syndrome? How can we stay on task rather than move on to the next pretty thing or topic presented to us by an algorithmic feed so that we can accomplish the task we set out to do? Certainly bookmarking a thing or a topic for later follow-up can be useful so we don't go too far afield, but what other methods might we use? How can we optimize our random walks through life and a sea of information to tie disparate parts of everything together? Do we need to rely only on doing it as a broader species? Can smaller subgroups accomplish this if carefully planned, or is exploring the problem space only possible at mass scale? And even then we may be undershooting the goal by an order of magnitude (or ten)?

    1. Connected Papers uses the publicly available corpus compiled by Semantic Scholar — a tool set up in 2015 by the Allen Institute for Artificial Intelligence in Seattle, Washington — amounting to around 200 million articles, including preprints.

      Semantic Scholar is a digital tool created by the Allen Institute for Artificial Intelligence in Seattle, Washington in 2015. Its corpus is publicly available for search and is used by other tools including Connected Papers.

    1. He continues by comparing open works to Quantum mechanics, and he arrives at the conclusion that open works are more like Einstein's idea of the universe, which is governed by precise laws but seems random at first. The artist in those open works arranges the work carefully so it could be re-organized by another but still keep the original voice or intent of the artist.

      Is physics open or closed?

      Could a play, made in a zettelkasten-like structure, be performed in a way so as to keep a consistent authorial voice?

      What potential applications does the idea of opera aperta have for artificial intelligence? Can it be created in such a way as to give an artificial brain a consistent "authorial voice"?

  22. Mar 2022
    1. This generative model normally penalizes predicted toxicity and rewards predicted target activity. We simply proposed to invert this logic by using the same approach to design molecules de novo, but now guiding the model to reward both toxicity and bioactivity instead.

      By inverting a single term of the model's objective (rewarding rather than penalizing predicted toxicity), the researchers changed the output of the AI dramatically.
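
      A schematic sketch of that inversion (my own toy scoring function, not the paper's model): the same candidate-ranking loop, with the sign of the toxicity term flipped.

      ```python
      def score(candidate, reward_toxicity=False):
          """Toy score combining predicted bioactivity and predicted toxicity (both 0..1)."""
          activity = candidate["predicted_activity"]
          toxicity = candidate["predicted_toxicity"]
          toxicity_term = toxicity if reward_toxicity else -toxicity  # the inverted logic
          return activity + toxicity_term

      candidates = [
          {"name": "mol-1", "predicted_activity": 0.9, "predicted_toxicity": 0.1},
          {"name": "mol-2", "predicted_activity": 0.7, "predicted_toxicity": 0.8},
      ]

      for mode in (False, True):
          best = max(candidates, key=lambda c: score(c, reward_toxicity=mode))
          print(f"reward_toxicity={mode}: best candidate is {best['name']}")
      # With the normal objective mol-1 wins; with toxicity rewarded, mol-2 wins.
      ```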

  23. Feb 2022
    1. Stay at the forefront of educational innovation

      What about a standard of care for students?

      Bragging about students not knowing how the surveillance technology works is unethical. Students using accessibility software or open educational resources shouldn't be punished for accidentally avoiding surveillance. pic.twitter.com/Uv7fiAm0a3

      — Ian Linkletter (@Linkletter) February 22, 2022

      #annotation https://t.co/wVemEk2yao

      — Remi Kalir (@remikalir) February 23, 2022
    1. We need to get our thoughts on paper first and improve them there, where we can look at them. Especially complex ideas are difficult to turn into a linear text in the head alone. If we try to please the critical reader instantly, our workflow would come to a standstill. We tend to call extremely slow writers, who always try to write as if for print, perfectionists. Even though it sounds like praise for extreme professionalism, it is not: A real professional would wait until it was time for proofreading, so he or she can focus on one thing at a time. While proofreading requires more focused attention, finding the right words during writing requires much more floating attention.

      Proofreading while rewriting, structuring, or doing the thinking or creative parts of writing is a form of bikeshedding. It is easy to focus on the small and picayune fixes when writing, but this distracts from the more important parts of the work which really need one's attention to be successful.

      Get your ideas down on paper and only afterwards work on proofreading at the end. Switching contexts from thinking and creativity to spelling, small bits of grammar, and typography can be taxing from the perspective of trying to multi-task.


      Link: Draft #4 and using Webster's 1913 dictionary for choosing better words/verbiage as a discrete step within the rewrite.


      Linked to above: Are there other dictionaries, thesauruses, books of quotations, or individual commonplace books, waste books that can serve as resources for finding better words, phrases, or phrasing when writing? Imagine searching through Thoreau's commonplace book for finding interesting turns of phrase. Naturally searching through one's own commonplace book is a great place to start, if you're saving those sorts of things, especially from fiction.

      Link this to Robin Sloan's AI talk and using artificial intelligence and corpuses of literature to generate writing.

  24. Jan 2022
    1. https://vimeo.com/232545219

      from: Eyeo Conference 2017

      Description

      Robin Sloan at Eyeo 2017 | Writing with the Machine | Language models built with recurrent neural networks are advancing the state of the art on what feels like a weekly basis; off-the-shelf code is capable of astonishing mimicry and composition. What happens, though, when we take those models off the command line and put them into an interactive writing environment? In this talk Robin presents demos of several tools, including one presented here for the first time. He discusses motivations and process, shares some technical tips, proposes a course for the future — and along the way, write at least one short story together with the audience: all of us, and the machine.

      Notes

      Robin created a corpus using If Magazine and Galaxy Magazine from the Internet Archive and used it as a writing tool. He talks about using a few other models for generating text.

      Some of the idea here is reminiscent of the way John McPhee used the 1913 Webster Dictionary for finding words (or le mot juste) for his work, as tangentially suggested in Draft #4 in The New Yorker (2013-04-22)

      Cross reference: https://hypothes.is/a/t2a9_pTQEeuNSDf16lq3qw and https://hypothes.is/a/vUG82pTOEeu6Z99lBsrRrg from https://jsomers.net/blog/dictionary


      Croatian a cappella singing: klapa https://www.youtube.com/watch?v=sciwtWcfdH4


      Writing using the adjacent possible.


      Corpus building as an art [~37:00]

      Forgetting what one trained their model on and then seeing the unexpected come out of it. This is similar to Luhmann's use of the zettelkasten as a serendipitous writing partner.

      Open questions

      How might we use information theory to do this more easily?

      What does a person or machine's "hand" look like in the long term with these tools?

      Can we use corpus linguistics in reverse for this?

      What sources would you use to train your model?

      References:

      • Andrej Karpathy. 2015. "The Unreasonable Effectiveness of Recurrent Neural Networks"
      • Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, et al. "Generating sentences from a continuous space." 2015. arXiv: 1511.06349
      • Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. 2017. "A Hybrid Convolutional Variational Autoencoder for Text generation." arXiv:1702.02390
      • Soroush Mehri, et al. 2017. "SampleRNN: An Unconditional End-to-End Neural Audio Generation Model." arXiv:1612.07837 applies neural networks to sound and sound production
  25. Dec 2021
  26. Nov 2021
  27. Oct 2021
  28. Sep 2021
  29. Aug 2021
    1. Provide more opportunities for new talent. Because healthcare has been relatively solid and stagnant in what it does, we're losing out on some of the new talent that comes out — who are developing artificial intelligence, who are working at high-tech firms — and those firms can pay significantly higher than hospitals for those talents. We have to find a way to provide some opportunities for that and apply those technologies to make improvements in healthcare.

      Interesting. Mr. Roach thinks healthcare is not doing enough to attract new types of talent (AI and emerging tech) into healthcare. We seem to be losing this talent to the technology sector.

      I would agree with this point. Why work in healthcare, with all of its massive demands, HIPAA, and the lack of people knowing what you are even building? Instead, you can go into tech, have a better quality of life, get paid so much more, and have the possibility of exiting due to a buyout from the healthcare industry.

    1. Building on platforms' stores of user-generated content, competing middleware services could offer feeds curated according to alternate ranking, labeling, or content-moderation rules.

      Already I can see too many companies relying on artificial intelligence to sort and filter this material, and it has the ability to cause even worse nth-degree problems.

      Allowing the end user to easily control the content curation and filtering will be absolutely necessary, and even then, customer desire to do this will likely lose out to the automaticity of AI. Customer laziness will likely win the day on this, so the design around it must be robust.

  30. Jul 2021
    1. Facebook AI. (2021, July 16). We’ve built and open-sourced BlenderBot 2.0, the first #chatbot that can store and access long-term memory, search the internet for timely information, and converse intelligently on nearly any topic. It’s a significant advancement in conversational AI. https://t.co/H17Dk6m1Vx https://t.co/0BC5oQMEck [Tweet]. @facebookai. https://twitter.com/facebookai/status/1416029884179271684

  31. Jun 2021
    1. It hadn’t learned sort of the concept of a paddle or the concept of a ball. It only learned about patterns of pixels.

      Cognition and perception are closely related in humans, as the theory of embodied cognition has shown. But until the concept of embodied cognition gained traction, we had developed a pretty intellectual concept of cognition: as something located in our brains, drained of emotions, utterly rational, deterministic, logical, and so on. This is still the concept of intelligence that rules research in AI.

    2. the original goal at least, was to have a machine that could be like a human, in that the machine could do many tasks and could learn something in one domain, like if I learned how to play checkers maybe that would help me learn better how to play chess or other similar games, or even that I could use things that I’d learned in chess in other areas of life, that we sort of have this ability to generalize the things that we know or the things that we’ve learned and apply it to many different kinds of situations. But this is something that’s eluded AI systems for its entire history.

      The truth is we do not need computers to excel at the things we do best, but to complement us. We should bet on cognitive extension instead of trying to re-create human intelligence, which is a legitimate area of research, but one that computer scientists should leave to cognitive science and neuroscience.

    1. Last year, Page told a convention of scientists that Google is “really trying to build artificial intelligence and to do it on a large scale.”

      What if they're not? What if they're building an advertising machine to manipulate us into giving them all our money?

      From an investor perspective, the artificial intelligence answer certainly seems sexier, while some clever legerdemain keeps the public from seeing what's really going on behind the curtain.