8 Matching Annotations
  1. Dec 2022
    1. Now the computer scientist Nassim Dehouche has proposed an updated version, which should terrify those of us who live by the pen: “Can you write a page of text that could not have been generated by an AI, and explain why?”

      scary

    1. For the typography of the album and the invitation, Brandon Guerin and Mickael Alzate combined DaVinci Italic and Suisse Int’l. Photography by Alyas. 3D art by Samy LaCrapule with help from Timotheos, Janis and Nolann Blettner
  2. Nov 2022
    1. Cognitive Automation

      The first kind is cognitive automation: encoding human abstractions in a piece of software, then using that software to automate tasks normally performed by humans. Nearly all of current AI falls into this category.

      Cognitive automation can happen via explicitly hard-coding human-generated rules (so-called symbolic AI or GOFAI), or via collecting a dense sampling of labeled inputs and fitting a curve to it (such as a deep learning model). This curve then functions as a sort of interpolative database — while it doesn’t store the exact data points used to fit it, you can query it to retrieve interpolated points, much like you can query a model like StableDiffusion to retrieve arbitrary images generated by combining existing images. This second form of automation is especially powerful, since encoding implicit abstractions only via training examples is far more practical and versatile than explicitly programming abstractions by hand, for all kinds of historically difficult problems.

      Cognitive Assistance

      The second kind of AI is cognitive assistance: using AI to help us make sense of the world and make better decisions. AI to help us perceive, think, understand, and do more. AI that you could use like an extension of your own mind. Today, some applications of machine learning fall into this category, but they’re few and far between. Yet I believe this is where the true potential of AI lies. Do note that cognitive assistance is not a different kind of technology, per se, separate from deep learning or GOFAI. It’s a different kind of application of the same technologies. For instance, if you take a model like StableDiffusion and integrate it into a visual design product to support and expand human workflows, you’re turning cognitive automation into cognitive assistance.

      Cognitive Autonomy

      The last kind is cognitive autonomy: creating artificial minds that could thrive independently of us, that would exist for their own sake. The old dream of the field of AI. Autonomous agents that could set their own goals in an open-ended way. That could adapt to new situations and circumstances — even ones unforeseen by their creators. That might even feel emotions or experience consciousness. Today and for the foreseeable future, this is the stuff of science fiction. It would require a set of technological breakthroughs that we haven’t even started exploring.
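      A toy sketch of the “fit a curve to a dense sampling of labeled inputs, then query it as an interpolative database” idea from the excerpt above — this is my own illustration, not from the quoted text, with a small polynomial fit standing in for a deep network:

      ```python
      # Toy illustration (not from the quoted text): fit a curve to a dense
      # sampling of labeled inputs, then query it like an interpolative
      # database. A deep network plays the same role at much larger scale.
      import numpy as np

      rng = np.random.default_rng(0)

      # "Dense sampling of labeled inputs": noisy observations of sin(x).
      x_train = np.linspace(0.0, 2 * np.pi, 200)
      y_train = np.sin(x_train) + rng.normal(scale=0.05, size=x_train.shape)

      # Fit a curve (a degree-7 polynomial standing in for a learned model).
      model = np.poly1d(np.polyfit(x_train, y_train, deg=7))

      # Query points never seen during fitting: the curve returns interpolated
      # values rather than stored data points.
      x_query = np.array([0.5, 1.3, 4.2])
      print(model(x_query))  # close to np.sin(x_query)
      ```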
  3. www.mygard.info
    1. Gard is an open-source project on a mission to make hydroponics more accessible, enabling people to grow their food in a sustainable way.
    1. Depth2Img is another interesting addition to Stable Diffusion that can infer depth from an input image and represent that in the generated outputs. The new release also includes a text-guided inpainting model that simplifies the experience of modifying parts of a given image.  
    2. Stable Diffusion v2 is a significant upgrade to its predecessor. The new version was trained using a new text encoder called OpenCLIP, which improves the quality of images relative to the previous latent diffusion encoder.
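      The quoted notes don’t say how to run these models, but one way to try the depth-conditioned and inpainting checkpoints they mention is through Hugging Face’s diffusers library. A rough sketch under that assumption — the prompt, file names, and GPU setup are placeholders:

      ```python
      # Rough sketch (not from the quoted release notes): trying the Stable
      # Diffusion 2 depth-to-image model via the Hugging Face diffusers
      # library. Prompt and file names are placeholders; a CUDA GPU is assumed.
      import torch
      from PIL import Image
      from diffusers import StableDiffusionDepth2ImgPipeline

      pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
          "stabilityai/stable-diffusion-2-depth",
          torch_dtype=torch.float16,
      ).to("cuda")

      init_image = Image.open("room.png").convert("RGB")  # placeholder input

      # The pipeline estimates a depth map from init_image and uses it to keep
      # the scene's geometry while re-rendering it according to the prompt.
      result = pipe(
          prompt="a cozy reading nook, warm evening light",
          image=init_image,
          negative_prompt="blurry, low quality",
          strength=0.7,
      ).images[0]
      result.save("room_depth2img.png")

      # The text-guided inpainting model mentioned above can be used the same
      # way through StableDiffusionInpaintPipeline with the
      # "stabilityai/stable-diffusion-2-inpainting" checkpoint, passing an
      # image plus a mask of the region to modify.
      ```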