8 Matching Annotations
  1. Nov 2024
    1. TensorFlow Lite is best suited for deploying trained AI models on Android and iOS: it brings on-device machine learning to mobile apps through lightweight, mobile-optimized (and often pre-trained) models. It is efficient, runs with low latency, and offers bindings for multiple languages, which makes it versatile; by embedding it in a mobile app, developers can ship on-device AI functionality with minimal latency (a minimal conversion sketch follows this annotation).

      Implementing Trained AI Models in Mobile App Development is transforming app experiences by integrating machine learning into iOS and Android platforms. From AI-powered personalization to advanced analytics, trained models empower intelligent decision-making and enhanced functionality.
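
      A minimal sketch (not from the annotated article) of the usual deployment path: converting a trained Keras model into a TensorFlow Lite flatbuffer with post-training quantization, so the app can bundle a small model and run it on-device. The toy model and the output filename are placeholders.

      ```python
      import tensorflow as tf

      # Placeholder model standing in for any trained tf.keras model.
      model = tf.keras.Sequential([
          tf.keras.Input(shape=(10,)),
          tf.keras.layers.Dense(64, activation="relu"),
          tf.keras.layers.Dense(1),
      ])

      # Convert the trained model to the TensorFlow Lite format.
      converter = tf.lite.TFLiteConverter.from_keras_model(model)

      # Post-training quantization keeps the model lightweight for mobile use.
      converter.optimizations = [tf.lite.Optimize.DEFAULT]
      tflite_model = converter.convert()

      # The resulting flatbuffer is what an Android or iOS app bundles and runs
      # with the TensorFlow Lite interpreter.
      with open("model.tflite", "wb") as f:
          f.write(tflite_model)
      ```

      On the device, the app loads model.tflite with the platform's TensorFlow Lite interpreter and runs inference locally, which is where the low-latency, on-device behaviour described above comes from.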

  2. Jan 2024
  3. Sep 2023
    1. in 2018, you know, it was around four percent of papers that were based on foundation models; in 2020, 90 percent were, and [00:27:13] that number has continued to shoot up into 2023. At the same time, in the non-human domain it's essentially been zero, and actually it went up in 2022 because we've [00:27:25] published the first one. And the goal here is: hey, if we can make these kinds of large-scale models for the rest of nature, then we should expect a kind of broad-scale [00:27:38] acceleration
      • for: accelerating foundation models in non-human communication, non-human communication - anthropogenic impacts, species extinction - AI communication tools, conservation - AI communication tools

      • comment

        • imagine the empathy we can realize to help slow down climate change and species extinction by communicating and listening to the feedback from other species about what they think of our species impacts on their world!
  4. Apr 2023
    1. It was only by building an additional AI-powered safety mechanism that OpenAI would be able to rein in that harm, producing a chatbot suitable for everyday use.

      This isn't true. The Stochastic Parrots paper outlines other avenues for reining in the harms of language models like GPT.

  5. Mar 2023
  6. Dec 2022
    1. Houston, we have a Capability Overhang problem: Because language models have a large capability surface, these cases of emergent capabilities are an indicator that we have a ‘capabilities overhang’ – today’s models are far more capable than we think, and our techniques available for exploring the models are very juvenile. We only know about these cases of emergence because people built benchmark datasets and tested models on them. What about all the capabilities we don’t know about because we haven’t thought to test for them? There are rich questions here about the science of evaluating the capabilities (and safety issues) of contemporary models. 
  7. Jun 2021
    1. many other systems that are already here or not far off will have to make all sorts of real ethical trade-offs

      And the problem is that even human beings are not very sensitive to how this can be done well. Because there is such diversity in human cultures, preferences, and norms, deciding whose values to prioritise is problematic.

  8. Jan 2021
    1. Help is coming in the form of specialized AI processors that can execute computations more efficiently and optimization techniques, such as model compression and cross-compilation, that reduce the number of computations needed. But it’s not clear what the efficiency curve will look like. In many problem domains, exponentially more processing and data are needed to get incrementally more accuracy. This means – as we’ve noted before – that model complexity is growing at an incredible rate, and it’s unlikely processors will be able to keep up. Moore’s Law is not enough. (For example, the compute resources required to train state-of-the-art AI models have grown over 300,000x since 2012, while the transistor count of NVIDIA GPUs has grown only ~4x!) Distributed computing is a compelling solution to this problem, but it primarily addresses speed – not cost.
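
      A back-of-the-envelope sketch of the gap described above, using only the two figures quoted in the passage (300,000x growth in training compute vs. ~4x growth in GPU transistor count since 2012); the eight-year window is an assumption for illustration, not a number from the source.

      ```python
      import math

      COMPUTE_GROWTH = 300_000   # growth in training compute since 2012 (quoted above)
      TRANSISTOR_GROWTH = 4      # growth in NVIDIA GPU transistor count (quoted above)
      YEARS = 8                  # assumed window, roughly 2012-2020

      def doubling_time_months(total_growth: float, years: float) -> float:
          """Months per doubling implied by exponential growth over the window."""
          return years * 12 / math.log2(total_growth)

      print(f"compute doubles roughly every {doubling_time_months(COMPUTE_GROWTH, YEARS):.1f} months")
      print(f"transistor count doubles roughly every {doubling_time_months(TRANSISTOR_GROWTH, YEARS):.1f} months")
      # ~5 months vs. ~48 months: demand for compute is outrunning Moore's Law
      # by roughly an order of magnitude, which is the point the passage makes.
      ```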