233 Matching Annotations
  1. Nov 2024
    1. TensorFlow Lite is best suited for deploying trained AI models on Android and iOS, giving users on-device machine learning through mobile-optimized pre-trained models. It is efficient, has low latency, and supports multiple languages, which makes it very versatile. When implementing TensorFlow Lite in mobile apps, developers can leverage its lightweight, mobile-optimized models to provide on-device AI functionality with minimal latency.

      Implementing Trained AI Models in Mobile App Development is transforming app experiences by integrating machine learning into iOS and Android platforms. From AI-powered personalization to advanced analytics, trained models empower intelligent decision-making and enhanced functionality.
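
      For illustration, a minimal sketch of the TensorFlow Lite inference flow in Python (the model file and input here are placeholders; a similar Interpreter pattern is exposed by the Android/iOS bindings):

        import numpy as np
        import tensorflow as tf

        # Load a converted .tflite model (path is a placeholder).
        interpreter = tf.lite.Interpreter(model_path="model.tflite")
        interpreter.allocate_tensors()

        input_details = interpreter.get_input_details()
        output_details = interpreter.get_output_details()

        # Feed one input tensor of whatever shape/dtype the model expects.
        dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
        interpreter.set_tensor(input_details[0]["index"], dummy)
        interpreter.invoke()

        print(interpreter.get_tensor(output_details[0]["index"]))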

  2. Aug 2024
    1. we basically grow models of, let's say, the same quality as all the others by using a thousand or ten thousand times less training data

      for - comparison - semantic folding vs normal machine learning - training dataset sizes and times

    1. AI can be built using a variety of architectures. The most widely used today is called Machine Learning. Broadly speaking, we can say that Machine Learning systems learn to emulate some behavior from examples of that behavior. These examples are presented to the system as data. For example, if we wanted to create an AI system that classifies images of animals by species, we would have to show it numerous images of specimens of each species we want it to learn to classify.

  3. Jul 2024
  4. May 2024
  5. Feb 2024
    1. The alternative approach to image classification uses machine-learning techniques to identify targeted content. This is currently the best way to filter video, and usually the best way to filter text. The provider first trains a machine-learning model with image sets containing both innocuous and target content. This model is then used to scan pictures uploaded by users. Unlike perceptual hashing, which detects only photos that are similar to known target photos, machine-learning models can detect completely new images of the type on which they were trained.

      Machine learning for content scanning

  6. Jan 2024
    1. You know XGBoost, but do you know NGBoost? I'd passed over this one, mentioned to me by someone wanting confidence intervals in their classification models. This could be an interesting paper to add to the ML curriculum.
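
      For a sense of what those confidence intervals look like, a minimal sketch assuming the ngboost package's NGBRegressor API, on a synthetic dataset:

        import numpy as np
        from ngboost import NGBRegressor  # pip install ngboost

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 3))
        y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=500)

        ngb = NGBRegressor().fit(X[:400], y[:400])

        # Unlike plain XGBoost, NGBoost predicts a full distribution per
        # example, so each prediction comes with an uncertainty estimate.
        dist = ngb.pred_dist(X[400:])
        print(dist.params["loc"][:3])    # predicted means
        print(dist.params["scale"][:3])  # predicted standard deviations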

  7. Nov 2023
    1. Actor-critic is a temporal difference algorithm used in reinforcement learning. It consists of two networks: the actor, which decides which action to take, and the critic, which evaluates the action produced by the actor by computing the value function and informs the actor how good the action was and how it should adjust. In simple terms, the actor-critic is a temporal difference version of policy gradient. The learning of the actor is based on a policy gradient approach.

      Actor-critic
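
      A bare-bones PyTorch sketch of that two-network setup (sizes and the single-transition update are illustrative, not a full training loop):

        import torch
        import torch.nn as nn

        obs_dim, n_actions, gamma = 4, 2, 0.99  # hypothetical sizes

        # Actor outputs action logits; critic outputs a state value V(s).
        actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, n_actions))
        critic = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, 1))
        opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=1e-3)

        def update(s, a, r, s_next, done):
            """One temporal-difference actor-critic update for a single transition."""
            v = critic(s)
            with torch.no_grad():
                td_target = r + gamma * (torch.zeros(1) if done else critic(s_next))
            td_error = (td_target - v).detach()   # how much better than expected

            log_prob = torch.distributions.Categorical(logits=actor(s)).log_prob(a)
            actor_loss = -log_prob * td_error     # policy gradient, weighted by TD error
            critic_loss = (td_target - v).pow(2)  # move V(s) toward the TD target

            opt.zero_grad()
            (actor_loss + critic_loss).backward()
            opt.step()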

  8. Oct 2023
    1. Shayan Shirahmad Gale Bagi, Zahra Gharaee, Oliver Schulte, and Mark Crowley. Generative Causal Representation Learning for Out-of-Distribution Motion Forecasting. In International Conference on Machine Learning (ICML), Honolulu, Hawaii, USA, Jul 2023.

    1. Wang et al. "Scientific discovery in the age of artificial intelligence", Nature, 2023.

      A paper about the current state of using AI/ML for scientific discovery, connected with the AI4Science workshops at major conferences.

      (NOTE: since Springer/Nature don't allow public pdfs to be linked without a paywall, we can't use hypothesis directly on the pdf of the paper, this link is to the website version of it which is what we'll use to guide discussion during the reading group.)

    1. Discussion of the paper:

      Ghojogh B, Ghodsi A, Karray F, Crowley M. Theoretical Connection between Locally Linear Embedding, Factor Analysis, and Probabilistic PCA. Proceedings of the Canadian Conference on Artificial Intelligence [Internet]. 2022 May 27; Available from: https://caiac.pubpub.org/pub/7eqtuyyc

  9. Sep 2023
    1. During the discussion, Musk latched on to a key fact the team had discovered: The neural network did not work well until it had been trained on at least a million video clips.
    2. By early 2023, the neural network planner project had analyzed 10 million clips of video collected from the cars of Tesla customers. Did that mean it would merely be as good as the average of human drivers? “No, because we only use data from humans when they handled a situation well,” Shroff explained. Human labelers, many of them based in Buffalo, New York, assessed the videos and gave them grades. Musk told them to look for things “a five-star Uber driver would do,” and those were the videos used to train the computer.
    3. The “neural network planner” that Shroff and others were working on took a different approach. “Instead of determining the proper path of the car based on rules,” Shroff says, “we determine the car’s proper path by relying on a neural network that learns from millions of examples of what humans have done.” In other words, it’s human imitation. Faced with a situation, the neural network chooses a path based on what humans have done in thousands of similar situations. It’s like the way humans learn to speak and drive and play chess and eat spaghetti and do almost everything else; we might be given a set of rules to follow, but mainly we pick up the skills by observing how other people do them.
  10. Aug 2023
    1. Title: Delays, Detours, and Forks in the Road: Latent State Models of Training Dynamics. Authors: Michael Y. Hu, Angelica Chen, Naomi Saphra, Kyunghyun Cho. Note: This paper seems cool: it uses older, interpretable machine learning models (graphical models) to understand what is going on inside a deep neural network.

      Link: https://arxiv.org/pdf/2308.09543.pdf

  11. Jul 2023
  12. Jun 2023
    1. We use the same model and architecture as GPT-2

      What do they mean by "model" here? If they have retrained on more data, with a slightly different architecture, then the model weights after training must be different.

    1. Recent work in computer vision has shown that common image datasets contain a non-trivial amount of near-duplicate images. For instance CIFAR-10 has 3.3% overlap between train and test images (Barz & Denzler, 2019). This results in an over-reporting of the generalization performance of machine learning systems.

      CIFAR-10 performance results are overestimates since some of the training data is essentially in the test set.
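
      A cheap way to probe for such overlap (a sketch, not Barz & Denzler's actual method) is to compare tiny grayscale thumbnails of train and test images:

        import numpy as np

        def thumbprint(img, size=8):
            """Reduce an HxWx3 uint8 image to a size x size grayscale signature."""
            gray = img.mean(axis=2)
            ys = np.arange(size) * gray.shape[0] // size
            xs = np.arange(size) * gray.shape[1] // size
            return gray[np.ix_(ys, xs)].astype(np.float32).ravel()

        def near_duplicates(train_imgs, test_imgs, tol=5.0):
            train_sigs = np.stack([thumbprint(im) for im in train_imgs])
            hits = []
            for i, im in enumerate(test_imgs):
                d = np.abs(train_sigs - thumbprint(im)).mean(axis=1)
                if d.min() < tol:  # mean per-pixel difference below threshold
                    hits.append((i, int(d.argmin())))
            return hits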

    1. Blog post comparing ASG (Auto Segmentation Criterion - yes, the last letter doesn't match) to CTC (Connectionist Temporal Classification) for aligning speech recognition model outputs with a transcript.

  13. May 2023
  14. Mar 2023
    1. we have turned to machine learning, an ingenious way of disclaiming responsibility for anything. Machine learning is like money laundering for bias. It's a clean, mathematical apparatus that gives the status quo the aura of logical inevitability. The numbers don't lie.

      Machine learning like money laundering for bias

  15. Feb 2023
    1. No new physics and no new mathematics was discovered by the AI. The AI did however deduce something from the existing math and physics, that no one else had yet seen. Skynet is not coming for us yet.

    1. https://pair.withgoogle.com/

      People + AI Research (PAIR) is a multidisciplinary team at Google that explores the human side of AI by doing fundamental research, building tools, creating design frameworks, and working with diverse communities.

    1. There’s a holy trinity in machine learning: models, data, and compute. Models are algorithms that take inputs and produce outputs. Data refers to the examples the algorithms are trained on. To learn something, there must be enough data with enough richness that the algorithms can produce useful output. Models must be flexible enough to capture the complexity in the data. And finally, there has to be enough computing power to run the algorithms.

      “Holy trinity” of machine learning: models, data, and compute

      Models in 1990s, starting with convolutional neural networks for computer vision.

      Data in 2009 in the form of labeled images from Stanford AI researchers.

      Compute in 2006 with Nvidia’s CUDA programming language for GPUs.

      AlexNet in 2012 combined all of these.

  16. Jan 2023
    1. We can have a machine learning model which gives more than 90% accuracy for classification tasks but fails to recognize some classes properly due to imbalanced data, or the model may actually be detecting features that make no sense as predictors of a particular class.

      Quality metrics for a machine learning model
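
      This is why per-class metrics beat a single accuracy number; a quick sklearn sketch on a synthetic imbalanced problem:

        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import classification_report, confusion_matrix
        from sklearn.model_selection import train_test_split

        # 95% of examples in class 0, so accuracy alone will look flattering.
        X, y = make_classification(n_samples=2000, weights=[0.95], random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        print(confusion_matrix(y_te, clf.predict(X_te)))
        print(classification_report(y_te, clf.predict(X_te)))  # per-class precision/recall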

  17. Dec 2022
    1. In zero-shot learning, the model must be able to generalize what it has learned from previous examples in order to perform a task it has never been trained on. This means the model must be able to transfer the knowledge it acquired on a given task to a new task, without needing task-specific training examples for the new task.

      0-shot learning

    1. The collocation results can be used to correct the sensor data to more closely match the data from the reference instrument. This correction process helps account for known bias and unknown interferences from weather and other pollutants and is typically done by developing an algorithm. An algorithm can be a simple equation or more sophisticated process (e.g., set of rules, machine learning) that is applied to the sensor data. This section further discusses the process of correcting sensor data.

      correction factors for collocated sensors using ML
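
      A sketch of the simplest version of such a correction, on synthetic collocation data: fit a linear model mapping sensor readings plus a weather covariate to the reference instrument, then apply it to later readings:

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        # Synthetic collocation period: the sensor reads high and drifts with humidity.
        reference = rng.uniform(5, 50, 200)  # reference instrument PM2.5
        humidity = rng.uniform(20, 90, 200)
        sensor = 1.3 * reference + 0.1 * humidity + rng.normal(0, 2, 200)

        # Fit (sensor, humidity) -> reference; this is the correction algorithm.
        X = np.column_stack([sensor, humidity])
        correction = LinearRegression().fit(X, reference)

        corrected = correction.predict(X)
        print("corrected error:", np.abs(corrected - reference).mean())
        print("raw error:      ", np.abs(sensor - reference).mean())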

    1. Emergent abilities are not present in small models but can be observed in large models.

      Here's a lovely blog by Jason Wei that pulls together 137 examples of 'emergent abilities of large language models'. Emergence is a phenomenon seen in contemporary AI research, where a model will be really bad at a task at smaller scales, then go through some discontinuous change which leads to significantly improved performance.

  18. Nov 2022
    1. Eamonn Keogh is an assistant professor of Computer Science at the University of California, Riverside. His research interests are in Data Mining, Machine Learning and Information Retrieval. Several of his papers have won best paper awards, including papers at SIGKDD and SIGMOD. Dr. Keogh is the recipient of a 5-year NSF Career Award for "Efficient Discovery of Previously Unknown Patterns and Relationships in Massive Time Series Databases".

      Look into Eamonn Keogh's papers that won "best paper awards"

    1. “The metaphor is that the machine understands what I’m saying and so I’m going to interpret the machine’s responses in that context.”

      Interesting metaphor for why humans are happy to trust outputs from generative models

    1. The rapid increase in both the quantity and complexity of data that are being generated daily in the field of environmental science and engineering (ESE) demands accompanied advancement in data analytics. Advanced data analysis approaches, such as machine learning (ML), have become indispensable tools for revealing hidden patterns or deducing correlations for which conventional analytical methods face limitations or challenges. However, ML concepts and practices have not been widely utilized by researchers in ESE. This feature explores the potential of ML to revolutionize data analysis and modeling in the ESE field, and covers the essential knowledge needed for such applications. First, we use five examples to illustrate how ML addresses complex ESE problems. We then summarize four major types of applications of ML in ESE: making predictions; extracting feature importance; detecting anomalies; and discovering new materials or chemicals. Next, we introduce the essential knowledge required and current shortcomings in ML applications in ESE, with a focus on three important but often overlooked components when applying ML: correct model development, proper model interpretation, and sound applicability analysis. Finally, we discuss challenges and future opportunities in the application of ML tools in ESE to highlight the potential of ML in this field.

    1. "On the Opportunities and Risks of Foundation Models" This is a large report by the Center for Research on Foundation Models at Stanford. They are creating and promoting the use of these models and trying to coin this name for them. They are also simply called large pre-trained models. So take it with a grain of salt, but also it has a lot of information about what they are, why they work so well in some domains and how they are changing the nature of ML research and application.

    1. Technology like this, which lets you “talk” to people who’ve died, has been a mainstay of science fiction for decades. It’s an idea that’s been peddled by charlatans and spiritualists for centuries. But now it’s becoming a reality—and an increasingly accessible one, thanks to advances in AI and voice technology. 
  19. Oct 2022
    1. There's no market for a machine-learning autopilot, or content moderation algorithm, or loan officer, if all it does is cough up a recommendation for a human to evaluate. Either that system will work so poorly that it gets thrown away, or it works so well that the inattentive human just button-mashes "OK" every time a dialog box appears.

      ML algorithms must work or not work

  20. Sep 2022
  21. Aug 2022
  22. Jul 2022
    1. because it only needs to engage a portion of the model to complete a task, as opposed to other architectures that have to activate an entire AI model to run every request.

      I don't really understand this: in Z-code there are tasks that other competing software would need to restart from scratch, while Z-code can handle them without restarting...

  23. Jun 2022
    1. determine the caliphate; and another group led by Mu'awiya in the Levant, who demanded revenge for Uthman's blood. He defeated the first group in the Battle of the Camel; but in the end,

      this is another post

  24. Apr 2022
  25. Feb 2022
  26. Jan 2022
    1. We are definitely living in interesting times!

      The problem with machine learning, in my eyes, seems to be the non-transparency of the field. After all, what makes the data we are researching valuable? If we collect so much data, why is only 0.5% of it being studied? There seems to be a lot missing, and big opportunities here that aren't being used properly.

  27. Dec 2021
  28. Oct 2021
  29. Sep 2021
    1. a class of attacks that were enabled by Privacy Badger’s learning. Essentially, since Privacy Badger adapts its behavior based on the way that sites you visit behave, a dedicated attacker could manipulate the way Privacy Badger acts: what it blocks and what it allows. In theory, this can be used to identify users (a form of fingerprinting) or to extract some kinds of information from the pages they visit
  30. Jul 2021
  31. Jun 2021
    1. The problem is, algorithms were never designed to handle such tough choices. They are built to pursue a single mathematical goal, such as maximizing the number of soldiers’ lives saved or minimizing the number of civilian deaths. When you start dealing with multiple, often competing, objectives or try to account for intangibles like “freedom” and “well-being,” a satisfactory mathematical solution doesn’t always exist.

      We do better with algorithms where the utility function can be expressed mathematically. When we try to design for utility/goals that include human values, it's much more difficult.

    2. many other systems that are already here or not far off will have to make all sorts of real ethical trade-offs

      And the problem is that, even human beings are not very sensitive to how this can be done well. Because there is such diversity in human cultures, preferences, and norms, deciding whose values to prioritise is problematic.

  32. May 2021
  33. Apr 2021
    1. Machine learning app development has been gaining traction among companies from all over the world. When dealing with this part of machine learning application development, you need to remember that machine learning can recognize only the patterns it has seen before. Therefore, the data is crucial for your objectives. If you’ve ever wondered how to build a machine learning app, this article will answer your question.

    1. The insertion of an algorithm’s predictions into the patient-physician relationship also introduces a third party, turning the relationship into one between the patient and the health care system. It also means significant changes in terms of a patient’s expectation of confidentiality. “Once machine-learning-based decision support is integrated into clinical care, withholding information from electronic records will become increasingly difficult, since patients whose data aren’t recorded can’t benefit from machine-learning analyses,” the authors wrote.

      There is some work being done on federated learning, where the algorithm works on decentralised data that stays in place with the patient and the ML model is brought to the patient so that their data remains private.

  34. Mar 2021
  35. Feb 2021
  36. Jan 2021
    1. I present the Data Science Venn Diagram… hacking skills, math and stats knowledge, and substantive expertise.

      An understanding of advanced statistics is a must, as the methodologies get more complex and new methods are being created, such as machine learning.

    1. Zappos created models to predict customer apparel sizes, which are cached and exposed at runtime via microservices for use in recommendations.

      There is another company, Virtusize, doing the same thing: size prediction and recommendation.

  37. Dec 2020
  38. Nov 2020
  39. Oct 2020
    1. A statistician is the exact same thing as a data scientist or machine learning researcher with the differences that there are qualifications needed to be a statistician, and that we are snarkier.
    1. numerically evaluate the derivative of a function specified by a computer program

      I understand what they're saying, but one should be careful here not to confuse this with numerical differentiation à la finite differences.
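
      To make the distinction concrete: finite differences approximate the derivative numerically, while autodiff propagates exact derivatives through the program. A small sketch with forward-mode dual numbers:

        import math

        def finite_diff(f, x, h=1e-6):
            return (f(x + h) - f(x - h)) / (2 * h)  # central difference: approximate

        class Dual:
            """Forward-mode autodiff: carry (value, derivative) through each op."""
            def __init__(self, v, d=0.0):
                self.v, self.d = v, d
            def __mul__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.v * o.v, self.d * o.v + self.v * o.d)
            __rmul__ = __mul__
            def sin(self):
                return Dual(math.sin(self.v), math.cos(self.v) * self.d)

        f = lambda x: x * x.sin() if isinstance(x, Dual) else x * math.sin(x)

        x = 1.5
        exact = math.sin(x) + x * math.cos(x)  # d/dx of x*sin(x)
        print(finite_diff(f, x) - exact)       # small but nonzero error
        print(f(Dual(x, 1.0)).d - exact)       # exact up to float round-off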

  40. Sep 2020
    1. For example, the one-pass (hardware) translator generated a symbol table and reverse Polish code as in conventional software interpretive languages. The translator hardware (compiler) operated at disk transfer speeds and was so fast there was no need to keep and store object code, since it could be quickly regenerated on-the-fly. The hardware-implemented job controller performed conventional operating system functions. The memory controller provided

      Hardware assisted compiler is a fantastic idea. TPUs from Google are essentially this. They're hardware assistance for matrix multiplication operations for machine learning workloads created by tools like TensorFlow.

  41. Aug 2020
  42. Jul 2020
    1. Determine whether the person using my computer is me by training an ML model on data about how I use my computer. This is a project for the Intrusion Detection Systems course at Columbia University.
    1. Our membership inference attack exploits the observation that machine learning models often behave differently on the data that they were trained on versus the data that they "see" for the first time.

      How well would this work on some of the more recent zero-shot models?
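
      The core signal is easy to reproduce (a minimal sketch of the confidence gap, not the paper's full shadow-model attack):

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=600, n_features=20, random_state=0)
        X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

        model = RandomForestClassifier(random_state=0).fit(X_in, y_in)  # happily overfits

        conf_in = model.predict_proba(X_in).max(axis=1)    # confidence on members
        conf_out = model.predict_proba(X_out).max(axis=1)  # confidence on non-members

        # Guess "member" whenever confidence exceeds a threshold.
        guesses = np.concatenate([conf_in, conf_out]) > 0.9
        truth = np.concatenate([np.ones(len(conf_in)), np.zeros(len(conf_out))])
        print("attack accuracy:", (guesses == truth).mean())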

    1. data leakage (data from outside of your test set making it back into your test set and biasing the results)

      This sounds like the inverse of “snooping”, where information about the test data is inadvertently built into the model.
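
      The standard sklearn guard against both directions of leakage is to fit all preprocessing inside the cross-validation loop:

        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        X, y = make_classification(n_samples=500, n_features=50, random_state=0)

        # Leaky: the scaler is fit on all rows, so test folds inform preprocessing.
        X_scaled = StandardScaler().fit_transform(X)
        print(cross_val_score(LogisticRegression(max_iter=1000), X_scaled, y).mean())

        # Correct: scaling is refit on the training portion of each fold only.
        pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        print(cross_val_score(pipe, X, y).mean())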

  43. Jun 2020
  44. May 2020
    1. the network typically learns to use h(t) as a kind of lossy summary of the task-relevant aspects of the past sequence of inputs up to t

      The hidden state h(t) is a high-level representation of whatever happened until time step t.

    2. Parameter sharing makes it possible to extend and apply the model to examples of different forms (different lengths, here) and generalize across them. If we had separate parameters for each value of the time index, we could not generalize to sequence lengths not seen during training, nor share statistical strength across different sequence lengths and across different positions in time. Such sharing is particularly important when a specific piece of information can occur at multiple positions within the sequence.

      RNNs use the same parameters at every time step. This makes it possible to generalize the inferred "meaning", even when it is inferred at different steps.
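
      A bare-bones numpy sketch of that sharing: one weight set (W, U, b) is reused at every step, so the same loop handles sequences of any length:

        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_hidden = 3, 5
        W = rng.normal(size=(n_hidden, n_hidden))  # shared across all time steps
        U = rng.normal(size=(n_hidden, n_in))
        b = np.zeros(n_hidden)

        def rnn_summary(xs):
            """h(t) = tanh(W h(t-1) + U x(t) + b): a lossy summary of x(1..t)."""
            h = np.zeros(n_hidden)
            for x in xs:  # works for any sequence length
                h = np.tanh(W @ h + U @ x + b)
            return h

        print(rnn_summary(rng.normal(size=(7, n_in))))   # length-7 sequence
        print(rnn_summary(rng.normal(size=(30, n_in))))  # length 30, same parameters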

    1. Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed
  45. Apr 2020
    1. Python contributed examples. Mic VAD Streaming: this example demonstrates getting audio from the microphone, running Voice-Activity-Detection and then outputting text. Full source code available on https://github.com/mozilla/DeepSpeech-examples. VAD Transcriber: this example demonstrates VAD-based transcription with both console and graphical interfaces. Full source code available on https://github.com/mozilla/DeepSpeech-examples.
    1. Python API Usage example. Examples are from native_client/python/client.cc.

      Creating a model instance and loading the model:

        ds = Model(args.model)

      Performing inference:

        if args.extended:
            print(metadata_to_string(ds.sttWithMetadata(audio, 1).transcripts[0]))
        elif args.json:
            print(metadata_json_output(ds.sttWithMetadata(audio, 3)))
        else:
            print(ds.stt(audio))

      Full source code
    1. DeepSpeech is an open source Speech-To-Text engine, using a model trained by machine learning techniques based on Baidu's Deep Speech research paper. Project DeepSpeech uses Google's TensorFlow to make the implementation easier. NOTE: This documentation applies to the 0.7.0 version of DeepSpeech only. Documentation for all versions is published on deepspeech.readthedocs.io. To install and use DeepSpeech all you have to do is:

        # Create and activate a virtualenv
        virtualenv -p python3 $HOME/tmp/deepspeech-venv/
        source $HOME/tmp/deepspeech-venv/bin/activate

        # Install DeepSpeech
        pip3 install deepspeech

        # Download pre-trained English model files
        curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.7.0/deepspeech-0.7.0-models.pbmm
        curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.7.0/deepspeech-0.7.0-models.scorer

        # Download example audio files
        curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.7.0/audio-0.7.0.tar.gz
        tar xvf audio-0.7.0.tar.gz

        # Transcribe an audio file
        deepspeech --model deepspeech-0.7.0-models.pbmm --scorer deepspeech-0.7.0-models.scorer --audio audio/2830-3980-0043.wav

      A pre-trained English model is available for use and can be downloaded using the instructions below. A package with some example audio files is available for download in our release notes.
    1. Import all the necessary libraries into our notebook. LibROSA and SciPy are the Python libraries used for processing audio signals.

        import os
        import librosa  # for audio processing
        import IPython.display as ipd
        import matplotlib.pyplot as plt
        import numpy as np
        from scipy.io import wavfile  # for audio processing
        import warnings
        warnings.filterwarnings("ignore")

      Data Exploration and Visualization helps us to understand the data as well as the pre-processing steps in a better way.
    2. TensorFlow recently released the Speech Commands Datasets. It includes 65,000 one-second long utterances of 30 short words, by thousands of different people. We’ll build a speech recognition system that understands simple spoken commands. You can download the dataset from here.
    3. Learn how to Build your own Speech-to-Text Model (using Python). Aravind Pai, July 15, 2019. Overview: Learn how to build your very own speech-to-text model using Python in this article. The ability to weave deep learning skills with NLP is a coveted one in the industry; add this to your skillset today. We will use a real-world dataset and build this speech-to-text model so get ready to use your Python skills!
    1. Keras is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano. It was developed with a focus on enabling fast experimentation. Being able to go from idea to result with the least possible delay is key to doing good research. Use Keras if you need a deep learning library that: Allows for easy and fast prototyping (through user friendliness, modularity, and extensibility). Supports both convolutional networks and recurrent networks, as well as combinations of the two. Runs seamlessly on CPU and GPU. Read the documentation at Keras.io. Keras is compatible with: Python 2.7-3.6.
    1. Installation in Windows. Compatibility: > OpenCV 2.0. Author: Bernát Gábor. You will learn how to set up OpenCV in your Windows operating system!
    2. Here you can read tutorials about how to set up your computer to work with the OpenCV library. Additionally you can find very basic sample source code to introduce you to the world of OpenCV. Installation in Linux. Compatibility: > OpenCV 2.0.
    1. OpenCV (Open Source Computer Vision Library) is an open source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products. Being a BSD-licensed product, OpenCV makes it easy for businesses to utilize and modify the code. The library has more than 2500 optimized algorithms, which includes a comprehensive set of both classic and state-of-the-art computer vision and machine learning algorithms. These algorithms can be used to detect and recognize faces, identify objects, classify human actions in videos, track camera movements, track moving objects, extract 3D models of objects, produce 3D point clouds from stereo cameras, stitch images together to produce a high resolution image of an entire scene, find similar images from an image database, remove red eyes from images taken using flash, follow eye movements, recognize scenery and establish markers to overlay it with augmented reality, etc. OpenCV has a user community of more than 47 thousand people and an estimated number of downloads exceeding 18 million. The library is used extensively in companies, research groups and by governmental bodies. Along with well-established companies like Google, Yahoo, Microsoft, Intel, IBM, Sony, Honda, Toyota that employ the library, there are many startups such as Applied Minds, VideoSurf, and Zeitera, that make extensive use of OpenCV. OpenCV's deployed uses span the range from stitching streetview images together, detecting intrusions in surveillance video in Israel, monitoring mine equipment in China, helping robots navigate and pick up objects at Willow Garage, detection of swimming pool drowning accidents in Europe, running interactive art in Spain and New York, checking runways for debris in Turkey, inspecting labels on products in factories around the world on to rapid face detection in Japan. It has C++, Python, Java and MATLAB interfaces and supports Windows, Linux, Android and Mac OS. OpenCV leans mostly towards real-time vision applications and takes advantage of MMX and SSE instructions when available. Full-featured CUDA and OpenCL interfaces are being actively developed right now. There are over 500 algorithms and about 10 times as many functions that compose or support those algorithms. OpenCV is written natively in C++ and has a templated interface that works seamlessly with STL containers.
    1. there is also strong encouragement to make code re-usable, shareable, and citable, via DOI or other persistent link systems. For example, GitHub projects can be connected with Zenodo for indexing, archiving, and making them easier to cite alongside the principles of software citation [25].
      • GitHub and GitLab technology focuses on text mode, which can easily be recognized and read by machines/computers (machine readable).

      • Text mining is currently a core, fast-growing technology. Machine learning does not work without the raw material that text mining technology provides.

      • This is why journals, especially those published abroad, have long offered two versions of every paper they release: a PDF version (which is really no different from the paper of old) and an HTML version (which is machine readable).

      • Binary word processors such as MS Word depend heavily on software technology (owned by business entities). Naturally, the codes for reading them will be locked.

      • Even PDF, which is considered the easiest and safest way to share files, cannot easily be read by machines.

  46. Mar 2020
    1. a black software developer embarrassed Google by tweeting that the company’s Photos service had labeled photos of him with a black friend as “gorillas.”
    2. More than two years later, one of those fixes is erasing gorillas, and some other primates, from the service’s lexicon. The awkward workaround illustrates the difficulties Google and other tech companies face in advancing image-recognition technology
  47. Nov 2019
  48. Sep 2019
    1. At the moment, GPT-2 uses a binary search algorithm, which means that its output can be considered a ‘true’ set of rules. If OpenAI is right, it could eventually generate a Turing complete program, a self-improving machine that can learn (and then improve) itself from the data it encounters. And that would make OpenAI a threat to IBM’s own goals of machine learning and AI, as it could essentially make better than even humans the best possible model that the future machines can use to improve their systems. However, there’s a catch: not just any new AI will do, but a specific type; one that uses deep learning to learn the rules, algorithms, and data necessary to run the machine to any given level of AI.

      This is a machine-generated response in 2019. We are clearly closer than most people realize to machines that can pass a text-based Turing Test.

    1. Since all neurons in a single depth slice share the same parameters, the forward pass in each depth slice of the convolutional layer can be computed as a convolution of the neuron's weights with the input volume.[nb 2] Therefore, it is common to refer to the sets of weights as a filter (or a kernel), which is convolved with the input. The result of this convolution is an activation map, and the set of activation maps for each different filter are stacked together along the depth dimension to produce the output volume. Parameter sharing contributes to the translation invariance of the CNN architecture. Sometimes, the parameter sharing assumption may not make sense. This is especially the case when the input images to a CNN have some specific centered structure; for which we expect completely different features to be learned on different spatial locations. One practical example is when the inputs are faces that have been centered in the image: we might expect different eye-specific or hair-specific features to be learned in different parts of the image. In that case it is common to relax the parameter sharing scheme, and instead simply call the layer a "locally connected layer".

      Important terms you hear repeatedly. Great visuals and graphics at https://distill.pub/2018/building-blocks/

    1. Here's a playground where you can select different kernel matrices (blur, bottom sobel, custom, emboss, identity, left sobel, outline, right sobel, sharpen, top sobel) and see how they affect the original image, or build your own kernel. You can also upload your own image or use live video if your browser supports it. The sharpen kernel emphasizes differences in adjacent pixel values. This makes the image look more vivid. The blur kernel de-emphasizes differences in adjacent pixel values. The emboss kernel (similar to the sobel kernel and sometimes referred to by the same name) gives the illusion of depth by emphasizing the differences of pixels in a given direction. In this case, in a direction along a line from the top left to the bottom right. The identity kernel leaves the image unchanged. How boring! The custom kernel is whatever you make it.

      I'm all about my custom kernels!
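
      The same sharpen kernel applied outside the browser, as a minimal SciPy sketch on a toy grayscale image:

        import numpy as np
        from scipy.signal import convolve2d

        sharpen = np.array([[ 0, -1,  0],
                            [-1,  5, -1],
                            [ 0, -1,  0]])

        img = np.zeros((8, 8))
        img[2:6, 2:6] = 1.0  # a bright square on a dark background

        # Each output pixel is a weighted sum of its 3x3 neighborhood.
        out = convolve2d(img, sharpen, mode="same", boundary="symm")
        print(out.round(1))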

    1. We developed a new metric, UAR, which compares the robustness of a model against an attack to adversarial training against that attack. Adversarial training is a strong defense that uses knowledge of an adversary by training on adversarially attacked images [3]. To compute UAR, we average the accuracy of the defense across multiple distortion sizes and normalize by the performance of an adversarially trained model; a precise definition is in our paper. A UAR score near 100 against an unforeseen adversarial attack implies performance comparable to a defense with prior knowledge of the attack, making this a challenging objective.

      @metric

  49. Aug 2019
    1. Using multiple copies of a neuron in different places is the neural network equivalent of using functions. Because there is less to learn, the model learns more quickly and learns a better model. This technique – the technical name for it is ‘weight tying’ – is essential to the phenomenal results we’ve recently seen from deep learning.

      This parameter sharing allows CNNs, for example, to need far fewer parameters/weights than fully connected NNs.
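
      Back-of-the-envelope arithmetic for a 32x32x3 input mapped to 32 feature maps of the same spatial size:

        # Fully connected: every output unit connects to every input value.
        dense_params = (32 * 32 * 3) * (32 * 32 * 32)  # about 100 million weights

        # Convolutional: 32 shared 3x3x3 filters, regardless of image size.
        conv_params = 32 * (3 * 3 * 3 + 1)             # 896, incl. one bias per filter

        print(dense_params, conv_params)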

    2. The known connection between geometry, logic, topology, and functional programming suggests that the connections between representations and types may be of fundamental significance.

      Examples for each?

    3. Representations are Types With every layer, neural networks transform data, molding it into a form that makes their task easier to do. We call these transformed versions of data “representations.” Representations correspond to types.

      Interesting.

      Like a Queue Type represents a FIFO flow and a Stack a FILO flow, where the space we transformed is the operation space of the type (eg a Queue has a folded operation space compared to an Array)

      Just free styling here...

    4. In this view, the representations narrative in deep learning corresponds to type theory in functional programming. It sees deep learning as the junction of two fields we already know to be incredibly rich. What we find, seems so beautiful to me, feels so natural, that the mathematician in me could believe it to be something fundamental about reality.

      compositional deep learning

    5. Appendix: Functional Names of Common Layers

         Deep Learning Name   Functional Name
         Learned Vector       Constant
         Embedding Layer      List Indexing
         Encoding RNN         Fold
         Generating RNN       Unfold
         General RNN          Accumulating Map
         Bidirectional RNN    Zipped Left/Right Accumulating Maps
         Conv Layer           "Window Map"
         TreeNet              Catamorphism
         Inverse TreeNet      Anamorphism

      👌translation. I like to think about embeddings as List lookups

    1. As a log-bilinear regression model for unsupervised learning of word representations, it combines the features of two model families, namely the global matrix factorization and local context window methods

      What does "log-bilinear regression" mean exactly?

  50. Jul 2019
    1. We will discuss classification in the context of support vector machines

      SVMs aren't used that much in practice anymore. It's more of an academic fling, because they're nice to work with mathematically. Empirically, Tree Ensembles or Neural Nets are almost always better.

    1. Compared with neural networks configured by a pure grid search, we find that random search over the same domain is able to find models that are as good or better within a small fraction of the computation time.
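
      A sklearn sketch of the comparison on a toy problem: the random version samples the same space with far fewer fits.

        from scipy.stats import loguniform
        from sklearn.datasets import make_classification
        from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=300, random_state=0)

        grid = {"C": [0.01, 0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1, 10]}
        print(GridSearchCV(SVC(), grid).fit(X, y).best_score_)  # 25 candidates

        dists = {"C": loguniform(1e-2, 1e2), "gamma": loguniform(1e-3, 1e1)}
        rand = RandomizedSearchCV(SVC(), dists, n_iter=8, random_state=0)
        print(rand.fit(X, y).best_score_)                       # only 8 candidates
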
  51. Jun 2019
    1. To interpret a model, we require the following insights: features in the model which are most important; for any single prediction from a model, the effect of each feature in the data on that particular prediction; and the effect of each feature over a large number of possible predictions.

      Machine learning interpretability
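
      The first of those three insights can be sketched with sklearn's permutation importance (per-prediction effects are what tools like SHAP add):

        from sklearn.datasets import load_diabetes
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.inspection import permutation_importance
        from sklearn.model_selection import train_test_split

        X, y = load_diabetes(return_X_y=True, as_frame=True)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

        # Importance = drop in held-out score when one feature is shuffled.
        imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
        ranked = sorted(zip(X.columns, imp.importances_mean), key=lambda t: -t[1])
        for name, score in ranked:
            print(f"{name:8s} {score:.3f}")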

    1. By comparison, Amazon’s Best Seller badges, which flag the most popular products based on sales and are updated hourly, are far more straightforward. For third-party sellers, “that’s a lot more powerful than this Choice badge, which is totally algorithmically calculated and sometimes it’s totally off,” says Bryant.

      "Amazon's Choice" is made by an algorithm.

      Essentially, "Amazon" is Skynet.

    1. This problem is called overfitting—it's like memorizing the answers instead of understanding how to solve a problem.

      Simple and clear explanation of overfitting
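
      The memorization analogy in one numpy sketch: a high-degree polynomial nails the training points but does worse on held-out ones:

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.uniform(-1, 1, 40)
        y = np.sin(3 * x) + rng.normal(0, 0.2, 40)
        x_tr, y_tr, x_te, y_te = x[:20], y[:20], x[20:], y[20:]

        for degree in (3, 15):
            coeffs = np.polyfit(x_tr, y_tr, degree)
            tr = np.mean((np.polyval(coeffs, x_tr) - y_tr) ** 2)
            te = np.mean((np.polyval(coeffs, x_te) - y_te) ** 2)
            print(f"degree {degree:2d}: train MSE {tr:.3f}, test MSE {te:.3f}")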

  52. May 2019
    1. policy change index - machine learning on corpus of text to identify and predict policy changes in China

  53. Mar 2019
    1. Mention McDonald’s to someone today, and they're more likely to think about Big Mac than Big Data. But that could soon change: The fast-food giant has embraced machine learning, in a fittingly super-sized way.McDonald’s is set to announce that it has reached an agreement to acquire Dynamic Yield, a startup based in Tel Aviv that provides retailers with algorithmically driven "decision logic" technology. When you add an item to an online shopping cart, it’s the tech that nudges you about what other customers bought as well. Dynamic Yield reportedly had been recently valued in the hundreds of millions of dollars; people familiar with the details of the McDonald’s offer put it at over $300 million. That would make it the company's largest purchase since it acquired Boston Market in 1999.

      McDonald's are getting into machine learning. Beware.

  54. Feb 2019
    1. For instance, an aborigine who possesses all of our basic sensory-mental-motor capabilities, but does not possess our background of indirect knowledge and procedure, cannot organize the proper direct actions necessary to drive a car through traffic, request a book from the library, call a committee meeting to discuss a tentative plan, call someone on the telephone, or compose a letter on the typewriter.

      In other words: culture. I'm pretty sure that Engelbart would agree with the statement that someone who could order a book from a library would likely not know the best way to find a nearby water source, as the right kind of aborigine would know. Collective intelligence is a monotonically increasing store of knowledge that is maintained through social learning -- not just social learning, but teaching. Many species engage in social learning, but humans are the only primates with visible sclera -- the whites of our eyeballs -- which enables even infants to track where their teacher/parent is looking. I think this function of culture is what Engelbart would call "C work"

      A Activity: 'Business as Usual'. The organization's day to day core business activity, such as customer engagement and support, product development, R&D, marketing, sales, accounting, legal, manufacturing (if any), etc. Examples: Aerospace - all the activities involved in producing a plane; Congress - passing legislation; Medicine - researching a cure for disease; Education - teaching and mentoring students; Professional Societies - advancing a field or discipline; Initiatives or Nonprofits - advancing a cause.
      
      B Activity: Improving how we do that. Improving how A work is done, asking 'How can we do this better?' Examples: adopting a new tool(s) or technique(s) for how we go about working together, pursuing leads, conducting research, designing, planning, understanding the customer, coordinating efforts, tracking issues, managing budgets, delivering internal services. Could be an individual introducing a new technique gleaned from reading, conferences, or networking with peers, or an internal initiative tasked with improving core capability within or across various A Activities.
      
      C Activity: Improving how we improve. Improving how B work is done, asking 'How can we improve the way we improve?' Examples: improving effectiveness of B Activity teams in how they foster relations with their A Activity customers, collaborate to identify needs and opportunities, research, innovate, and implement available solutions, incorporate input, feedback, and lessons learned, run pilot projects, etc. Could be a B Activity individual learning about new techniques for innovation teams (reading, conferences, networking), or an initiative, innovation team or improvement community engaging with B Activity and other key stakeholders to implement new/improved capability for one or more B activities.
      

      In other words, human culture, using language, artifacts, methodology, and training, bootstrapped collective intelligence; what Engelbart proposed, then was to apply C work to culture's bootstrapping capabilities.

    1. Nearly half of FBI rap sheets failed to include information on the outcome of a case after an arrest—for example, whether a charge was dismissed or otherwise disposed of without a conviction, or if a record was expunged

      This explains my personal experience here: https://hyp.is/EIfMfivUEem7SFcAiWxUpA/epic.org/privacy/global_entry/default.html (Why someone who had Global Entry was flagged for a police incident before he applied for Global Entry).

    2. Applicants also agree to have their fingerprints entered into DHS’ Automatic Biometric Identification System (IDENT) “for recurrent immigration, law enforcement, and intelligence checks, including checks against latent prints associated with unsolved crimes.

      Intelligence checks is very concerning here as it suggests pretty much what has already been leaked, that the US is running complex autonomous screening of all of this data all the time. This also opens up the possibility for discriminatory algorithms since most of these are probably rooted in machine learning techniques and the criminal justice system in the US today tends to be fairly biased towards certain groups of people to begin with.

    3. It cited research, including some authored by the FBI, indicating that “some of the biometrics at the core of NGI, like facial recognition, may misidentify African Americans, young people, and women at higher rates than whites, older people, and men, respectively.

      This re-affirms the previous annotation that the set of training data for the intelligence checks the US runs on global entry data is biased towards certain groups of people.

  55. Nov 2018
  56. Sep 2018
    1. in equation B for the marginal of a Gaussian, only the covariance of the block of the matrix involving the unmarginalized dimensions matters! Thus "if you ask only for the properties of the function (you are fitting to the data) at a finite number of points, then inference in the Gaussian process will give you the same answer if you ignore the infinitely many other points, as if you would have taken them all into account!" (Rasmussen)

      key insight into Gaussian processes
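
      In symbols (the standard Gaussian marginalization identity, as in Rasmussen & Williams):

        \begin{pmatrix} \mathbf{f}_A \\ \mathbf{f}_B \end{pmatrix}
        \sim \mathcal{N}\!\left(
          \begin{pmatrix} \boldsymbol{\mu}_A \\ \boldsymbol{\mu}_B \end{pmatrix},
          \begin{pmatrix} \Sigma_{AA} & \Sigma_{AB} \\ \Sigma_{BA} & \Sigma_{BB} \end{pmatrix}
        \right)
        \quad\Longrightarrow\quad
        \mathbf{f}_A \sim \mathcal{N}(\boldsymbol{\mu}_A,\, \Sigma_{AA})

      so the marginal over the finitely many points you actually query depends only on the corresponding block Sigma_AA; the infinitely many other points never enter.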

    1. predictive analysis

      Predictive analytics encompasses a variety of statistical techniques from data mining, predictive modelling, and machine learning that analyze current and historical facts to make predictions about future or otherwise unknown events.

  57. Jul 2018
  58. course-computational-literary-analysis.netlify.com
    1. There is here, moral, if not legal, evidence, that the murder was committed by the Indians.

      This is a very interesting take on "evidence" as being moral, if not legal, by Sergeant Cuff. It makes me question exactly what he means by that, and whether there is a way to use computational analysis to find out. We could perhaps start by parsing out "evidence" throughout the text with a machine learning algorithm to help us define evidence and then, going forward, devise a way (maybe with sentiment analysis) to distinguish moral evidence from legal evidence.

    1. ~32:00 What about the domain of the function being effectively lower dimensional, rather than a strong regularity assumption? That would also work, right? Could this be the case for images? (What's the dimensionality of the manifold of natural images?)

      Nice. I like the idea of regularity <> low dimensional representation. I guess by that general definition, the above is a form of regularity..

      He comments about this on 38:30

    1. This system of demonstrating tasks to one robot that can then transfer its skills to other robots with different body shapes, strengths, and constraints might just be the first step toward independent social learning in robots. From there, we might be on the road to creating cultured robots.
    2. Soon we might add robots to this list. While our fanciful desert scene of robots teaching each other how to defuse bombs lies in the distant future, robots are beginning to learn socially. If one day robots start to develop and share knowledge independently of humans, might that be the seed for robot culture?
    3. his imaginary scene shows the power of learning from others. Anthropologists and zoologists call this “social learning”: picking up new information by observing or interacting with others and the things others produce. Social learning is rife among humans and across the wider animal kingdom. As we discussed in our previous post, learning socially is fundamental to how humans become fully rounded people, in all our diversity, creativity, and splendor.
    1. "It's so scary that it works," Perelman sighs. "Machines are very brilliant for certain things and very stupid on other things. This is a case where the machines are very, very stupid."
  59. Apr 2018
  60. Mar 2018
    1. Artificial intelligence (AI), machine learning and deep learning

      Graphical explanation of artificial intelligence, machine learning, and deep learning

  61. Dec 2017
    1. Most of the recent advances in AI depend on deep learning, which is the use of backpropagation to train neural nets with multiple layers ("deep" neural nets).

      Neural nets consist of layers of nodes, with edges from each node to the nodes in the next layer. The first and last layers are input and output. The output layer might only have two nodes, representing true or false. Each node holds a value representing how excited it is. Each edge has a value representing strength of connection, which determines how much of the excitement passes through.

      The edges in an untrained neural net start with random values. The training data consists of a series of samples that are already labeled. If the output is wrong, the edges are adjusted according to how much they contributed to the error. It's called backpropagation because it starts with the output nodes and works toward the input nodes.

      Deep neural nets can be effective, but only for single specific tasks. And they need huge sets of training data. They can also be tricked rather easily. Worse, someone who has access to the net can discover ways of adding noise to images that will make the net "see" things that obviously aren't there.
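
      A tiny numpy sketch of that training loop (a two-layer net learning XOR: forward pass, error, then backpropagated weight adjustments):

        import numpy as np

        rng = np.random.default_rng(0)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
        y = np.array([[0], [1], [1], [0]], float)  # XOR labels

        W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
        W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
        sigmoid = lambda z: 1 / (1 + np.exp(-z))

        for step in range(5000):
            h = sigmoid(X @ W1 + b1)    # hidden-layer "excitement"
            out = sigmoid(h @ W2 + b2)  # output layer
            err = out - y               # how wrong each sample is

            # Backpropagation: adjust each edge by its contribution to the error.
            d_out = err * out * (1 - out)
            d_h = (d_out @ W2.T) * h * (1 - h)
            W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
            W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

        print(out.round(2).ravel())  # approaches [0, 1, 1, 0]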

  62. Nov 2017
    1. UML automatically finds these hidden patterns to link seemingly unrelated accounts and customers. These links can be one of thousands of data fields that the UML model ingests.

      Why does this have to be done in a different system?

  63. Oct 2017
  64. Sep 2017
    1. First, I think you need to get a grasp of machine learning and algorithms; you can start with online courses. I recommend Andrew Ng's Machine Learning course, which is considered the bible for data scientists. After that, you can start with Python or R and join challenges on Kaggle. Kaggle is a platform where data scientists participate, earn prize money, and compete for rankings. Many people have also told me that Kaggle is the best and shortest path into Data Science.

      Learn the basics

  65. Aug 2017
  66. Jul 2017
  67. Jun 2017
  68. Apr 2017
    1. Detection of fake news in social media based on who liked it.

      we show that Facebook posts can be classified with high accuracy as hoaxes or non-hoaxes on the basis of the users who "liked" them. We present two classification techniques, one based on logistic regression, the other on a novel adaptation of boolean crowdsourcing algorithms. On a dataset consisting of 15,500 Facebook posts and 909,236 users, we obtain classification accuracies exceeding 99% even when the training set contains less than 1% of the posts.
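
      A sketch of the logistic-regression variant on synthetic stand-in data (a binary post-by-user "like" matrix, not the paper's Facebook dataset):

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n_posts, n_users = 1000, 300
        labels = rng.integers(0, 2, n_posts)  # 1 = hoax, 0 = non-hoax

        # Users differ in their propensity to like hoax vs non-hoax posts.
        hoax_pref = rng.uniform(0, 0.2, n_users)  # per-user P(like | hoax)
        real_pref = hoax_pref[::-1]               # different taste for real posts
        p_like = np.where(labels[:, None] == 1, hoax_pref, real_pref)
        likes = rng.random((n_posts, n_users)) < p_like

        X_tr, X_te, y_tr, y_te = train_test_split(likes, labels, random_state=0)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        print("accuracy:", clf.score(X_te, y_te))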

    1. Obviously, in this situation whoever controls the algorithms has great power. Decisions like what is promoted to the top of a news feed can swing elections. Small changes in UI can drive big changes in user behavior. There are no democratic checks or controls on this power, and the people who exercise it are trying to pretend it doesn’t exist

    2. On Facebook, social dynamics and the algorithms’ taste for drama reinforce each other. Facebook selects from stories that your friends have shared to find the links you’re most likely to click on. This is a potent mix, because what you read and post on Facebook is not just an expression of your interests, but part of a performative group identity.

      So without explicitly coding for this behavior, we already have a dynamic where people are pulled to the extremes. Things get worse when third parties are allowed to use these algorithms to target a specific audience.

    3. any system trying to maximize engagement will try to push users towards the fringes. You can prove this to yourself by opening YouTube in an incognito browser (so that you start with a blank slate), and clicking recommended links on any video with political content.

      ...

      This pull to the fringes doesn’t happen if you click on a cute animal story. In that case, you just get more cute animals (an experiment I also recommend trying). But the algorithms have learned that users interested in politics respond more if they’re provoked more, so they provoke. Nobody programmed the behavior into the algorithm; it made a correct observation about human nature and acted on it.

    1. Really cool venue for publishing online, interactive articles for ML