- Last 7 days
-
www.youtube.com
-
I have a great passion for fetal brain research because if we can really help now with how a parent is feeling, it can really influence the neurodevelopment of the child
fetal brain research - helping with how a parent is feeling influences the neural development of the child later on - YouTube: "Prenatal and Perinatal Healing Happens in Layers" - Kate White
-
- Oct 2024
-
x.com
-
The similarity is because they are all saying roughly the same thing: Total (result) = Kinetic (cost) + Potential (benefit). Cost is either imaginary squared or negative (space-like), benefit is real (time-like), result is mass-like. Just like physics, the economically unfavourable models are the negative results. In economics, diversity of products is a strength as it allows better recovery from failure of any one; comically, DEI of people fails miserably at this, because all people are not equal.

Here are some other examples you will know if you do physics: E² + (ipc)² = (mc²)² (relativistic Einstein equation), mass being the result, energy time-like (potential), momentum space-like (kinetic). ∇² - 1/c² ∂²/∂t² = (mc/ℏ)² (Klein-Gordon equation), mass is the result, ∂²/∂t² potential, ∇² kinetic. Finally we have the Dirac equation, which unlike the previous two "sum of squares" forms is more like vector addition (first-order differentials, not second): iℏγ⁰∂₀ψ + iℏγⁱ∂ᵢψ = mcψ. The first part is still the time-like potential, the second part is the space-like kinetic, and the mass is still the result, all the same.

This is because energy in all its forms, on a flat worksheet (free from outside influence), acts just like a triangle between potential, kinetic and resultant energies. That is, it is always of the form k² + p² = r², and quite often kinetic is imaginary relative to potential in the (+,-,-,-) spacetime metric, quaternion mathematics. So r² can be negative, or the result imaginary, if costs outweigh benefits, or work in is greater than work out: a useless but still mathematical solution. Just like physics, you always want the mass or result to be positive and real, or you're going to lose energy to the surrounding field, with negative returns. Economic net losses do not last long, just like imaginary particles in physics.
in reply to Cesar A. Hidalgo at https://x.com/realAnthonyDean/status/1844409919161684366
via Anthony Dean @realAnthonyDean
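For reference, the standard textbook forms of the three equations the reply invokes (the signs and groupings differ slightly from the tweet's notation):

```latex
E^2 = (pc)^2 + (mc^2)^2
  % relativistic energy-momentum relation
\left(\frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2\right)\psi
  + \left(\frac{mc}{\hbar}\right)^2\psi = 0
  % Klein-Gordon equation
i\hbar\gamma^\mu \partial_\mu \psi - mc\,\psi = 0
  % Dirac equation
```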
-
- Oct 2023
-
postlab.psych.wisc.edu
-
Figure 1.
The NCC and related processes represented in a diagram of the brain. Content-specific NCC are represented in red, full NCC are represented in orange (as a union of all content-specific NCC), neuronal activating systems and global enabling factors modulating full NCC activity are represented in green, processing loops modulating some content-specific NCC are represented in beige, sensory pathways modulating some content-specific NCC are represented in pink, and outputs from NCC are represented in blue.
-
For content-specific NCC, experiments can be carefully designed to systematically investigate possible dissociations between the experience of particular conscious contents and the engagement of various cognitive processes, such as attention, decision-making, and reporting (Aru et al., 2012; Koch and Tsuchiya, 2012; Tsuchiya et al., 2015; Tsuchiya and Koch, 2016).
-
Several complementary methods can be used to distill the true NCC. For the full NCC, within-state paradigms can be used to avoid confounds due to changes in behavioral state and task performance, as well as to dissociate unconsciousness from unresponsiveness.
-
Recent research has placed emphasis on distinguishing "background conditions" that indirectly generate consciousness from neural processes that directly generate consciousness (or distinguishing consciousness itself from its precursors and consequences). Some neural processes, such as processing loops involved in executive functions, activity along sensory pathways, and activity along motor pathways, may tangentially affect the full NCC via modulation of the content-specific NCC.
-
The full NCC can be defined as the union of all content-specific NCC (Koch et al., 2016a).
-
The neural correlates of consciousness (NCC) are defined as the minimal neural mechanisms jointly sufficient for any one conscious percept (Crick and Koch, 1990). Content-specific NCC are the neural mechanisms specifying particular phenomenal contents within consciousness, such as colors, faces, places, or thoughts.
-
- Jul 2023
-
openaccess.thecvf.com
-
Xu, ICCV, 2019 "Temporal Recurrent Networks for Online Action Detection"
arxiv: https://arxiv.org/abs/1811.07391 hypothesis: https://hyp.is/go?url=https%3A%2F%2Fopenaccess.thecvf.com%2Fcontent_ICCV_2019%2Fpapers%2FXu_Temporal_Recurrent_Networks_for_Online_Action_Detection_ICCV_2019_paper.pdf&group=world
-
-
blogs.nvidia.com
Tags
- machine learning
- wikipedia:en=Self-supervised_learning
- neural networks
- cito:cites=doi:10.48550/arXiv.1706.03762
- wikipedia:en=BERT_(language_model)
- ai
- cito:cites=doi:10.48550/arXiv.2108.07258
- wikipedia:en=Artificial_neural_network
- wikipedia:en=Transformer_(machine_learning_model)
- wikipedia:en=Attention_(machine_learning)
-
- Jun 2023
-
cdn.openai.com
-
Recent work in computer vision has shown that common image datasets contain a non-trivial amount of near-duplicate images. For instance CIFAR-10 has 3.3% overlap between train and test images (Barz & Denzler, 2019). This results in an over-reporting of the generalization performance of machine learning systems.
CIFAR-10 performance results are overestimates since some of the training data is essentially in the test set.
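A minimal sketch of checking for such leakage by hashing (Barz & Denzler detect near-duplicates with learned features; a byte-level hash like this only catches exact copies):

```python
# Flag test images whose raw bytes also appear in the training set.
import hashlib
import numpy as np

def digest(img: np.ndarray) -> str:
    return hashlib.sha256(img.tobytes()).hexdigest()

# Tiny stand-in arrays; real usage would iterate over CIFAR-10 batches.
train = [np.zeros((32, 32, 3), np.uint8), np.ones((32, 32, 3), np.uint8)]
test = [np.ones((32, 32, 3), np.uint8)]

train_hashes = {digest(img) for img in train}
overlap = sum(digest(img) in train_hashes for img in test)
print(f"{overlap}/{len(test)} test images also appear in the training set")
```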
-
- May 2023
-
mml-book.github.io
-
It turns out that backpropagation is a special case of a general technique in numerical analysis called automatic differentiation.
Automatic differentiation is a technique in numerical analysis. That's why real analysis is an important area of mathematics to study if one wants to go into AI research.
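As a toy illustration, here is forward-mode automatic differentiation via dual numbers (backpropagation is the reverse mode; this is just the simplest variant to write down):

```python
# Forward-mode AD: carry (value, derivative) pairs through arithmetic,
# applying the chain rule exactly at every operation.
import math

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)

def sin(x):
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

# d/dx [x*sin(x) + x] at x = 2 should equal sin(2) + 2*cos(2) + 1.
x = Dual(2.0, 1.0)  # seed: dx/dx = 1
y = x * sin(x) + x
print(y.dot, math.sin(2) + 2 * math.cos(2) + 1)
```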
-
- Apr 2023
-
colab.research.google.com
- Dec 2022
- Nov 2022
-
www.researchgate.net
-
In recent years, neural network based topic models have been proposed for many NLP tasks, such as information retrieval [11], aspect extraction [12] and sentiment classification [13]. The basic idea is to construct a neural network which aims to approximate the topic-word distribution in probabilistic topic models. Additional constraints, such as incorporating prior distribution [14], enforcing diversity among topics [15] or encouraging topic sparsity [16], have been explored for neural topic model learning and proved effective.
Neural topic models are often trained to mimic the behaviours of probabilistic topic models (a toy sketch follows the list). I should come back and look at some of the works:
- R. Das, M. Zaheer, and C. Dyer, “Gaussian LDA for topic models with word embeddings,”
- P. Xie, J. Zhu, and E. P. Xing, “Diversity-promoting bayesian learning of latent variable models,”
- M. Peng, Q. Xie, H. Wang, Y. Zhang, X. Zhang, J. Huang, and G. Tian, “Neural sparse topical coding,”
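A toy sketch of the basic idea referenced above (sizes arbitrary, PyTorch assumed): the decoder's weight matrix plays the role of the topic-word distribution, and a softmax over the encoder output plays the role of a per-document topic mixture.

```python
import torch

vocab_size, n_topics = 2000, 20
encoder = torch.nn.Sequential(
    torch.nn.Linear(vocab_size, 200), torch.nn.ReLU(),
    torch.nn.Linear(200, n_topics))
topic_word = torch.nn.Linear(n_topics, vocab_size, bias=False)  # ~ beta

bow = torch.rand(8, vocab_size)                  # batch of bag-of-words vectors
theta = torch.softmax(encoder(bow), dim=-1)      # per-document topic mixture
log_probs = torch.log_softmax(topic_word(theta), dim=-1)
loss = -(bow * log_probs).sum(dim=-1).mean()     # bag-of-words reconstruction
loss.backward()  # prior/diversity/sparsity constraints [14-16] add loss terms
```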
-
- Oct 2022
-
www.robinsloan.com
-
https://www.robinsloan.com/notes/writing-with-the-machine/
Related work leading up to this video: https://vimeo.com/232545219
-
- Jan 2022
-
vimeo.com
-
from: Eyeo Conference 2017
Description
Robin Sloan at Eyeo 2017 | Writing with the Machine | Language models built with recurrent neural networks are advancing the state of the art on what feels like a weekly basis; off-the-shelf code is capable of astonishing mimicry and composition. What happens, though, when we take those models off the command line and put them into an interactive writing environment? In this talk Robin presents demos of several tools, including one presented here for the first time. He discusses motivations and process, shares some technical tips, proposes a course for the future — and along the way, writes at least one short story together with the audience: all of us, and the machine.
Notes
Robin created a corpus using If Magazine and Galaxy Magazine from the Internet Archive and used it as a writing tool. He talks about using a few other models for generating text.
Some of the idea here is reminiscent of the way John McPhee used the 1913 Webster Dictionary for finding words (or le mot juste) for his work, as tangentially suggested in Draft #4 in The New Yorker (2013-04-22)
Cross reference: https://hypothes.is/a/t2a9_pTQEeuNSDf16lq3qw and https://hypothes.is/a/vUG82pTOEeu6Z99lBsrRrg from https://jsomers.net/blog/dictionary
Croatian a cappella singing: klapa https://www.youtube.com/watch?v=sciwtWcfdH4
Writing using the adjacent possible.
Corpus building as an art [~37:00]
Forgetting what one trained their model on and then seeing the unexpected come out of it. This is similar to Luhmann's use of the zettelkasten as a serendipitous writing partner.
Open questions
How might we use information theory to do this more easily?
What does a person or machine's "hand" look like in the long term with these tools?
Can we use corpus linguistics in reverse for this?
What sources would you use to train your model?
References:
- Andrej Karpathy. 2015. "The Unreasonable Effectiveness of Recurrent Neural Networks"
- Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, et al. "Generating sentences from a continuous space." 2015. arXiv: 1511.06349
- Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. 2017. "A Hybrid Convolutional Variational Autoencoder for Text generation." arXiv:1702.02390
- Soroush Mehri, et al. 2017. "SampleRNN: An Unconditional End-to-End Neural Audio Generation Model." arXiv:1612.07837 applies neural networks to sound and sound production
-
- Nov 2021
-
Local file
-
Imaging the brain of an individual who claims to generate joy without any external rewards or cues could point the way toward improved training in joy and greater resilience in the face of external difficulties. Of particular interest are the neural mechanisms by which happiness is generated.
Such a self-administered neural 'technology' of happiness should be driving much more related research, but I see no other neuroscientific studies delving into jhanas.
-
- Jul 2021
-
journals.sagepub.com
-
Sheetal, A., Feng, Z., & Savani, K. (2020). Using Machine Learning to Generate Novel Hypotheses: Increasing Optimism About COVID-19 Makes People Less Willing to Justify Unethical Behaviors. Psychological Science, 31(10), 1222–1235. https://doi.org/10.1177/0956797620959594
-
- May 2021
-
colab.research.google.com
- Mar 2021
-
arxiv.org
-
Kozlowski, Diego, Jennifer Dusdal, Jun Pang, and Andreas Zilian. ‘Semantic and Relational Spaces in Science of Science: Deep Learning Models for Article Vectorisation’. ArXiv:2011.02887 [Physics], 5 November 2020. http://arxiv.org/abs/2011.02887.
-
- Jan 2021
-
opentheory.net
-
In these cases there seems to be some sort of adaptive gating mechanism that disables the typical energy sinks in order to allow entropic disintegration->search->annealing to happen.
at what cadence (if there is any threshold) can the brain reach high-energy states while not being overwhelmed?
-
- Oct 2020
-
sinews.siam.org
-
This is in stark contrast to the way that babies learn. They can recognize new objects after only seeing them a few times, and do so with very little effort and minimal external interaction. If ML’s greatest goal is to understand how humans learn, one must emulate the speed at which they do so. This direction of research is exemplified by a variety of techniques that may or may not fit into an existing paradigm; LeCun classified these tasks under the umbrella of “self-supervised learning.”
Now "self-supervised" is hardly what babies do, when you see the importance of interactions in learning (see e.g. https://doi.org/10.1016/j.tics.2020.01.006 )
-
practitioners are interested in developing DL approaches that accelerate the numerical solution of partial differential equations (PDEs).
See e.g. Neural Ordinary Differential Equations https://arxiv.org/abs/1806.07366 ? Also https://julialang.org/blog/2019/01/fluxdiffeq/ for a nice intro
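As one concrete toy illustration (a physics-informed sketch, not the Neural ODE method itself): train a small PyTorch network u(t) to satisfy u'(t) = -u(t) with u(0) = 1.

```python
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(2000):
    t = torch.rand(64, 1, requires_grad=True)     # collocation points in [0, 1)
    u = net(t)
    du = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    residual = (du + u).pow(2).mean()             # enforce u' = -u
    boundary = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()  # enforce u(0) = 1
    loss = residual + boundary
    opt.zero_grad(); loss.backward(); opt.step()

print(net(torch.tensor([[1.0]])).item())  # should approach exp(-1) ≈ 0.3679
```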
-
- Sep 2020
-
psyarxiv.com
-
Haas, I. J., Baker, M., & Gonzalez, F. (2020). Political Uncertainty Moderates Neural Evaluation of Incongruent Policy Positions. https://doi.org/10.31234/osf.io/bmr59
-
-
psyarxiv.com
-
Medeiros, Priscila de, Ana Carolina Medeiros, Jade Pisssamiglio Cysne Coimbra, Lucas Emmanuel Teixeira, Carlos José Salgado, José Aparecido da Silva, Norberto Cysne Coimbra, and Renato Leonardo de Freitas. 'Physical, Emotional, and Social Pain During COVID-19 Pandemic-Related Social Isolation'. Preprint. PsyArXiv, 20 September 2020. https://doi.org/10.31234/osf.io/uvh7s.
-
-
-
Giles, J. R., Erbach-Schoenberg, E. zu, Tatem, A. J., Gardner, L., Bjørnstad, O. N., Metcalf, C. J. E., & Wesolowski, A. (2020). The duration of travel impacts the spatial dynamics of infectious diseases. Proceedings of the National Academy of Sciences, 117(36), 22572–22579. https://doi.org/10.1073/pnas.1922663117
-
-
www.scientificamerican.com
-
Stix, G. (n.d.). Zoom Psychiatrists Prep for COVID-19’s Endless Ride. Scientific American. Retrieved June 9, 2020, from https://www.scientificamerican.com/article/zoom-psychiatrists-prep-for-covid-19s-endless-ride1/
-
- Jul 2020
-
psyarxiv.com
-
Wool, Lauren E, and The International Brain Laboratory. ‘Knowledge across Networks: How to Build a Global Neuroscience Collaboration’. Preprint. PsyArXiv, 14 July 2020. https://doi.org/10.31234/osf.io/f4uaj.
-
-
www.youtube.com
-
Virtual MLSS 2020 (Opening Remarks). (2020, June 29). https://www.youtube.com/watch?v=8staJlMbAig
-
-
blogs.scientificamerican.com
-
Kaufman, S. B. (n.d.). Forced Social Isolation Causes Neural Craving Similar to Hunger. Scientific American Blog Network. Retrieved 26 June 2020, from https://blogs.scientificamerican.com/beautiful-minds/forced-social-isolation-causes-neural-craving-similar-to-hunger/
-
-
-
Shah, C., Dehmamy, N., Perra, N., Chinazzi, M., Barabási, A.-L., Vespignani, A., & Yu, R. (2020). Finding Patient Zero: Learning Contagion Source with Graph Neural Networks. ArXiv:2006.11913 [Cs]. http://arxiv.org/abs/2006.11913
-
- Jun 2020
-
arxiv.org
-
Cai, L., Chen, Z., Luo, C., Gui, J., Ni, J., Li, D., & Chen, H. (2020). Structural Temporal Graph Neural Networks for Anomaly Detection in Dynamic Graphs. ArXiv:2005.07427 [Cs, Stat]. http://arxiv.org/abs/2005.07427
-
- May 2020
-
-
Lanovaz, M., & Turgeon, S. (2020). Tutorial: Applying Machine Learning in Behavioral Research [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/9w6a3
-
-
github.com
-
Deepset-ai/haystack. (2020). [Python]. deepset. https://github.com/deepset-ai/haystack (Original work published 2019)
-
-
www.nature.com
-
Zdeborová, L. (2020). Understanding deep learning is also a job for physicists. Nature Physics, 1–3. https://doi.org/10.1038/s41567-020-0929-2
-
-
software.intel.com
-
This is the workaround to get the NCS2 stick working within VirtualBox by creating 2 USB filters for the stick.
-
-
software.intel.com
-
Page describes differences between NCSDK and OpenVINO. There is some example code of how to initialize things in Python and C++ that may be useful at the bottom of the page.
-
-
psyarxiv.com
-
Bondy, E., Baranger, D. A., Balbona, J. V., Sputo, K., Paul, S. E., Oltmanns, T., & Bogdan, R. (2020, April 30). Neuroticism and reward-related ventral striatum activity: Probing vulnerability to stress-related depression. Retrieved from psyarxiv.com/5wd3k
-
-
psyarxiv.com
-
Lengersdorff, L., Wagner, I., & Lamm, C. (2020, April 20). When implicit prosociality trumps selfishness: the neural valuation system underpins more optimal choices when learning to avoid harm to others than to oneself. https://doi.org/10.31234/osf.io/q6psx
-
- Apr 2020
-
www.analyticsvidhya.com
-
Import all the necessary libraries into our notebook. LibROSA and SciPy are the Python libraries used for processing audio signals.

```python
import os
import librosa  # for audio processing
import IPython.display as ipd
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import wavfile  # for audio processing
import warnings

warnings.filterwarnings("ignore")
```

Data Exploration and Visualization helps us to understand the data as well as the pre-processing steps in a better way.
-
TensorFlow recently released the Speech Commands Datasets. It includes 65,000 one-second long utterances of 30 short words, by thousands of different people. We’ll build a speech recognition system that understands simple spoken commands. You can download the dataset from here.
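A minimal loading sketch under the article's setup (the file path is a placeholder for wherever the dataset was extracted; librosa resamples on load):

```python
import librosa

samples, sample_rate = librosa.load(
    "train/audio/yes/0a7c2a8d_nohash_0.wav",  # hypothetical example clip
    sr=8000)
print(samples.shape, sample_rate)  # roughly (8000,) for a one-second clip
```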
-
In the 1980s, the Hidden Markov Model (HMM) was applied to the speech recognition system. HMM is a statistical model which is used to model the problems that involve sequential information. It has a pretty good track record in many real-world applications including speech recognition. In 2001, Google introduced the Voice Search application that allowed users to search for queries by speaking to the machine. This was the first voice-enabled application which was very popular among the people. It made the conversation between the people and machines a lot easier. By 2011, Apple launched Siri that offered a real-time, faster, and easier way to interact with the Apple devices by just using your voice. As of now, Amazon’s Alexa and Google’s Home are the most popular voice command based virtual assistants that are being widely used by consumers across the globe.
-
Learn how to Build your own Speech-to-Text Model (using Python). Aravind Pai, July 15, 2019. Overview: Learn how to build your very own speech-to-text model using Python in this article. The ability to weave deep learning skills with NLP is a coveted one in the industry; add this to your skillset today. We will use a real-world dataset and build this speech-to-text model, so get ready to use your Python skills!
-
-
keras.io
-
Keras is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano. It was developed with a focus on enabling fast experimentation. Being able to go from idea to result with the least possible delay is key to doing good research. Use Keras if you need a deep learning library that:
- Allows for easy and fast prototyping (through user friendliness, modularity, and extensibility).
- Supports both convolutional networks and recurrent networks, as well as combinations of the two.
- Runs seamlessly on CPU and GPU.
Read the documentation at Keras.io. Keras is compatible with: Python 2.7-3.6.
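As a quick illustration of the "easy and fast prototyping" claim, a minimal sketch (assuming the TensorFlow backend via tf.keras; the layer sizes are arbitrary):

```python
from tensorflow import keras

# A small fully connected classifier, defined and compiled in a few lines.
model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(784,)),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```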
-
-
www.nature.com
-
Morey, R.A., Haswell, C.C., Stjepanović, D. et al. Neural correlates of conceptual-level fear generalization in posttraumatic stress disorder. Neuropsychopharmacol. (2020). https://doi.org/10.1038/s41386-020-0661-8
-
-
www.nature.com
-
Antov, M.I., Plog, E., Bierwirth, P. et al. Visuocortical tuning to a threat-related feature persists after extinction and consolidation of conditioned fear. Sci Rep 10, 3926 (2020). https://doi.org/10.1038/s41598-020-60597-z
-
- Aug 2019
-
becominghuman.ai
-
so there won't be a blinking bunny, at least not yet; let's train our bunny to blink on command by mixing stimuli (the tone and the air puff)
Is it just that how we all learn and evolve? 😲
-
- Jul 2019
-
jmlr.csail.mit.edu
-
Compared with neural networks configured by a pure grid search, we find that random search over the same domain is able to find models that are as good or better within a small fraction of the computation time.
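A toy sketch of the comparison (the objective function is hypothetical; the point is that random search spends the same trial budget across the whole domain instead of a coarse grid):

```python
import random

def objective(lr, width):  # hypothetical validation score, peak at (0.01, 300)
    return -(lr - 0.01) ** 2 - (width - 300) ** 2 / 1e6

grid = [(lr, w) for lr in (0.001, 0.01, 0.1) for w in (100, 200, 300)]
rand = [(10 ** random.uniform(-4, 0), random.randint(50, 500))
        for _ in range(len(grid))]  # same number of trials

print("grid best:  ", max(objective(*p) for p in grid))
print("random best:", max(objective(*p) for p in rand))
```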
-
- Jun 2019
-
en.wikipedia.org
-
Throughout the past two decades, he has been conducting research in the fields of psychology of learning and hybrid neural network (in particular, applying these models to research on human skill acquisition). Specifically, he has worked on the integrated effect of "top-down" and "bottom-up" learning in human skill acquisition,[1][2] in a variety of task domains, for example, navigation tasks,[3] reasoning tasks, and implicit learning tasks.[4] This inclusion of bottom-up learning processes has been revolutionary in cognitive psychology, because most previous models of learning had focused exclusively on top-down learning (whereas human learning clearly happens in both directions). This research has culminated with the development of an integrated cognitive architecture that can be used to provide a qualitative and quantitative explanation of empirical psychological learning data. The model, CLARION, is a hybrid neural network that can be used to simulate problem solving and social interactions as well. More importantly, CLARION was the first psychological model that proposed an explanation for the "bottom-up learning" mechanisms present in human skill acquisition: His numerous papers on the subject have brought attention to this neglected area in cognitive psychology.
-
-
sebastianraschka.com
-
However, this doesn't mean that Min-Max scaling is not useful at all! A popular application is image processing, where pixel intensities have to be normalized to fit within a certain range (i.e., 0 to 255 for the RGB color range). Also, typical neural network algorithms require data on a 0-1 scale.
Use min-max scaling for image processing & neural networks.
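A minimal NumPy sketch (the function name is ours, not a library API):

```python
import numpy as np

def min_max_scale(x, new_min=0.0, new_max=1.0):
    x = np.asarray(x, dtype=float)
    scaled = (x - x.min()) / (x.max() - x.min())  # map to [0, 1]
    return scaled * (new_max - new_min) + new_min

pixels = np.array([0, 64, 128, 255])  # RGB intensities
print(min_max_scale(pixels))          # [0.    0.251 0.502 1.   ]
```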
-
- Mar 2019
- Oct 2018
-
www.slideshare.net
-
Do neural networks dream of semantics?
Neural networks in visual analysis and linguistics; knowledge graph applications:
- Data integration,
- Visualization
- Exploratory search
- Question answering
Future goals: neuro-symbolic integration (symbolic reasoning and machine learning)
-
-
digitalcommons.library.tmc.edu
-
Silvia can see patterns in her own parenting that mimic those of her mother, although she knows that her mother made many mistakes.
I classify this as neural sculpting because what Silvia saw in her home at an early age (0-7 years) shaped and, even unconsciously, determined the path she reinforced and reproduced when it was her turn to be a mother. Example:
-
- Aug 2017
-
arxiv.org
-
This is a very easy paper to follow, but it looks like their methodology is a simple way to improve performance on limited data. I'm curious how well this is reproduced elsewhere.
-
- Apr 2017
-
www.tensorflow.org
-
If we write that out as equations, we get:
It would be easier to understand what x, y, and W are here if the actual numbers were used, like 784, 10, 55000, etc. In this simple example there are 3 x's and 3 y's, which is misleading. In reality there are 784 x elements (one for each pixel) and 55,000 such x arrays, and only 10 y elements (one for each digit), with 55,000 of those as well.
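A quick NumPy sketch with the tutorial's real numbers, to make the shapes concrete (random data stands in for the images):

```python
import numpy as np

n_samples, n_pixels, n_classes = 55000, 784, 10
x = np.random.rand(n_samples, n_pixels)  # 55,000 images, 784 pixels each
W = np.random.rand(n_pixels, n_classes)  # weights: 784 x 10
b = np.zeros(n_classes)                  # one bias per digit class

logits = x @ W + b                       # shape: (55000, 10)
y = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax
print(y.shape)  # (55000, 10): a probability per digit for each image
```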
-
- Mar 2017
-
www.inverse.com
-
To create it, Musk has said that he thinks we will probably have to inject a computer interface into the jugular where it will travel into the brain and unfold into a mesh of electric connections that connect directly to the neurons.
Yeah, nothing could go wrong with this approach...
-
Elon Musk’s neural lace project could turn us all into cyborgs, and he says that it’s only four or five years away.
This seems incredibly ambitious--if not dangerous!
-
-
www.coursera.org
-
Great course!
-
-
cs231n.github.io
-
Great course
-
- Nov 2016
-
roachsinai.github.io
-
What the Softmax classifier does is minimize the cross-entropy between the estimated class probabilities (that is, \(L_i = e^{f_{y_i}} / \sum_j e^{f_j}\)) and the "true" distribution.
The benefit of this is that a misclassified sample produces a very large gradient, whereas with logistic regression the more severely a sample is misclassified, the slower the algorithm converges. For example, take \(t_i = 1\) and \(y_i = 0.0000001\): with cost function \(E = \frac{1}{2}(t - y)^2\), we get \(\frac{dE}{dw_i} = -(t - y)\,y(1 - y)\,x_i\).
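A worked step filling in the comparison, with \(y = \sigma(z)\) and \(z = w^\top x\):

```latex
% Squared error: the sigmoid factor y(1-y) kills the gradient when the
% output saturates at the wrong answer.
E = \tfrac{1}{2}(t - y)^2, \qquad
\frac{\partial E}{\partial w_i} = -(t - y)\,y(1 - y)\,x_i
  \approx -(1)\cdot 10^{-7}\cdot(1 - 10^{-7})\,x_i \approx 0
  \quad (t = 1,\; y = 10^{-7})

% Cross-entropy: the sigmoid factor cancels, so a badly misclassified
% sample keeps a large gradient.
E = -t\log y - (1 - t)\log(1 - y), \qquad
\frac{\partial E}{\partial z} = y - t \approx -1
```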
-
- Apr 2016
-
cs231n.github.io
-
Effect of step size. The gradient tells us the direction in which the function has the steepest rate of increase, but it does not tell us how far along this direction we should step.
That's the reason why step size is an important factor in optimization algorithms. Too small a step can make the algorithm take longer to converge. Too large a step can change the parameters too much, overstepping the optima.
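A toy sketch on a hypothetical quadratic, showing both failure modes:

```python
def gradient_descent(step_size, x0=5.0, iters=20):
    x = x0
    for _ in range(iters):
        grad = 2 * x              # derivative of f(x) = x^2
        x = x - step_size * grad
    return x

for lr in (0.001, 0.1, 1.1):      # too small, reasonable, too large
    print(lr, gradient_descent(lr))
# 0.001 barely moves toward 0, 0.1 converges, 1.1 oversteps and diverges.
```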
-
- Jan 2016
-
thinkingmachines.mit.edu
-
n and d
What do n and d mean?
-
- Jul 2015
-
-
- Jun 2015
-
www.technologyreview.com
-
Enter the Daily Mail website, MailOnline, and CNN online. These sites display news stories with the main points of the story displayed as bullet points that are written independently of the text. “Of key importance is that these summary points are abstractive and do not simply copy sentences from the documents,” say Hermann and co.
Someday, maybe projects like Hypothesis will help teach computers to read, too.
-