unconventional ones that require creative thinking.
E.g., sitting in a car and the dog running outside of the car = walking the dog. But would AI recognise it as such?
Bongard problems
See images below; in a Bongard problem the left set of squares contains shapes that share a defining/grouping feature, and the right set of squares has a defining feature that is different from the left set - your goal is to identify the difference.
specific entities or general concepts,
It was supposed to contain all the unwritten knowledge that we have, enabling AI to function at human level in vision, language, planning, reasoning and other domains.
analogy
analogies underlie our ability to abstract and to form concepts! Emmanuel Sander: "Without concepts there can be no thought, and without analogies there can be no concepts".
Lakoff and Johnson’s claim
They claimed that our understanding of essentially all abstract concepts comes about via metaphors based on core physical knowledge.
simple word-based cues were removed,
e.g., removing the word 'not' from a sentence - the presence of these types of words can help predict the correct answer.
scaling up
"Scale is all you need" idea in AI.
sound wave
This is a freaky idea when you consider the number of people who have electronics like Alexa in their house!
fail to capture the deeper meaning of an image
Take for example the image of the soldier arriving at home and greeting her dog. For us this image has a large emotional load, but for an AI it will be an image of a dog and a woman....
ones
A lot of "very poor" ratings, but also a lot of "amazing" ratings.
(LSTM)
Forget old information: use the forget gate to determine what to remove from the cell state.
Update memory: use the input gate and candidate values to add new relevant information to the cell state.
Generate output: use the cell state, filtered through the output gate, to produce the hidden state.
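A quick sketch (my own, not from the book) of a single LSTM step in numpy, just to make the three gates concrete - the weight layout, sizes and random values are arbitrary:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step. W maps [h_prev, x] to the four gate pre-activations."""
    z = W @ np.concatenate([h_prev, x]) + b
    H = h_prev.size
    f = sigmoid(z[0:H])        # forget gate: what to remove from the cell state
    i = sigmoid(z[H:2*H])      # input gate: how much new information to let in
    g = np.tanh(z[2*H:3*H])    # candidate values to add to memory
    o = sigmoid(z[3*H:4*H])    # output gate: what part of the cell state to expose
    c = f * c_prev + i * g     # update memory (cell state)
    h = o * np.tanh(c)         # generate output (hidden state)
    return h, c

# toy sizes: input of 3 features, hidden state of 4 units (arbitrary choice)
rng = np.random.default_rng(0)
H, X = 4, 3
W, b = rng.normal(size=(4*H, H+X)), np.zeros(4*H)
h, c = lstm_step(rng.normal(size=X), np.zeros(H), np.zeros(H), W, b)
```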
statistical machine translation (SMT)
Following the AI trend at that time, SMT relied on learning from data rather than humans specifying rules.
point.
There is thus a multi-dimensional space in which words can be mapped according to their relationships to each other. Take for example charm, which can be linked to charisma, but also to bracelet!
the meaning of a word can be defined in terms of other words it tends to occur with, and the words that tend to occur with those words,
"You shall know a word by the company it keeps!"
positive
"Despite the heavy subject matter, there's enough humor to keep it from becoming too dark" "there's nothing here that is disturbing or horrific as some people have suggested" "I was a little too young to see this movie when it first came out"
mini-reviews
"The plot is heavy and a sense of humor is largely missing" "a little too dark for my taste" "it felt as if the producers tried to make it as disturbing and horrific as they possibly could" etc...
NLP
Natural Language Processing
four-year-old child brings to understanding language.
Take for example the hamburger story! To correctly interpret this an AI would need to know that a hamburger is food, that burnt to a crisp is not rare, that since the man stormed out of the restaurant it is likely that he didn't eat the burger, etc! So much background information / information that you have to read between the lines to get!
.
AND humans do not always display ethical / moral behaviour...
clear and consistent.
We seem to have different values when we reason about other people than when we reason about ourselves.
.
This is because in this situation you are actively inflicting harm on yourself, something that I think we are evolutionarily programmed to avoid...
3.
Asimov argued that these rules would inevitably fail, and he illustrated this point with a story: say you command a robot to go towards a dangerous substance. Following law 2 the robot will do so, but once it nears, the third law (self-preservation) kicks in and drives it back, trapping the robot in an endless loop.
?
What weighs heavier; the potential benefits of AI or the potential risks?
.
Once something is achieved it is no longer 'intelligent'.
convolutional neural network
An input layer and an output layer with many hidden (convolutional) layers in between.
.
This has serious implications! It can also be done with audio, and can, for example, mess up self-driving cars' recognition of road markings!
.
E.g., Will Landecker's DNN trained to classify images as 'contains animal' or 'doesn't contain animal'. It was accurate in this, but later it turned out that the network had learned to associate blurry backgrounds with 'contains animal'. It used a shortcut!
“epoch,”
"events" / this process happening once is an epoch, it happens multiple times in multiple epochs to improve the performance.
convolutions
Describes how the filter overlaps the input: it slides across the image, computing a weighted sum over each overlapping patch.
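A minimal sketch of that sliding operation in numpy (the image and filter values are arbitrary; deep-learning libraries actually compute this cross-correlation form and still call it convolution):

```python
import numpy as np

def convolve2d(image, kernel):
    """'Valid' 2D convolution: weighted sum over each overlapping patch."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i:i+kh, j:j+kw]     # region of the image under the filter
            out[i, j] = np.sum(patch * kernel)
    return out

# toy example: a 3x3 vertical-edge filter over a random 6x6 "image"
image = np.random.default_rng(0).random((6, 6))
kernel = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]])
print(convolve2d(image, kernel).shape)  # (4, 4)
```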
neocognitron
Some success in recognising handwritten digits.
cognitron
One of the earliest deep neural networks.
.
Furthermore, AI can't recognise the nuances and emotional depth that are present in a lot of images: e.g., it can't deduce a story from a picture the way we can....
Kapor argues that AI lacks the experiential, embodied learning and emotional cognition fundamental to human intelligence, while Kurzweil counters that advances in virtual reality and a reverse-engineered brain could simulate such experiences.
Without a human body and everything accompanying this, a machine will never be able to learn everything needed to pass this strict Turing test.
.
He also noted that Kurzweil was clever in his use of the 'Christopher Columbus ploy': "they all laughed".
"Singularity,"
"A future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed". --> when AI exceeds human intelligence.
Turing test
The Turing test is passed if a computer can interact with a person without being identified as a machine / computer.
“bad at logic, good at Frisbee.”
You do it automatically - without conscious thought // subsymbolic AI is uninterpretable, but it does everything
.
Back-propagation takes the error observed at the output, and sends it backwards to assign proper blame to each weight in the network - determining how much to change each weight to reduce the error. Learning is thus gradually modifying the weights so that the output's error gets as close as possible to 0 on all training examples.
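A bare-bones numpy sketch of that idea on a tiny made-up problem (XOR): forward pass, backwards blame assignment, gradual weight updates. The sizes, learning rate and number of epochs are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # toy inputs (XOR)
y = np.array([[0], [1], [1], [0]], dtype=float)              # target outputs

sigmoid = lambda z: 1 / (1 + np.exp(-z))
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output weights

for epoch in range(10000):
    # forward pass: compute the network's output for every training example
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # backward pass: take the error at the output and send it backwards,
    # assigning blame to each weight
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)

    # gradually modify the weights so the output error shrinks towards 0
    W2 -= 0.5 * h.T @ grad_out;  b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h;    b1 -= 0.5 * grad_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # typically ends up close to the targets [0, 1, 1, 0]
```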
.
Perceptron + hidden units = multilayer neural network / multilayer neural network + multiple layers of hidden units = deep network
.
Later, they delve into all the common knowledge that we have, our intuition, ability to use metaphors and abstract thinking etc.
big four “founders”
Marvin Minsky, John McCarthy, Herbert Simon and Allen Newell
perceptron-learning algorithm
One of Rosenblatt's major contributions.
labeled examples,
A set of the examples are positive (e.g., 8s written by different people), and a set of the examples are negative (e.g., other digits written by different people). This is the training set. There is also a test set, used to evaluate performance.
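A small sketch of this setup using a perceptron on synthetic 2D points standing in for the positive/negative examples (not real handwritten digits): learn the weights from the training set, then evaluate on the held-out test set:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic stand-in for "8s" (positive, label +1) vs "other digits" (negative, label -1)
pos = rng.normal(loc=+1.0, size=(50, 2))
neg = rng.normal(loc=-1.0, size=(50, 2))
X = np.vstack([pos, neg])
y = np.array([1] * 50 + [-1] * 50)

# split the labeled examples into a training set and a test set
order = rng.permutation(len(X))
train, test = order[:70], order[70:]

# perceptron-learning rule: when an example is misclassified, nudge the weights toward it
w, b = np.zeros(2), 0.0
for _ in range(20):                       # several passes over the training set
    for i in train:
        if y[i] * (X[i] @ w + b) <= 0:    # wrong (or on the boundary): update
            w += y[i] * X[i]
            b += y[i]

# evaluate on the test set, which the perceptron never saw during training
accuracy = np.mean(np.sign(X[test] @ w + b) == y[test])
print(f"test accuracy: {accuracy:.2f}")
```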
subsymbolic
Subsymbolic AI is essentially a stack of equations; unlike symbolic AI it is not built from logic rules and does not contain human-understandable language.
symbolic
Symbolic AI is typically understandable to humans, so it is written in language.
competence, autonomy, and relatedness:
Self-determination theory?
chronic traumatic encephalopathy (CTE).
Chronic traumatic encephalopathy is a neurodegenerative disease linked to repeated trauma to the head. The encephalopathy symptoms can include behavioral problems, mood problems, and problems with thinking. The disease often gets worse over time and can result in dementia.
hardiness
control, commitment and challenge.
Type of sport and the required motor skills
Are not taken into account.
BMD
Bone mineral density
ATHENA
Adolescent-girl focussed
ATLAS
Adolescent-boy focussed
amenorrhea
Lack of menstruation
Chronic exercise,
Intense exercise on a daily / regular basis
overtraining
Imbalance between training, other life stressors and recovery
recovery paradox
High stressors and therefore a high need to recover, but because of these stressors, not being able to recover!
consolidation
Making something stronger or more solid
Self-Determination Theory
Autonomy, competence and relatedness --> wellbeing etc.
remuneration
Financial compensation / money!
focal dystonia,
Focal dystonia, also called focal task-specific dystonia, is a neurological condition that affects a muscle or group of muscles in a specific part of the body during specific activities, causing involuntary muscular contractions and abnormal postures.
skills
Higher self-monitoring; e.g., walking down the stairs when highly anxious and focussing on how you take each single step
parasympathetic nervous system
"Rest and digest" system
sympathetic nervous system (SNS),
"fight or flight" system
abruptly
Like the Cusp Catastrophe model does...
Dot Probe Test
A shorter reaction time to dots at the location of the emotional stimulus indicates attentional bias.
Bracing and Performance Effects
Bracing --> a.k.a. double pull effect
Emotional Stroop Test
A person with competition anxiety for example will take longer to name the color when there are competition-loss related words.