8 Matching Annotations
  1. Apr 2026
    1. We find internal representations of emotion concepts, which encode the broad concept of a particular emotion and generalize across contexts and behaviors it might be linked to.

      Surprisingly: the research finds that Claude internally contains genuine "emotion concept vectors". This is not a metaphor; they are linear representations that can be extracted, measured, and manipulated. Stranger still, these vectors generalize across contexts, abstract and general-purpose like human emotion concepts, rather than activating only near specific trigger words.

    2. We find internal representations of emotion concepts, which encode the broad concept of a particular emotion and generalize across contexts and behaviors it might be linked to.

      The research finds that Claude internally contains "emotion concept vectors" that generalize across contexts: the same "fear" vector activates both when fear is expressed directly and when a dangerous situation is merely implied. This suggests the model has learned the abstract concept of an emotion rather than surface patterns, closely mirroring how human neuroscience understands emotion. It is surprising that such structure emerges spontaneously.
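A common way to obtain such a concept direction (a sketch of the general technique, not the paper's actual method; the data here is synthetic stand-in activations) is the difference of mean activations between concept-present and concept-absent prompts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: activations (n_samples x d_model) collected on
# "fear" prompts vs. neutral prompts. Random data with an injected shift
# along dimension 0 stands in for real model activations.
d_model = 64
fear_acts = rng.normal(0.0, 1.0, (100, d_model)) + 2.0 * np.eye(d_model)[0]
neutral_acts = rng.normal(0.0, 1.0, (100, d_model))

# A "concept vector" as the normalized difference of mean activations.
concept = fear_acts.mean(axis=0) - neutral_acts.mean(axis=0)
concept /= np.linalg.norm(concept)

# Measuring: project activations onto the concept direction; the two
# groups separate along it.
score_fear = fear_acts @ concept
score_neutral = neutral_acts @ concept
print(score_fear.mean() > score_neutral.mean())
```

Steering would then amount to adding a scaled copy of `concept` to the residual stream at inference time.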

  2. May 2024
    1. Unfortunately, version vectors are not safe in the presence of Byzantine nodes, as shown in Figure 1. This is because a Byzantine node may generate several distinct updates with the same sequence number, and send them to different nodes (this failure mode is known as equivocation). Subsequently, when correct nodes p and q exchange version vectors, they may believe that they have delivered the same set of updates because their version vectors are identical, even though they have in fact delivered different updates.

      Version vectors are not BFT
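The failure mode can be made concrete with a minimal sketch (assumed data shapes for illustration, not the paper's formalism): a Byzantine node b equivocates by issuing two different updates under the same sequence number, so correct nodes p and q end up with identical version vectors but different delivered updates.

```python
# Version vector: node id -> highest sequence number delivered from that node.
p_vv = {"b": 1}
q_vv = {"b": 1}

# The two conflicting updates b sent under the same sequence number 1.
p_delivered = {("b", 1): "set x = 1"}   # what b told p
q_delivered = {("b", 1): "set x = 2"}   # what b told q

# p and q compare version vectors and wrongly conclude they are in sync.
assert p_vv == q_vv                  # version vectors identical...
assert p_delivered != q_delivered    # ...yet the delivered updates differ
print("equivocation undetected by version vectors")
```

Detecting this requires comparing update contents (e.g. hashes), not just counters, which is why BFT-safe designs replace sequence numbers with content-addressed identifiers.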

  3. Sep 2023
  4. Apr 2021
  5. May 2017
    1. A cognitive signature™ encodes the exact structure of a graph.
      ● It is a lossless encoding, similar to a Gödel numbering. *
      ● For unlabeled graphs, integers are sufficient for a cognitive signature.
      ● For example, 0 maps to and from an empty graph with no nodes or arcs.
      ● 1, 2, 3, 4, 5, and 6 can be mapped to and from the following graphs:
      ● To encode the structure of conceptual graphs in Cognitive Memory, the cognitive signatures are based on generalized combinatorial maps. **
      By contrast, a word vector encodes labels, but not structure.
      ● A word vector is a "bag of labels" that ignores the graph connections.
      ● Word vectors are often used for measuring the similarity of documents.
      ● But they discard the structural information necessary for reasoning, question answering, and language understanding.

      Comparing Kyndi's Cognitive Signature to word vectors: a word vector is a bag of labels, whereas a cognitive signature captures structure as well as ontology.
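The contrast can be illustrated with a toy example (not Kyndi's actual encoding): two graphs with the same node labels but different arcs are indistinguishable as bags of labels, yet any structure-preserving encoding separates them.

```python
from collections import Counter

# Toy graph: a list of node labels plus a set of directed arcs
# between node indices.
g1_labels = ["Cat", "On", "Mat"]
g1_arcs = {(0, 1), (1, 2)}          # Cat -> On -> Mat

g2_labels = ["Mat", "On", "Cat"]
g2_arcs = {(0, 1), (1, 2)}          # Mat -> On -> Cat: same labels, different structure

def bag_of_labels(labels):
    # The "word vector" view: multiset of labels, connections discarded.
    return Counter(labels)

def structural_encoding(labels, arcs):
    # A toy stand-in for a lossless structural encoding:
    # labels in node order plus the sorted arc set.
    return (tuple(labels), tuple(sorted(arcs)))

print(bag_of_labels(g1_labels) == bag_of_labels(g2_labels))              # bags agree
print(structural_encoding(g1_labels, g1_arcs)
      == structural_encoding(g2_labels, g2_arcs))                        # structures differ
```

"Cat on mat" and "mat on cat" collapse to the same bag of labels, which is exactly the information loss the slide is pointing at.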

  6. Apr 2017
    1. What does a pair of orthonormal vectors in 2-D Euclidean space look like? Let u = (x₁, y₁) and v = (x₂, y₂). Consider the restrictions on x₁, x₂, y₁, y₂ required to make u and v form an orthonormal pair. From the orthogonality restriction, u · v = 0. From the unit length restriction on u, ‖u‖ = 1. From the unit length restriction on v, ‖v‖ = 1. Expanding these terms gives 3 equations:

      (1) x₁x₂ + y₁y₂ = 0
      (2) √(x₁² + y₁²) = 1
      (3) √(x₂² + y₂²) = 1

      Converting from Cartesian to polar coordinates, and considering Equations (2) and (3), immediately gives the result r₁ = r₂ = 1. In other words, requiring the vectors to be of unit length restricts them to lie on the unit circle. After substitution, Equation (1) becomes cos θ₁ cos θ₂ + sin θ₁ sin θ₂ = 0. Rearranging gives tan θ₁ = −cot θ₂. Using a trigonometric identity to convert the cotangent term gives tan θ₁ = tan(θ₂ + π/2), and therefore θ₁ = θ₂ + π/2. It is clear that in the plane, orthonormal vectors are simply radii of the unit circle whose difference in angles equals 90°.