1,366 Matching Annotations
  1. Mar 2019
    1. The first step is lexical analysis, i.e. word segmentation and part-of-speech (POS) tagging. The words and POS labels are used as features in the subsequent models. For the shared task we used HanLP [1] as our Chinese lexical analyzer.

      How the SLU model works:

      • 1 The first step is lexical analysis, i.e. word segmentation followed by POS tagging. The paper uses HanLP for lexical analysis.

      • 2 The second step is slot boundary detection, treated as sequence labeling with the BILOU scheme. Both character-based and word-based sequence labeling are used: the character-based version is a CRF with a window of 7 using lexical and dictionary features, while the word-based CRF has a window size of 5 with lexical, POS, and dictionary features. A dictionary feature indicates whether the current character/word stands in a prefix/infix/suffix relation to some entry in the entity dictionary. Each CRF produces an n-best list (n = 3), and all 2n outputs feed the next step. The character-based labeler is there to compensate for the possible impact of poor word segmentation.

      • 3 The third step is slot type identification, using a regularized logistic regression classifier with the predicted slot, the surrounding characters/words, and their POS tags as features.

      • 4 The fourth step is slot correction, meant to fix recognition errors introduced by ASR. It is a search-based method: since there is a lexicon for each slot type, if a predicted slot s of type T is not found in the corresponding slot lexicon, s is used as a query to retrieve lexicon entries by minimum edit distance. The search is run twice, once with s as Chinese characters and once with s as pinyin, and the best result is obtained by re-ranking the candidates from both searches.

      • 5 The last step is intent classification, using XGBoost with its default parameters. The features are the word tokens, the query length, and the slots predicted in the previous steps.
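      The lexicon lookup behind step 4 (slot correction) can be sketched as follows. This is a minimal illustration with hypothetical lexicon entries, not the paper's implementation; the parallel pinyin-based search and the final re-ranking pass are omitted:

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming (one rolling row)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def correct_slot(slot, lexicon):
    """If the predicted slot value is not in the lexicon for its type,
    replace it with the lexicon entry at minimum edit distance."""
    if slot in lexicon:
        return slot
    return min(lexicon, key=lambda entry: edit_distance(slot, entry))
```

      E.g. with a toy city lexicon {"beijing", "shanghai"}, an ASR-garbled slot "beijng" would be corrected to "beijing".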

    2. Each rule is of the form “if the query q is listed in a particular lexicon L, and the preceding queries and their predicted domain labels satisfy certain conditions, then q is assigned a certain intent label and, with the exception of short commands, the entire q is regarded as a slot of the type corresponding to L.” The rules are arranged in sequential order in accordance with their priorities.

      The concrete form of a rule is: "if query q is listed in a particular lexicon L, and the preceding queries and their predicted domain labels satisfy certain conditions, then q is assigned a particular intent label and, except for short commands, the whole of q is treated as a slot of the type corresponding to L." All rules are ordered by priority.
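      The rule form amounts to a lexicon lookup plus a context check, applied in priority order. The lexicon entries, intent names, and context condition below are hypothetical placeholders, not the shared task's actual rules:

```python
# Hypothetical lexicons keyed by slot type; iteration order is priority order.
LEXICONS = {
    "singer": {"jay chou", "faye wong"},
    "song":   {"blue and white porcelain"},
}

def apply_rules(query, prev_domains):
    """Return (intent, slot_type) if some rule fires, else None.

    A rule fires when the whole query is a lexicon entry and the
    context condition holds (here: no history, or last domain is music).
    """
    for slot_type, lexicon in LEXICONS.items():
        if query in lexicon and (not prev_domains or prev_domains[-1] == "music"):
            return ("play_" + slot_type, slot_type)
    return None
```

      An entity-only query like "jay chou" after a music-domain turn would match the first rule; a query with no lexicon hit returns None, handing control to the model-based component.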

    3. Figure 1 shows the framework of our SLU system, which consists of the context-dependent rules for entity-only queries and the context-independent model for queries with IISPs. The entire system feeds the query to the rules first. If the rule-based component returns null result, that means the query is judged to contain IISPs and the model-based component will continue to process it. Otherwise, it means the query is regarded as entity-only and the result of the rules is returned as the final output.

      A query first goes through the rule-based check for entity-only queries. If the result is null, the query is judged to contain IISPs and the model-based component continues to process it; otherwise the query is regarded as entity-only and the rules' result is returned directly as the final output.

    4. As in real use cases of dialog systems, the queries in the shared task can be roughly divided into two kinds, viz. queries with intent-indicating salient phrases and queries without. By intent-indicating salient phrase (IISP) it is meant a phrase in the query that shows the intent of the query. E.g. the phrase “” in the query “” and the phrases “” in the query “” are IISPs.

      The corpus can be divided into two kinds of queries: those containing phrases that clearly indicate the intent, and those without.

    1. A composite process, remember, is organized from both human processes and computer processes

      Human-system/Tool-system fine-grained intersections and compositionalities. There are UX and UI levels here. There are likely further levels having to do with intentionalities of (semi-) autonomous Tool Systems, how they are to be guided by Human intentionalities, and - most importantly - how humans fully ascertain and guide their own intentionalities.

  2. Feb 2019
    1. Spoken language understanding (SLU) comprises two tasks, intent identification and slot filling. That is, given the current query along with the previous queries in the same session, an SLU system predicts the intent of the current query and also all slots (entities or labels) associated with the predicted intent. The significance of SLU lies in that each type of intent corresponds to a particular service API and the slots correspond to the parameters required by this API. SLU helps the dialog system to decide how to satisfy the user’s need by calling the right service with the right information.

      SLU involves two tasks: intent identification and slot filling.

      Difficulties in practice:

      • 1 The complexity of intent classification
      • 2 World knowledge
      • 3 User state
    1. Dialogue management can also be viewed as a classification task, mapping each dialogue state to an appropriate dialogue action. As in other supervised learning tasks, the classifier can be trained from an annotated corpus. However, the action the system should choose in a given state should not merely imitate the action paired with that state in the training data; it should be the action that leads to a successful dialogue. It is therefore more appropriate to view dialogue as a decision process, optimizing action selection against the overall success of the dialogue [32]. This makes it a planning problem, and reinforcement learning [33] can be used to learn the optimal result.
    2. In terms of ontology and business logic, dialogue systems divide into domain task-oriented systems and open-domain information interaction. Domain task-oriented systems target a concrete application domain, with relatively clear definitions of semantic units, ontology structure, and the range of user goals, e.g. flight booking, video search, device control; such interaction usually aims at completing a specific task. Open-domain interaction is not tied to a particular domain, or rather faces a very broad range of domains; its goal is not a business task but satisfying other user needs, e.g. open-domain QA and chit-chat. Although the latter can showcase the power of AI to some extent, it does not focus on helping people solve real tasks, so its practical scope is rather narrow. In recent years, with the rapid development of mobile devices, task-oriented dialogue systems and the related cognitive control theories have attracted growing attention from academia and industry, and they are the focus of this paper.
    1. We complement recent work by showing the effectiveness of simple sequence-to-sequence neural architectures with a copy mechanism. Our model outperforms more complex memory-augmented models by 7% in per-response generation and is on par with the current state-of-the-art on DSTC2, a real-world task-oriented dialogue dataset.

      A simple seq2seq framework with a copy mechanism beats more complex memory-augmented models by 7 points in per-response generation and matches the state of the art on DSTC2, a real-world dataset.

    2. A Copy-Augmented Sequence-to-Sequence Architecture Gives Good Performance on Task-Oriented Dialogue

      Task-oriented dialogue focuses on conversational agents that participate in dialogues with user goals on domain-specific topics. In contrast to chatbots, which simply seek to sustain open-ended meaningful discourse, existing task-oriented agents usually explicitly model user intent and belief states. This paper examines bypassing such an explicit representation by depending on a latent neural embedding of state and learning selective attention to dialogue history together with copying to incorporate relevant prior context. We complement recent work by showing the effectiveness of simple sequence-to-sequence neural architectures with a copy mechanism. Our model outperforms more complex memory-augmented models by 7% in per-response generation and is on par with the current state-of-the-art on DSTC2, a real-world task-oriented dialogue dataset.

    1. Symptom Extraction. We follow the BIO (begin-in-out) schema for symptom identification (Figure 1). Each Chinese character is assigned a label of ”B”, ”I” or ”O”. Also, each extracted symptom expression is tagged with True or False indicating whether the patient suffers from this symptom or not. In order to improve the annotation agreement between annotators, we create two guidelines for the self-report and the conversational data respectively. Each record is annotated by at least two annotators. Any inconsistency would be further judged by the third one. The Cohen’s kappa coefficient between two annotators are 71% and 67% for self-reports and conversations respectively.

      Symptom extraction uses the BIO format: each Chinese character is labeled "B", "I" or "O". Each extracted symptom is also tagged "True" or "False" according to whether the patient actually has it. Every record is annotated by at least two annotators, with a third adjudicating any inconsistency. Cohen's kappa measures inter-annotator agreement.
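      The character-level labeling can be illustrated with a small helper. This is a generic sketch, not the authors' annotation tooling; spans are (start, end) character offsets with exclusive ends:

```python
def bio_labels(text, spans):
    """Assign a B/I/O label to each character of `text`, given the
    (start, end) character spans of the annotated symptom mentions."""
    labels = ["O"] * len(text)
    for start, end in spans:
        labels[start] = "B"                 # first character of the mention
        for i in range(start + 1, end):
            labels[i] = "I"                 # remaining characters
    return labels
```

      E.g. for the text 我头痛发烧 with symptom spans (1, 3) and (3, 5), this yields O B I B I.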

    2. In this paper, we make a move to build a dialogue system for automatic diagnosis. We first build a dataset collected from an online medical forum by extracting symptoms from both patients’ self-reports and conversational data between patients and doctors. Then we propose a task-oriented dialogue system framework to make the diagnosis for patients automatically, which can converse with patients to collect additional symptoms beyond their self-reports. Experimental results on our dataset show that additional symptoms extracted from conversation can greatly improve the accuracy for disease identification and our dialogue system is able to collect these symptoms automatically and make a better diagnosis.

      Training data came from an online medical forum, combining patients' self-reported symptoms with their conversations with doctors. The results show that symptom descriptions obtained from the conversations substantially improve disease identification, and the paper's dialogue system can effectively collect this information to aid diagnosis.

    1. To overcome this issue, we explore data generation using templates and terminologies and data augmentation approaches. Namely, we report our experiments using paraphrasing and word representations learned on a large EHR corpus with Fasttext and ELMo, to learn a NLU model without any available dataset. We evaluate on a NLU task of natural language queries in EHRs divided in slot-filling and intent classification sub-tasks. On the slot-filling task, we obtain a F-score of 0.76 with the ELMo representation; and on the classification task, a mean F-score of 0.71. Our results show that this method could be used to develop a baseline system.

      Data is scarce in the biomedical domain. To address this, the authors tried template- and terminology-based data generation and data augmentation techniques, first building word representations with ELMo (and Fasttext) on a large EHR corpus. Evaluation is split into two subtasks: slot filling and intent classification.

      A rather applied paper; the results do not show much.

    1. PyDial: A Multi-domain Statistical Dialogue System Toolkit

      An open-source, end-to-end statistical dialogue system toolkit.

      Its overall architecture comprises a Semantic Decoder, Belief Tracker, Policy/Reply System, and Language Generator. Overall the system supports rule-based decision processes while also integrating model-based components. The source code is worth a look.

  3. www.iro.umontreal.ca
    1. For the slot filling task, the input is the sentence consisting of a sequence of words, L, and the output is a sequence of slot/concept IDs, S, one for each word. In the statistical SLU systems, the task is often formalized as a pattern recognition problem: Given the word sequence L, the goal of SLU is to find the semantic representation of the slot sequence S that has the maximum a posteriori probability P(S|L).

      For slot filling, the input is a sentence consisting of a sequence of words, and the output is a slot/concept ID for each word. In statistical SLU systems the task is formalized as: given the word sequence L, find the slot sequence S that maximizes the posterior probability P(S|L).
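      One classical way to compute argmax over S of P(S|L) is Viterbi decoding under a first-order factorization. The sketch below assumes HMM-style start/transition/emission tables with nonzero probabilities (the toy numbers in the test are mine); real statistical SLU systems typically use CRFs or neural taggers instead:

```python
import math

def viterbi(words, slots, start, trans, emit):
    """Most likely slot sequence S for word sequence L = words, where
    P(S|L) is factorized into start, transition, and emission
    probabilities (all assumed nonzero)."""
    # best[s] = (log-prob of best path ending in slot s, that path)
    best = {s: (math.log(start[s] * emit[s][words[0]]), [s]) for s in slots}
    for w in words[1:]:
        best = {s: max(((lp + math.log(trans[p][s] * emit[s][w]), path + [s])
                        for p, (lp, path) in best.items()),
                       key=lambda x: x[0])
                for s in slots}
    return max(best.values(), key=lambda x: x[0])[1]
```

      With two slots O and CITY and a query like "book paris", the decoder picks the sequence whose combined emission and transition scores are highest.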

    1. the dataset used in our experiment has only the tags of filled information slots extracted by pattern matching between dialogue log and final order information

      The dataset consists of coffee-ordering dialogues: 31,567 dialogues and 142,412 utterance pairs. The only annotations are the filled-slot tags extracted by pattern matching.

    2. the agent model swaps the input and output sequences, and it also takes the tag of filled information slots as an input which is extracted from dialogue in previous turns by pattern matching with the order information in ground truth

      The agent model is pre-trained first. Its network structure is the same as the user model's, but with input and output swapped, and it additionally takes as input the tags of slots filled in previous turns, extracted by pattern matching against the ground-truth order. The two information sources are not simply concatenated; instead suitable attention weights are learned to make better use of the attention mechanism. No extra semantic or intent labels are required.

    3. By directly learning from the raw dialogue logs, the network takes the agent utterance Xa: xa1, xa2, ..., xan as the input sequence and takes the corresponding user utterance Yu: yu1, yu2, ..., yum as the target sequence. (Figure 2: the network structure, an encoder-decoder with an attention mechanism.)

      User model: simply a bidirectional LSTM, with the agent's utterance as X and the corresponding user utterance as Y.
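      The attention mechanism in the encoder-decoder figure can be sketched in its simplest dot-product form. This is a generic illustration, not the paper's exact parameterization (which operates on LSTM hidden states):

```python
import math

def dot_product_attention(query, keys, values):
    """Return the softmax-weighted sum of `values`, with weights taken
    from the dot products between `query` and each key vector."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    m = max(scores)                          # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # context vector: weighted sum over the value vectors
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]
```

      In the decoder, the query would be the current decoder state and the keys/values the encoder states, so each output step attends most to the input positions it aligns with.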

    4. In the task-oriented dialogues, a user usually firstly shows the intention to the agent and then answers the agent’s questions one by one to specify the demand.

      The observation here is that in the usual scenario the user first expresses an intent and then answers the agent's questions one by one to specify the demand.

      Users are mostly passive, only occasionally driving a turn. In other words, users essentially answer, within a single turn, the question the agent just asked. So the user model can be built on the assumption that it only needs to consider a single turn when producing a reply, leaving multi-turn dialogue to the agent model.

    5. we propose a uSer and Agent Model IntegrAtion (SAMIA) framework inspired by an observation that the roles of the user and agent models are asymmetric. Firstly, this SAMIA framework models the user model as a Seq2Seq learning problem instead of ranking or designing rules. Then the built user model is used as a leverage to train the agent model by deep reinforcement learning. In the test phase, the output of the agent model is filtered by the user model to enhance the stability and robustness. Experiments on a real-world coffee ordering dataset verify the effectiveness of the proposed SAMIA framework.

      The authors complain that existing bots are crude, built on handcrafted rules, and that reinforcement learning only works in a few limited scenarios. Motivated by the asymmetric roles of the user and the agent, they built SAMIA: the user model is a Seq2Seq model rather than rules or ranking, and the agent model is then trained with reinforcement learning on top of that user model.

    6. Integrating User and Agent Models: A Deep Task-Oriented Dialogue System. Weiyan Wang, Yuxiang WU, Yu Zhang, Zhongqi Lu, Kaixiang Mo, Qiang Yang (Submitted on 10 Nov 2017). Task-oriented dialogue systems can efficiently serve a large number of customers and relieve people from tedious works. However, existing task-oriented dialogue systems depend on handcrafted actions and states or extra semantic labels, which sometimes degrades user experience despite the intensive human intervention. Moreover, current user simulators have limited expressive ability so that deep reinforcement Seq2Seq models have to rely on selfplay and only work in some special cases. To address those problems, we propose a uSer and Agent Model IntegrAtion (SAMIA) framework inspired by an observation that the roles of the user and agent models are asymmetric. Firstly, this SAMIA framework models the user model as a Seq2Seq learning problem instead of ranking or designing rules. Then the built user model is used as a leverage to train the agent model by deep reinforcement learning. In the test phase, the output of the agent model is filtered by the user model to enhance the stability and robustness. Experiments on a real-world coffee ordering dataset verify the effectiveness of the proposed SAMIA framework.

    1. Deep Reinforcement Learning for Dialogue Generation

      Recent neural models of dialogue generation offer great promise for generating responses for conversational agents, but tend to be short-sighted, predicting utterances one at a time while ignoring their influence on future outcomes. Modeling the future direction of a dialogue is crucial to generating coherent, interesting dialogues, a need which led traditional NLP models of dialogue to draw on reinforcement learning. In this paper, we show how to integrate these goals, applying deep reinforcement learning to model future reward in chatbot dialogue. The model simulates dialogues between two virtual agents, using policy gradient methods to reward sequences that display three useful conversational properties: informativity, coherence, and ease of answering (related to forward-looking function). We evaluate our model on diversity, length as well as with human judges, showing that the proposed algorithm generates more interactive responses and manages to foster a more sustained conversation in dialogue simulation. This work marks a first step towards learning a neural conversational model based on the long-term success of dialogues.

    1. Dialog System & Technology Challenge 6 Overview of Track 1 - End-to-End Goal-Oriented Dialog learning

      End-to-end dialog learning is an important research subject in the domain of conversational systems. The primary task consists in learning a dialog policy from transactional dialogs of a given domain. In this context, usable datasets are needed to evaluate learning approaches, yet remain scarce. For this challenge, a transaction dialog dataset has been produced using a dialog simulation framework developed and released by Facebook AI Research. Overall, nine teams participated in the challenge. In this report, we describe the task and the dataset. Then, we specify the evaluation metrics for the challenge. Finally, the results of the submitted runs of the participants are detailed.

    1. Intent Detection for code-mix utterances in task oriented dialogue systems

      Intent detection is an essential component of task-oriented dialogue systems. Over the years, extensive research has been conducted, resulting in many state-of-the-art models directed towards resolving users’ intents in dialogue. A variety of vector representations for user utterances have been explored for the same. However, these models and vectorization approaches have mostly been evaluated in a single-language environment. Dialogue systems generally have to deal with queries in different languages and, most importantly, the Code-Mix form of writing. Since Code-Mix texts are not bounded by a formal structure, they are difficult to handle. We thus conduct experiments across combinations of models and various vector representations for Code-Mix as well as multi-language utterances and evaluate how these models scale to a multi-language environment. Our aim is to find the best suitable combination of vector representation and models for the process of intent detection for code-mix utterances. We have evaluated the experiments on two different datasets, one consisting of only Code-Mix utterances and the other consisting of English, Hindi, and Code-Mix (English-Hindi) utterances.

    1. Improving Semantic Parsing for Task Oriented Dialog

      Semantic parsing using hierarchical representations has recently been proposed for task-oriented dialog with promising results. In this paper, we present three different improvements to the model: contextualized embeddings, ensembling, and pairwise re-ranking based on a language model. We taxonomize the errors possible for the hierarchical representation, such as wrong top intent, missing spans or split spans, and show that the three approaches correct different kinds of errors. The best model combines the three techniques and gives 6.4% better exact match accuracy than the state-of-the-art, with an error reduction of 33%, resulting in a new state-of-the-art result on the Task Oriented Parsing (TOP) dataset.

  4. Jan 2019
    1. For large-scale software systems, Van Roy believes we need to embrace a self-sufficient style of system design in which systems become self-configuring, healing, adapting, etc. The system has components as first class entities (specified by closures), that can be manipulated through higher-order programming. Components communicate through message-passing. Named state and transactions support system configuration and maintenance. On top of this, the system itself should be designed as a set of interlocking feedback loops.

      This is aimed at System Design, from a distributed systems perspective.

    1. In distributed processing, computing traditionally pursues efficiency and consistency, and repairing or correcting erroneous data records usually requires a separately designed mechanism. Compared with traditional databases, blockchains, because they must guarantee that data cannot be tampered with after the fact, introduce consensus mechanisms that provide more tolerance for errors to appear and be repaired. Many blockchain designers overlook this important idea: numerous projects chase ever shorter transaction and confirmation times, which weakens or even sacrifices other nodes' validation of the data. At the same time, earlier and faster confirmation brings its own problems: the nodes generating data must meet stricter requirements, such as never producing erroneous data, which is why many blockchain projects struggle to land in practice. System users shoulder the burden that data must be entered correctly the first time, and so choose on-chain data very conservatively and cautiously. In the end, the scope of deployed blockchain applications remains narrow, and much data that may contain errors cannot be combined with blockchain's strengths to upgrade business processes.

      Comment: Which is the better business model, traditional databases or blockchain-style processing? The answer already shows up in our daily work, yet it has become something unspeakable, blocked by power boundaries that are hard to cross. "Tolerate errors during the process, converge afterwards" is a rather lofty ideal; one can even glimpse in it the radiance of an ideal society. But people have not yet been able to apply this workflow at scale, and the reason is not that the goal is far away: it is that decision-making power is held by a small minority, while those who actually work with the data treat it as merely their job, neither contributing actively nor participating eagerly. The burden on system users is not that "data must be entered correctly the first time"; what they face is the self-reproach of having handed their rights over to others, and the confusion of running counter to an ideal world of democracy and openness.

  5. wendynorris.com
    1. Zack [42] distinguished these four terms according to two dimensions: the nature of what is being processed and the constitution of the processing problem. The nature of what is being processed is either information or frames of reference. With information, we mean “observations that have been cognitively processed and punctuated into coherent messages” [42]. Frames of reference [4, p. 108], on the other hand, are the interpretative frames which provide the context for creating and understanding information. There can be situations in which there is a lack of information or a frame of reference, or too much information or too many frames of reference to process.

      Description of information processing challenges and breakdowns.

      Uncertainty -- not enough information

      Complexity -- too much information

      Ambiguity -- lack of clear meaning

      Equivocality -- multiple meanings

    2. Table 3: DERMIS design premises [29]

      Muhren and Walle use 6 of the 9 most relevant design premises as future information system design guidelines for DERMIS, another crisis management system

      Information focus (dealing with complexity)

      Crisis memory (creating historical frames of reference)

      Exceptions as norms (support changing frames of reference in fluid, unpredictable scenario)

      Scope and nature of crisis (support adaptable management depending on type of crisis)

      Information validity and timeliness (synergy of coping with uncertainty and creating frames of reference from relevant, known information)

      Free exchange of information (synergy of social context and creating useful/sharable frames of reference)

    3. The problems of managing information and managing frames of reference are “tightly linked in a mutually interacting loop” and require “managing information and the systems that provide it” [42]. IS have been generally designed to overcome the information problems from Table 1. Most IS are aimed at either storing and retrieving information to reduce uncertainty, such as database management systems and document repositories, or at analyzing and processing large amounts of information to reduce complexity, such as decision support systems [31]. However, as we have previously discussed, information related strategies are not always helpful in coping with a variety of potential meanings. Problems of interpretation and the creation and management of frames of reference, which aids Sensemaking, have generally not been taken into account when designing IS. Most IS currently seem to intend the opposite because they aim at replacing or suppressing the possibility to make sense of situations.

      Description of problem in integrating sensemaking (interpretive information process) into structured data systems.

      information =/= data

    4. Sensemaking is about contextual rationality, built out of vague questions, muddy answers, and negotiated agreements that attempt to reduce ambiguity and equivocality. The genesis of Sensemaking is a lack of fit between what we expect and what we encounter [40]. With Sensemaking, one does not look at the question of “which course of action should we choose?”, but instead at an earlier point in time where users are unsure whether there is even a decision to be made, with questions such as “what is going on here, and should I even be asking this question just now?” [40]. This shows that Sensemaking is used to overcome situations of ambiguity. When there are too many interpretations of an event, people engage in Sensemaking too, to reduce equivocality.

      Definition of sensemaking and how the process interacts with ambiguity and equivocality in framing information.

      "Sensemaking is about coping with information processing challenges of ambiguity and equivocality by dealing with frames of reference."

    5. Decision making is traditionally viewed as a sequential process of problem classification and definition, alternative generation, alternative evaluation, and selection of the best course of action [26]. This process is about strategic rationality, aimed at reducing uncertainty [6, 36]. Uncertainty can be reduced through objective analysis because it consists of clear questions for which answers exist [5, 40]. Complexity can also be reduced by objective analysis, as it requires restricting or reducing factual information and associated linkages [42].

      Definition of decision making and how this process interacts with uncertainty and complexity in information.

      "Decision making is about coping with information processing challenges of uncertainty and complexity by dealing with information"

    6. The central problem requiring Sensemaking is mostly that there are too many potential meanings, and so acquiring information can sometimes help but often is not needed. Instead, triangulating information [34], socializing and exchanging different points of view [20], and thinking back of previous experiences to place the current situation into context, as the retrospection property showed us, are a few strategies that are likely to be more successful for Sensemaking.

      Strategies for sensemaking

    7. Just as the information processing challenges from Table 1 are not mutually exclusive, Sensemaking and decision making cannot be separated, but instead operate simultaneously. Meaning must be established and then sufficiently negotiated prior to acting on information [42]: Sensemaking shapes events into decisions, and decision making clarifies what is happening [40].

      Interaction between sensemaking and decision making

    8. Weick et al. [41, p. 419] formulate a gripping conclusion on what the seven Sensemaking properties are all about: “Taken together these properties suggest that increased skill at Sensemaking should occur when people are socialized to make do, be resilient, treat constraints as self-imposed, strive for plausibility, keep showing up, use retrospect to get a sense of direction, and articulate descriptions that energize. These are micro-level actions. They are small actions. But they are small actions with large consequences.”

      Description of how the seven properties interact to foster sensemaking.

    9. Crisis environments are characterized by various types of information problems that complicate the response, such as inaccurate, late, superficial, irrelevant, unreliable, and conflicting information [30, 32]. This poses difficulties for actors to make sense of what is going on and to take appropriate action. Such issues of information processing are a major challenge for the field of crisis management, both conceptually and empirically [19].

      Description of information problems in crisis environments.

    10. We use the theory of Sensemaking to study exactly this: how people make sense of their environment, and how they give meaning to what is happening. Sensemaking is a crucial process in crises, as the manner and thereby the success of how one deals with crucial events is determined by the grasp one has of a situation.

      Sensemaking frame used in this study relies on work by Weick, et al.

    1. Value Sensitive Design (VSD) emphasizes consideration of stakeholder values when making design decisions [5]. Applying this rationale to the goal of leveraging the capacity of digital workers during crisis events, we identify design solutions that fit the underlying community dynamics, including current work practices, organizational structures, and motivations of digital volunteer work.

      Description of developing the design agenda, values, and needs assessment

      Cites Value Sensitive Design

    1. By ignoring the diversity and discord of the ‘goals’ of the participants involved, the differentiation of strategies, and the incongruence of the conceptual frames of reference within a cooperating ensemble, much of the current CSCW research evades the problem of how to provide computer support for people cooperating through the establishment of a common information space.

      Has this design challenge been adequately addressed in CSCW (and CHI, for that matter) in the last 30-ish years?

    2. On the one hand, the visibility requirement is amplified by this divergence. That is, knowledge of the identity of the originator and the situational context motivating the production and dissemination of the information is required so as to enable any user of the information to interpret the likely motives of the originator. On the other hand, however, the visibility requirement is moderated by the divergence of interests and motives. A certain degree of opaqueness is required for discretionary decision making to be conducted in an environment charged with colliding interests. Hence, visibility must be bounded.

      What role does system meta data (version control, user history, etc.) play in bounding the visibility of decision making?

      This also seems to be an area ripe for more collaborative design approaches (participatory, reflective, feminist, etc.)

    3. Thus, a computer-based system supporting cooperative work involving decision making should enhance the ability of cooperating workers to interrelate their partial and parochial domain knowledge and facilitate the expression and communication of alternative perspectives on a given problem. This requires a representation of the problem domain as a whole as well as a representation, in some form, of the mappings between perspectives on that problem domain.

      This seems to still be a major challenge in information system design as well as collaborative workflow. Even if the information/meta context is made available, do people use it?

  6. Dec 2018
    1. The problem, then, was centered by social scientists in the process of design. Certainly, many studies in CSCW, HCI, information technology, and information science at least indirectly have emphasized a dichotomy between designers, programmers, and implementers on one hand and the social analyst on the other.

      Two different camps on how to resolve this problem:

      1) Change more flexible social activity/protocols to better align with technical limitations 2) Make systems more adaptable to ambiguity

    2. In particular, concurrency control problems arise when the software, data, and interface are distributed over several computers. Time delays when exchanging potentially conflicting actions are especially worrisome. ... If concurrency control is not established, people may invoke conflicting actions. As a result, the group may become confused because displays are inconsistent, and the groupware document corrupted due to events being handled out of order. (p. 207)

      This passage helps to explain the emphasis in CSCW papers on time/duration as a system design concern for workflow coordination (milliseconds between MTurk hits) versus time/representation considerations for system design

  7. Nov 2018
  8. Oct 2018
  9. Aug 2018
    1. CONCLUSIONS: These findings suggest that higher standing BP is a biomarker that helps identify persons with combat PTSD who are likely to benefit from prazosin. These results also are consistent with α1AR activation contributing to PTSD pathophysiology in a subgroup of patients.

      This is precisely the results I would expect. However, I completely disagree with their interpretation.

      People with high blood pressure (BP) can tolerate a reduction in BP without instigating compensatory mechanisms. People with normal or low BP would invoke compensation by the sympathetic nervous system in response to alpha blockade. This would counteract the depressant effects of adrenergic antagonism. Indeed, adrenaline and noradrenaline elevate in response to standing, which I find to be an obvious prediction. Thus, the lack of benefit from prazosin in these subjects may be mediated by an increase in adrenergic receptor activation other than the apha1-adrenoreceptor; in particular, the beta-adrenergic receptors are likely at fault. Propranolol, a beta-blocker, is used for PTSD, so this mechanism seems well substantiated.

      The study apparently found benefit for patients with BP over 110 (with more benefit for higher BP). Thus, I would conclude that systolic pressure below 110 induce compensation.

  10. Jul 2018
    1. Your digestive (say: dye-JES-tiv) system started working even before you took the first bite of your pizza. And the digestive system will be busy at work on your chewed-up lunch for the next few hours — or sometimes days, depending upon what you've eaten. This process, called digestion,

      A good website option to give students for online collaborative inquiry (when having students research, list this website)

    1. At least as far as these gentlemen were concerned, this was a talk about the future of technology. Taking their cue from Elon Musk colonizing Mars, Peter Thiel reversing the aging process, or Sam Altman and Ray Kurzweil uploading their minds into supercomputers, they were preparing for a digital future that had a whole lot less to do with making the world a better place than it did with transcending the human condition altogether and insulating themselves from a very real and present danger of climate change, rising sea levels, mass migrations, global pandemics, nativist panic, and resource depletion. For them, the future of technology is really about just one thing: escape.

      So often we consider technology as being about particular things, but it can be much more fruitful when thinking of it as a system.

    1. Perelman says his Babel Generator also proves how easy it is to game the system. While students are not going to walk into a standardized test with a Babel Generator in their back pocket, he says, they will quickly learn they can fool the algorithm by using lots of big words, complex sentences, and some key phrases - that make some English teachers cringe. "For example, you will get a higher score just by [writing] "in conclusion,'" he says.
  11. May 2018
  12. Apr 2018
  13. Mar 2018
    1. Scheduling of this kind is a fundamental operating-system function. Almost all computer resources are scheduled before use

      Scheduling is a fundamental function of the operating system; almost all computer resources are scheduled before they are used.
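      As an illustration of the quoted point, here is a minimal sketch of one classic CPU scheduling policy, round-robin, in Python. The policy choice, process names, and burst times are my own invented example, not anything from the quoted textbook; real OS schedulers are far more involved.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin CPU scheduling.

    bursts: dict mapping process name -> CPU time the process still needs.
    quantum: the time slice each process receives per turn.
    Returns the order in which processes finish.
    """
    ready = deque(bursts.items())   # all processes ready at time 0
    finished = []
    while ready:
        name, remaining = ready.popleft()
        if remaining <= quantum:
            finished.append(name)   # completes within its slice
        else:
            # preempted: rejoin the tail of the ready queue with less work left
            ready.append((name, remaining - quantum))
    return finished
```

      With a quantum of 3, `round_robin({"A": 5, "B": 2, "C": 9}, 3)` finishes B first (it fits in one slice), then A, then C, showing how the scheduler decides which process gets the resource next.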


  14. Feb 2018
  15. Dec 2017
  16. Nov 2017
    1. an environment unlike anything they will encounter outside of school

      Hm? Aren’t they likely to encounter Content Management Systems, Enterprise Resource Planning, Customer Relationship Management, Intranets, etc.? Granted, these aren’t precisely the same thing as LMS. But there’s quite a bit of continuity between Drupal, Oracle, Moodle, Sharepoint, and Salesforce.

    2. mandate the use of "learning management systems."

      Therein lies the rub. Mandated systems are a radically different thing from “systems which are available for use”. This quote from the aforelinked IHE piece is quite telling:

      “I want somebody to fight!” Crouch said. “These things are not cheap -- 300 grand or something like that? ... I want people to want it! When you’re trying to buy something, you want them to work at it!”

      In the end, it’s about “procurement”, which is quite different from “adoption” which is itself quite different from “appropriation”.

    3. institutional demands for enterprise services such as e-mail, student information systems, and the branded website become mission-critical

      In context, these other dimensions of “online presence” in Higher Education take a special meaning. Reminds me of WPcampus. One might have thought that it was about using WordPress to enhance learning. While there are some presentations on leveraging WP as a kind of “Learning Management System”, much of it is about Higher Education as a sector for webwork (-development, -design, etc.).

    1. (At the time, Stephen Downes mocked me for thinking that this was an important aspect of LMS design to consider.)

      An interesting case where Stephen’s tone might have drowned a useful discussion. FWIW, flexible roles and permissions are among the key things in my own personal “spec list” for a tool to use with learners, but it’s rarely possible to have that flexibility without also getting a very messy administration. This is actually one of the reasons people like WordPress.

    2. Do you know what the feature set was that had faculty from Albany to Anaheim falling to their knees, tears of joy streaming down their faces, and proclaiming with cracking, emotion-laden voices, "Finally, an LMS company that understands me!"?

      While this whole bit is over-the-top, à la @mfeldstein67, must admit that my initial reaction was close to that. For a very similar reason. Still haven’t had an opportunity to use Canvas with learners, but the overall workflow for this type of feature really does make a big difference. The openness aspect is very close to gravy. After all, there are ways to do a lot of work in the open without relying on any LMS. But the LMS does make a huge difference in terms of such features as quickly grading learners’ work.

    3. Why, they would build an LMS. They did build an LMS. Blackboard started as a system designed by a professor and a TA at Cornell University. Desire2Learn (a.k.a. Brightspace) was designed by a student at the University of Waterloo. Moodle was the project of a graduate student at Curtin University in Australia. Sakai was built by a consortium of universities. WebCT was started at the University of British Columbia. ANGEL at Indiana University.
  17. courses.openulmus.org courses.openulmus.org
    1. An institution has implemented a learning management system (LMS). The LMS contains a learning object repository (LOR) that in some aspects is populated by all users across the world  who use the same LMS.  Each user is able to align his/her learning objects to the academic standards appropriate to that jurisdiction. Using CASE 1.0, the LMS is able to present the same learning objects to users in other jurisdictions while displaying the academic standards alignment for the other jurisdictions (associations).

      Sounds like part of the problem Vitrine technologie-éducation has been tackling with Ceres, a Learning Object Repository with a Semantic core.

    1. Enhanced learning experience Graduate students now receive upgraded iPads, and all students access course materials with Canvas, new learning management software. The School of Aeronautics is now the College of Aeronautics; and the College of Business and Management is hosting a business symposium Nov. 15.

      This from a university which had dropped Blackboard for iTunes U.

  18. Oct 2017
    1. It’s precisely to meet these demands that Cegid recently launched a Learning Management System (LMS) specifically dedicated to Healthcare, a sector that is converting more and more to cloud-based systems.

      Norman's Law of eLearning Tool Convergence

      Any eLearning tool, no matter how openly designed, will eventually become indistinguishable from a Learning Management System once a threshold of supported use-cases has been reached.

  19. Sep 2017
    1. Over the course of many years, every school has refined and perfected the connections LMSs have into a wide variety of other campus systems including authentication systems, identity management systems, student information systems, assessment-related learning tools, library systems, digital textbook systems, and other content repositories. APIs and standards have decreased the complexity of supporting these connections, and over time it has become easier and more common to connect LMSs to – in some cases – several dozen or more other systems. This level of integration gives LMSs much more utility than they have out of the box – and also more “stickiness” that causes them to become harder to move away from. For LMS alternatives, achieving this same level of connectedness, particularly considering how brittle these connections can sometimes become over time, is a very difficult thing to achieve.
  20. Aug 2017
    1. This has much in common with a customer relationship management system and facilitates the workflow around interventions as well as various visualisations.  It’s unclear how the at risk metric is calculated but a more sophisticated predictive analytics engine might help in this regard.

      Have yet to notice much discussion of the relationships between SIS (Student Information Systems), CRM (Customer Relationship Management), ERP (Enterprise Resource Planning), and LMS (Learning Management Systems).

    1. If you frequent the Overwatch sub-Reddit, or any forum that regularly discusses the game, you will have no doubt run into many reports of players getting abused by others during matches, especially in Competitive Play.

      Finally, game companies have figured out a reporting system to decrease cheating. However, some problems still cannot be ignored. Although the reporting system has made cheating detection more accurate and effective, malicious reports can also get honest players banned. So each approach has its advantages and disadvantages, and either downside can be serious.

  21. May 2017
    1. Mackenzie River
      The Mackenzie River is a major river system in northwestern North America. It is exceeded only in basin size by the Mississippi-Missouri system. The entire Mackenzie River system is 2,635 miles long and passes through many lakes before emptying into the Beaufort Sea of the Arctic Ocean. The Mackenzie River alone is 1,025 miles long when measured from Great Slave Lake. It begins at Great Slave Lake where the elevation is 512 feet above sea level. Great Slave Lake can be as deep as 2,000 feet in certain places. It is filled with clear water on the eastern side and shallow, murky water on the western side. The headwaters of the Mackenzie River include numerous large rivers. The drainage basins of the Mackenzie River include the Liard River, Peace River, and Athabasca River. The ice that forms on the Mackenzie River over the winter months begins to break up in early to mid-May in the southern sections. Ice covering some portions of the Mackenzie River can break up as late as the end of May. The Mackenzie River basin is home to a very small and sparse population despite the natural resources available in this area. This area is home to muskrat, marten, beaver, lynx, and fox. Pulpwood and other small conifer trees can be found here. Petroleum and natural gas are usually the underlying reason larger settlements have formed in this area (Robinson 1999).
      

      References

      Robinson, J. Lewis. 1999. "Mackenzie River." July 26. Accessed May 2017. https://www.britannica.com/place/Mackenzie-River#ref466063.

    2. Alyeska oil pipeline
      The oil discovered in the Prudhoe Bay oil field in the North Slope region of Alaska in 1968 was the “largest oil field discovered in North America.” In 1969, a Trans-Alaska pipeline to transport oil from the North Slope was proposed by the Trans-Alaska Pipeline System, a group comprising three major oil corporations. Despite many other ideas and suggestions to transport this oil, the oil industry reached a consensus in favor of the pipeline proposal of the Trans-Alaska Pipeline System (Busenberg, 2013). Construction of the Alyeska oil pipeline, also known as the Alaska pipeline or trans-Alaska pipeline, began in 1975. This pipeline was built by the Alyeska Pipeline Service Company, a group made up of seven different oil companies. In certain regions, the pipeline is buried underground, but where there is permafrost, the pipeline is constructed above the ground. The pipeline crosses over 800 rivers and streams and passes through three mountain ranges. The first oil was delivered from Prudhoe Bay to Valdez on June 20, 1977. This oil had to travel through the 789-mile-long pipeline to reach its destination (Alaska Public Lands Information Centers, n.d.). See below for a link to “Pipeline! The story of the building of the trans-Alaska pipeline” video posted on YouTube by the Alaska National Parks service.
      

      https://www.youtube.com/watch?v=WmO6loYsm4Q

      References

      Alaska Public Lands Information Centers. (n.d.). The Trans-Alaska Pipeline. Retrieved from Alaska Public Lands Information Centers: https://www.alaskacenters.gov/the-alyeska-pipeline.cfm

      Busenberg, G. J. (2013). The Trans-Alaska Pipeline System. In G. J. Busenburg, Oil and Wilderness in Alaska (pp. 11-43). Georgetown University Press.

  22. enst31501sp2017.courses.bucknell.edu enst31501sp2017.courses.bucknell.edu
    1. Trans-Alaska pipeline,

      This map shows the 800-mile Trans-Alaska Pipeline System (TAPS), also called the Alyeska Pipeline, that was built in the 1970s with 11 pumping stations that transports crude oil from Prudhoe Bay to Port Valdez. The pipeline cost around $8 billion to build. The link below provides facts on the pipeline provided by the Alyeska Pipeline Service Company: http://www.alyeska-pipe.com/TAPS/PipelineFacts

      About the Trans-Alaska Pipeline System. Accessed April 30, 2017. http://www.treasure-hunt.alaska.edu/ch5/info_pipeline.html.

  23. Apr 2017
    1. Great Slave Lake

      The Great Slave Lake was reached by Samuel Hearne in 1771 (Ernst). Many others passed through during the Klondike Gold Rush in 1896-1899, but the region surrounding the Great Slave Lake remained greatly unoccupied. In 1930, a radioactive uranium mineral called pitchblende, or uraninite, was discovered on the shore of the Great Slave Lake and incentivized colonizers. In 1934, gold was discovered on Yellowknife Bay, which led to a Yellowknife community settlement. Today, additional communities in this region include Hay River, Fort Resolution, Fort Providence, and Behchoko. The Great Slave Lake is the fifth largest lake in North America and is part of the Mackenzie River System. The Lake gets its name from a tribe of Native Americans called the Slavey First Nations (National Geographic). This tribe fished for sustenance and did not explore farther than their immediate surroundings. Their neighbors, the Cree, thought the tribe was weak and often called them awonak, which means slaves. Explorer Peter Pond named the lake the Slave Lake in 1785 and then the Great Slave Lake in 1790. The Lake is known for its variety of types of fish, including trout, pike, and Arctic grayling. The Great Slave Lake is covered in snow and ice 8 months out of the year. The Great Slave Lake region is also the home to the largest intact forest in the world, the Boreal Forest, which contains evergreens, bogs, shallow lakes, and ponds (Pala). This Great Slave Lake cove is the habitat for caribou, waterfowl, beavers, and many fish species.

      Ernst, Chloe. "The History and Sites of Great Slave Lake: A Visitor's Guide.” PlanetWare.com. Accessed April 06, 2017. http://www.planetware.com/northwest-territories/great-slave-lake-cdn-nt-ntgs.htm.

      National Geographic, February 2002, 1. Global Reference on the Environment, Energy, and Natural Resources (accessed April 5, 2017). http://find.galegroup.com/grnr/infomark.do?&source=gale&idigest=6f8f4a3faafd67e66fa023866730b0a1&prodId=GRNR&userGroupName=bucknell_it&tabID=T003&docId=A83374988&type=retrieve&PDFRange=%5B%5D&contentSet=IAC-Documents&version=1.0.

      Pala, Christopher. "Forests forever. (Forest conservation in Canada)." Earth Island Journal, September 22, 2010.

    1. This leads to the second point I once made: that students no longer need to actually read the material to get impressive grades, which contributes to both student and administrator scorn for the affected disciplines. This point caused some push-back, since professors and fellow students noted that if I wasn’t reading the material, it was my own fault for not getting the full benefit of the course. I agreed, but countered that if the difference between my reading very little of the material instead of it all was a 10 to 15 percent bump in my final grade, what did that imply about the value of said material to the course? Srigley argues that less than 20 percent of his students even access the weekly readings for his courses, largely because they know they don’t have to ­– “they can get an 80 without ever opening a book.”

      Again, this implies that the professor should care. One of the principles behind my grading system is that I don't. People are welcome to do whatever they want and they get the same grade, unless they do exceptional work.

      This also implies that grades are somehow the currency of learning and that if you are getting good grades without learning, then you are somehow "winning."

      This is a misunderstanding of grades. They are really the bits of an expert system that converts qualitative evaluation of individual performances into a final score that helps people categorize graduates. So they are secondary to the actual learning and performance.

  24. Mar 2017
    1. Now they recognize they are not essential

      In the late 1800s and early 1900s northern explorers depended on the indigenous people. The natives knew the land, the climate, and the wildlife. Because of their knowledge, the indigenous northerners served as local guides in this harsh and uninviting place. The native people also served as interpreters for researchers and were a lifeline for those that had little-to-no knowledge of how to survive in that kind of environment. However, they were not always seen as important figures. As southern technologies became more and more prominent in the far north, native peoples were pushed aside. “The airplane and helicopter strained relations among researchers and northerners. These technologies relieved field-workers from establishing extensive and regular relationships with locals as guides, interpreters, and informants. Permafrost scientists in particular could produce knowledge about the Arctic environment without Inuit expertise and apply that research in governmental construction projects without consulting locals” (108). The Inuit began to view the government scientists as pests, “they arrived in summer ‘in lusty swarm’ and were just as annoying” (108). Many researchers came during the warm months and gathered information that allowed them to cut ties to the indigenous people. The use of modern technology in the north forces the Inuit to work menial jobs and completely change their way of life in order to survive in the modernizing landscape. While the industrial system has brought many valuable things to them, the Inuit are no longer needed or heard. If it is in the best interest of the oil industry, a pipeline would be built right over their homeland, even if they are still on it.

      Annotation drawn from Stuhl, Andrew. Unfreezing the Arctic: Science, Colonialism, and the Transformation of Inuit Lands. Chicago: The University of Chicago Press, 2016.

  25. Jan 2017
  26. Nov 2016
    1. With technologies and the education system changing in recent times, tutors should consider educating students with the latest technologies available. Even as technology evolves, with apps, projector screens, digital media, and, last but not least, online learning platforms, some fundamentals of teaching remain as they are; if tutors can implement these basic ideas in their tutoring style, students can cope with the evolution of digital education.

    2. According to the report of the UK Government, Department of Education (published on January 10, 2014), there were altogether 24372 public schools, comprising 16818 state-funded primary schools, 3268 state-funded and 2420 other secondary schools, and 4476 independent schools. There are also 1039 special (state-funded) and non-maintained schools. The same report says that there were 438000 teachers in state-funded schools in England on a full-time equivalent basis in 2012. The numbers at both ends have naturally increased by the time of writing this article. But the big question looming over the education system of England is whether, even after the best effort by the UK Government and the State schools, the public school system is successful in educating their children properly or not.

  27. Oct 2016
  28. Sep 2016
    1. The Swedish school system has wholeheartedly, and probably too quickly and eagerly, embraced this new agenda. Last fall, 200 teachers attended a major government-sponsored conference discussing how to avoid "traditional gender patterns" in schools. At Egalia, one model Stockholm preschool, everything from the decoration to the books and toys are carefully selected to promote a gender-equal perspective and to avoid traditional presentations of gender and parenting roles

      The Swedish school system has enforced the use of the gender-neutral pronoun hen.

  29. Jul 2016
  30. Jun 2016
  31. Jan 2016
    1. If you ain't talking about the teacher in the classroom, I ain't listening. Teacher quality matters. Too many in the profession are quick to awfulize students in poverty to rationalize poor results. Better teaching inspires students and gets better results. Better teaching engages students and keeps them in classrooms, rather than the streets. Better teaching is the one thing we never really talk about. Better teaching is the only mechanism we have left.

      What are some ways to significantly improve teaching in these communities? The teaching doesn't happen in a vacuum and we need a plan to counteract the systemic forces at work that maintain the status quo.

  32. Nov 2015
    1. systems analysis in this regard demands an ethnographic retooling,one in which ethnography might need to be conducted in government centers far from where theactual roads are constructed and might take into account politicians, technocrats, economists, en-gineers, and road builders, as well as road users themselves

      Understanding this paradigm, what does it mean to hold hearings and lectures. What does it say about the relocation of authority away from target location. Does this provide any insight on the dynamics of social and political justice/injustice within an economic nation?

  33. Oct 2015
  34. Jun 2015
    1. We do not want to leave the school system behind. We need to keep driving toward where we want everyone to be versus waiting until everyone is ready. The end goal will involve the Internet, and there needs to be a framework for it.

      But we do want to leave it behind. The words we use, "school system," tell us exactly what is at the center: schools. What we need are learning systems, where learning is at the center, which tacitly implies that the learner is at the center.

      (http://gph.is/1e82Pef)

      learner centric

  35. Mar 2015
    1. lowRISC is producing fully open hardware systems. From the processor core to the development board, our goal is to create a completely open computing eco-system. Our open-source SoC (System-on-a-Chip) designs will be based on the 64-bit RISC-V instruction set architecture. Volume silicon manufacture is planned, as is a low-cost development board. There are more details on our plans in these slides from a recent talk. lowRISC is a not-for-profit organisation working closely with the University of Cambridge and the open-source community.
  36. Sep 2013