114 Matching Annotations
  1. May 2023
  2. Apr 2023
  3. Jun 2019
  4. inst-fs-dub-prod.inscloudgate.net
    1. For a machine learning model to be trusted and used, one would need to be confident in its ability to handle all possible scenarios. To that end, designing unit test cases for more complex and global problems can be costly, bordering on impossible.

      Idea: We need a basic guideline that researchers and developers can adhere to when defining problems and outlining solutions, so that model interpretability can be defined accurately in terms of the problem statement.

      Solution: This paper outlines the basics of machine learning interpretability, what it means for different users, and how to classify these needs into understandable categories that can be evaluated. The paper highlights that the need for interpretability arises from incompleteness, either of the problem statement or of the problem domain knowledge. It provides three main categories for evaluating a model and its interpretations:

      • Application Grounded Evaluation: These evaluations are the most costly, involving real humans evaluating the real tasks a model would take on. The human evaluators need domain knowledge of the task the model handles.
      • Human Grounded Evaluation: These evaluations are simpler than application-grounded ones: the complex task is simplified, and humans evaluate the simplified version. Domain knowledge is not necessary. (However, such evaluations can be skewed toward human trust rather than faithfulness to the model.)
      • Functionally Grounded Evaluation: No humans are involved in this form of evaluation; previously evaluated models are tweaked to optimize certain functionality, and explanation quality is measured against a formal definition of interpretability.

      The paper also outlines certain issues with the above three evaluation processes; certain questions need answering before we can pick an evaluation method and metric. To highlight the factors of interpretability, we are given a data-driven approach: analyze each task and the various methods used to fulfill it, and see which of these methods and tasks matter most to the model.

      • We are introduced to the term latent dimensions of interpretability, i.e. dimensions that are inferred rather than observed. These are divided into task-related and method-related latent dimensions: a long list of factors specific to either the task or the method.

      Thus this paper provides a basic taxonomy for evaluating models, and shows how these evaluations differ from problem to problem. The ideal scenario outlined is that researchers provide the information needed to evaluate their proposal correctly (correctly in terms of the domain and the problem scope).

  5. May 2019
    1. With the growing use of ML and AI for complex problems, there is a rising need to understand and explain these models appropriately. However, explanations vary in how faithfully they adhere to the model and how well they convey its decisions in a human-understandable way.

      Idea: There is no standard way of categorizing interpretation methods and explanations, and no good working practices in the field of interpretability.

      Solution: This paper explores and categorizes different approaches to interpreting machine learning models. The three main categories this paper proposes are:

      • Processing: interpretation approach that uses surrogate models to explain complex models
      • Representation: interpretation approach that analyzes intermediate data representations in models, exploiting transferability of data or layers
      • Explanation-producing: interpretation approach in which the trained model, as part of its processing, also generates an explanation for its output.

      In this paper we see different approaches to interpretation in detail, analyzing what the major component of each interpretation is and which proposed category the explanation method falls under. The paper also surveys other research on categorizing or exploring explanations, and the meaning of explainability in other domains.

      The paper also touches on the tradeoff between "completeness" (defined as how close the explanation is to the underlying model) and "interpretation" (defined as how easily humans can understand and trust the model). The author argues that these tradeoffs exist not only in the final explanation: within each category the definition of completeness differs, and the metric used to measure it changes. This makes sense when you consider that different users have different viewpoints on how a model should behave and what the desired explanation for a result is.

    1. proposed an approach which explains individual predictions of any classifier by generating locally interpretable models. They then approximate the global behavior of the classifier by choosing certain representative instances and their corresponding locally interpretable models

      This approach seems similar to small interpretable ensembles, where we approximate the global interpretation from smaller local interpretations of a model.
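      The local-surrogate idea quoted above can be sketched minimally: perturb an instance, query the black box on the perturbations, and fit a distance-weighted linear model that mimics it nearby. This is an illustrative sketch under my own assumptions (Gaussian perturbations, an RBF proximity kernel, weighted least squares), not the paper's actual implementation.

```python
import numpy as np

def local_surrogate(black_box, x, n_samples=500, scale=0.5, seed=0):
    """Fit a locally weighted linear model that mimics black_box near x.

    Illustrative assumptions: Gaussian perturbations around x, samples
    weighted by an RBF kernel on distance, solved by weighted least squares.
    """
    rng = np.random.default_rng(seed)
    X = x + rng.normal(0.0, scale, size=(n_samples, x.size))  # perturb x
    y = np.array([black_box(z) for z in X])                   # query black box
    d = np.linalg.norm(X - x, axis=1)
    w = np.exp(-(d ** 2) / (2 * scale ** 2))                  # proximity weights
    A = np.hstack([X, np.ones((n_samples, 1))])               # add intercept column
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1], coef[-1]  # local feature weights, intercept

# Example: around x = (1, 0), f(z) = z0^2 + 3*z1 looks locally like 2*z0 + 3*z1.
f = lambda z: z[0] ** 2 + 3 * z[1]
weights, intercept = local_surrogate(f, np.array([1.0, 0.0]))
```

      Approximating global behavior then amounts to choosing representative instances and inspecting their local weight vectors, as the quoted passage describes.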

    2. Model interpretations must be true to the model but must also promote human understanding of the workings of the model. To this end we need an interpretation method that balances the two.

      Idea: Although local interpretation methods exist that balance fidelity and human cognition for a specific underlying model, there are no global, model-agnostic interpretation methods that achieve the same.

      Solution:

      • Break each aspect of the underlying model into distinct, non-overlapping compact decision sets, generating explanations that are faithful to the model and cover its entire feature space.
      • How the solution addresses each requirement:
        • Fidelity (staying true to the model): the labels in the approximation match those of the underlying model.
        • Unambiguity (a single clear decision): compact, non-overlapping decision sets in every feature subspace ensure an unambiguous label.
        • Interpretability (understandable by humans): an intuitive rule-based representation with a limited number of rules and predicates.
        • Interactivity (letting users focus on specific feature subspaces): each subspace is a distinct compact set, allowing users to focus on their area of interest.
      • Details on a “decision set”:
        • Each decision set is a two-level (nested if-then) structure: the outer if-then clause specifies the subspace, and the inner if-then clause specifies the logic for assigning a label.
        • A default rule assigns labels to instances that do not satisfy any of the two-level decisions.
        • The advantage of such a model is that the logic behind an assigned label never has to be traced far, making it less complex than a decision tree, which follows a similar if-then structure.
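      The two-level structure described above can be sketched as plain data: each rule pairs an outer (subspace) condition with an inner condition and a label, and a default label catches everything else. The feature names and rules here are illustrative assumptions, not taken from the paper.

```python
# A rule is (subspace test, inner test, label); tests are predicates on x.
# All feature names and thresholds below are made up for illustration.
RULES = [
    (lambda x: x["age"] < 40,  lambda x: x["bmi"] >= 30, "high-risk"),
    (lambda x: x["age"] < 40,  lambda x: x["bmi"] < 30,  "low-risk"),
    (lambda x: x["age"] >= 40, lambda x: x["smoker"],    "high-risk"),
]
DEFAULT_LABEL = "low-risk"  # assigned when no two-level rule fires

def predict(x, rules=RULES, default=DEFAULT_LABEL):
    """Return the label of the first rule whose outer and inner tests both hold."""
    for subspace, inner, label in rules:
        if subspace(x) and inner(x):
            return label
    return default

print(predict({"age": 35, "bmi": 32, "smoker": False}))  # → high-risk
```

      Tracing a prediction takes at most one outer and one inner test, which is the shallow-lookup advantage noted above.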

      Mapping fidelity vs interpretability

      • To see how their model handles fidelity vs. interpretability, they plotted the rate of agreement (how often the approximation's label for an instance matches the black box's label) against pre-defined interpretability-complexity measures such as:
        • Number of predicates (sum of width of all decision sets)
        • Number of rules (a set of outer decision, inner decision, and classifier label)
        • Number of defined neighborhoods (outer if-then decision)
      • Their model reached higher agreement rates than other models at lower interpretability-complexity values.
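      The agreement rate and the three complexity terms listed above are straightforward to compute; a minimal sketch, with rule representation and names of my own choosing:

```python
def agreement_rate(approx_labels, blackbox_labels):
    """Fraction of instances where the surrogate's label matches the black box's."""
    matches = sum(a == b for a, b in zip(approx_labels, blackbox_labels))
    return matches / len(blackbox_labels)

def complexity(rules):
    """Interpretability-complexity terms for a list of two-level rules.

    Assumed representation: each rule is (outer_predicates, inner_predicates,
    label), where the predicate fields are lists of atomic conditions.
    """
    n_rules = len(rules)                                                # rules
    n_predicates = sum(len(o) + len(i) for o, i, _ in rules)            # total width
    n_neighborhoods = len({tuple(o) for o, _, _ in rules})              # distinct outer clauses
    return {"rules": n_rules, "predicates": n_predicates,
            "neighborhoods": n_neighborhoods}

rules = [(["age<40"], ["bmi>=30"], "hi"), (["age<40"], ["bmi<30"], "lo")]
print(complexity(rules))  # → {'rules': 2, 'predicates': 4, 'neighborhoods': 1}
```

      Plotting agreement rate against each complexity term reproduces the kind of fidelity-vs-interpretability curve the annotation describes.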
    1. Any cognitive features that are important to model user trust or any measures of functional interpretability should be explicitly included as a constraint. A feature that is missing from our explanation strategy's loss function will contribute to the implicit cognitive bias. Relevant cognitive features may differ across applications and evaluation metrics.
    2. Model interpretability aims at explaining the inner workings of a model, promoting transparency in any decisions it makes. However, for the sake of human acceptance or understanding, explanations are often geared more toward human trust than faithfulness to the model.

      Idea: There is a distinct difference, and a tradeoff, between persuasive and descriptive interpretations of a model: one promotes human trust while the other stays truthful to the model. Promoting the former can lead to a loss of transparency.

      Questions to be answered:

      • How do we balance between a persuasive strategy and a descriptive strategy?
      • How do we combat human cognitive bias?

      Solutions:

      • Separating the descriptive and persuasive steps:
        • We first generate a descriptive explanation, without trying to simplify it
        • In our final steps we add persuasiveness to this explanation to make it more understandable
      • Explicit inclusion of cognitive features:
        • We include the attributes that affect our functional measures of interpretability in the objective function.
        • This approach has some drawbacks, however:
          • We would need to map the knowledge of the user, which is an expensive process.
          • Any features we fail to add to the objective function add to the human cognitive bias.
          • Optimizing a multi-objective loss function increases complexity.

      Important terms:

      • Explanation Strategy: An explanation strategy is defined as an explanation vehicle coupled with the objective function, constraints, and hyperparameters required to generate a model explanation.
      • Explanation model: An explanation model is defined as the implementation of an explanation strategy, which is fit to a model that is to be interpreted.
      • Human Cognitive Bias: If an explanation model is highly persuasive, i.e. tuned toward human trust rather than staying true to the model, its overall evaluation will be highly biased compared to that of a descriptive model. This bias can arise from commonalities among human users across a domain, from expertise in the application, or from expectations about what a model explanation looks like. Such bias is known as implicit human cognitive bias.
      • Persuasive Explanation Strategy: A persuasive explanation strategy aims at convincing the user, humanizing the model so that the user feels more comfortable with its decisions. Fidelity to the model can be very low in such a strategy, which raises the ethical dilemma of where to draw the line between being persuasive and being descriptive. Persuasive strategies do promote human understanding and cognition, important aspects of interpretability, but they fail to address certain other aspects, such as fidelity to the model.
      • Descriptive Explanation Strategy: A descriptive explanation strategy stays true to the underlying model and generates explanations with maximum fidelity to it. Ideally such a strategy would describe exactly what the inner workings of the underlying model are, which is the main purpose of model interpretation in terms of better understanding how the model actually works.
  6. Oct 2017
    1. More information about the University Heights P-Patch Community Garden may be found at its specific Seattle Department of Neighborhoods website. More information about the University District P-Patch Community Garden may be found at its specific Seattle Department of Neighborhoods website.

      These two links are different, although the sites that they link to look similar.

    2. Beacon Food Forest

      The Beacon Food Forest is especially cool, because (1) it's massive, and (2) it has such a massive variety of foods from all over the world. It's an ethnobotanist's dream garden!

    3. More information may be found at the King County Department of Parks and Recreation's (trail-specific) website.

      Be aware -- the P&R trail-specific website is different from the general, park-related website.

    4. This trail is incredible for birdwatching, as well as a great place to bring the family dog to enjoy some funky "sniffs."

      Honestly, I would LOVE to trailer a horse out here and tramp around!

  7. Aug 2017
    1. Title: Meet A Local Filmmaker: Cameron Macgowan

      This is an illustrative portrait of one Calgary-based filmmaker. It is an intimate article that not only highlights the individual, but also reveals the film-making scene in Calgary. Compare this focused and light piece to the expansive and dense website résumé cbattle.com.

    1. Ottawa Shooting October 2014

      Although it only includes news articles from immediately after the event, the collection of articles gives a look into reactions to the event and what was known at first. Also notable in this collection is the Twitter public list compiled by @globeandmail, which showcases tweets surrounding the shooting and the memorial process afterwards.

    1. Title: Aboriginal Youth Identity Series: First Nations Contributions - Edukit

      Another example of this collection's focus on education. Contains both content for students and lesson plans for teachers, giving insight into pedagogy through its extensive lesson plans. Part of a larger series, the Aboriginal Youth Identity Series.

    2. Title: Nature's Law

      Representative of this collection's information on First Nations people. An insightful archive focusing on conceptions of law within Aboriginal cultures. It is thorough, and made possible by the efforts of experts. Dense with text, which provides lots of information but comes at the cost of being less accessible.

    1. How do I manually add a new user to an account? How do I merge users in an account? How do I delete a user from an account? How do I edit a user's name, time zone, or email in an account? How do I manage a user's login information in an account?

      Remove

    1. Learning Tools Interoperability (LTI) is the process by which external content and resources are linked to Canvas. The tools currently available for UW-Madison Canvas users can be found in the Canvas - Enabled Application Configurations KB document. UW-Madison employs a process by which individuals can request that additional external tools be vetted and integrated with Canvas; requests are made via this form, and the entire review and integration process is generally expected to take 3-4 months. This KB document outlines further details related to external integrations and LTI.

      This needs to be broken out

    1. Long-suffering advocate for compensation

      Not only news media but also personal stories are being published and saved. Individuals tend to be left out of many narratives around Indigenous rights in Canada and Residential Schools.

    2. Manitoba's ongoing involvement with the Truth and Reconciliation Commission.

      This will be an interesting archive to watch. It's an event, but an ongoing event. Historians can track the effects of the Truth and Reconciliation Commission and use this archive as a case study for one part of Canada.

    1. Response to Maclean's article

      Archiving material related to a specific event. What makes an event important enough to archive? Since these records were created while the event was occurring, how is it determined that an event should be actively archived?

  8. Jul 2017
    1. Volunteerism in Alberta: 100 years of Celebrating Community

      Another example of this collection's focus on Alberta's heritage. This archive's topic of volunteers provides an insightful look at social and local history.

    2. Alberta, Naturally!

      Not only is it useful for showing Alberta's natural history, but it is also a great example of a website being used for educational purposes. Although the site includes multimedia files, many can't be accessed or played.

    3. Alberta Inventors and Inventions: A Century of Patents

      A unique collection that showcases Alberta's history through an exhaustive list of patents. Definitely useful for many areas of history: local, social, cultural, technological.

    1. 54 Total Results

      Many of the pages include social media sites used to spread news, opinions, and plans. This makes the event an example of how social media - such as YouTube, Twitter, and Facebook - can spread the word about important issues from sources other than news reporters, giving a voice directly to the actors involved in the dispute rather than relying on information mediated through a secondary source.

    2. B.C. Teachers' Labour Dispute (2014)

      A much smaller and more contained collection than some others, because it refers to a specific event rather than documenting/archiving changes in a particular organization or institution over time.

    1. Cree Language and Culture Twelve-Year Program Kindergarten to Grade 12

      Also includes programs that start at various levels of education, from a full 12-year program beginning at the very earliest levels to a 3-year program for senior levels.

    2. Chinese Language Arts 10-20-30

      Bilingual programs exist not only in both official Canadian languages (as evidenced by the number of French curriculum guides), but in other languages that make up a large portion of Canada's multicultural population.

    3. Aboriginal Studies 10-20-30

      Primarily, the course guide can provide historians with insights not only into the importance of Aboriginal Studies in Alberta, but also into the history that led to the creation of the education program as well as the major themes considered most important to be learned and passed on.

      While there are a great number of captures, the page hasn't been updated since its first capture in 2010. Given the contemporary climate around Indigenous history and Indigenous rights in Canada, this suggests that while the topic is important, its main themes and concerns may not be providing students with the proper depth of contemporary understanding of the issue.

    4. Alberta Education

      The preservation not only of an education curriculum, but of a record of how that curriculum changed over a five-year period (the current archive dates span 2010-2015), could allow historians to examine how priorities have changed, what is considered important to teach the next generation, and the expectations held by teachers and government organizations for the children they are educating.

    1. Collection of defunct pages

      While all of the pages were captured during the same three-day period in 2015, there is still this record of webpages that were created, though they are no longer in use. This creates an archival record of what was considered important not only to publish in the first place, but also to preserve.