33 Matching Annotations
  1. Last 7 days
    1. Related work

      Organize in subheadings. Rewrite so it is less of a "shopping list" of articles and more of a story. Connect articles to each other and to your project.

    2. The objective of the tool is to be able to predict the outcomes of NBA games based on the daily schedule of the NBA. The tool will read in the day’s games and using the already trained model predict the winner of the game. The backend of this tool will use historical data from the past 5 NBA seasons. This data will be cleaned and sorted and used when creating the predictive model using machine learning libraries. The model will then be used to predict the games. The current day’s games will be pulled also from the NBA-API. These teams will be used in the model to predict the winner of the match up. The front end of the tool will be a dynamic dashboard that will be updated daily so users can look at the predicted outcomes for that days NBA games. This provides a friendly user experience as it is simple and clean to look at and understand. The dashboard will show the home and away teams name and display their logo. The time that the game is played at will also be shown in the dashboard for those who might be sports betting now when they need to place their bets by. Additionally the predicted winner of the game will be shown in the same row as the games for the day. Finally there will be a confidence level for how accurate the model believes this prediction is. Additional information like confidence will give the user more insight into the predictive capabilities of the tool. Overall this tool will provide a resource for fans, analysts, and sports bettors to get a better insight with NBA games. Fans will gain more insight into their favorite teams. Analysts can use this information and data to better assess and understand why teams are winning and losing. Sports better can use the predictions to inform and make better bets based on the winner and losers of the game as well as the confidence level of the prediction. With the growing trend is data and analytics this tool will make it easier for all to understand how they affect the outcomes of NBA games

      Write it as if the project has been completed.
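
      For the completed-project rewrite, a short sketch may help anchor the description of the daily step. This is only an illustration: it assumes a trained scikit-learn classifier and hypothetical fetch_todays_games() / team_features() helpers wrapping the NBA-API pull and the historical feature lookup; none of these names come from the draft.

        # Illustrative daily prediction step (assumed helpers, not the project's code)
        import pandas as pd

        def predict_today(model, fetch_todays_games, team_features):
            rows = []
            for game in fetch_todays_games():  # e.g. {"home": "BOS", "away": "MIA", "tipoff": "7:30 PM"}
                x = team_features(game["home"], game["away"])  # features built from the past-5-season data
                proba = model.predict_proba([x])[0]            # assuming class 1 means the home team wins
                winner = game["home"] if proba[1] >= 0.5 else game["away"]
                rows.append({"home": game["home"], "away": game["away"],
                             "tipoff": game["tipoff"], "predicted_winner": winner,
                             "confidence": round(float(max(proba)), 2)})
            return pd.DataFrame(rows)  # one dashboard row per game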

    3. I am developing a tool that will predict the outcome of NBA games. This tool will use NBA box score data such as points, rebounds, and three point percentage as well as advanced analytics like player efficiency rating and plus minus score. These analytics will be from the past five years of the NBA. The data collected from previous teams’ games will be used in machine learning algorithms which allows for more accurate predictions rather than the traditional win loss analysis. This tool will be beneficial for all sports fans from the casual viewer to NBA enthusiasts. This tool offers insight into game dynamics and sports betting opportunities. All of the predictions and games will be provided on a user friendly dashboard that dynamically updates daily for users to see. With the growing interest with basketball analytics around the world, this tool aims to give insight to how fans interact with the game and make informed predictions.

      Do not use future tense.

    1. Related work

      Need to fix citations so they appear in proper format.

    2. Ethical Implcations

      Break it up into paragraphs and cite.

    3. Goals of the Project

      Break it up into paragraphs to make it easier to read. Cite when stating factual information or defining terms.

    4. The question that is trying to be answered while completing this project is, is there media bias in the National Basketball Association.

      Rephrase

    5. The global sports market was valued at over $500 billion in recent years, with major leagues like the NBA contributing significantly to this figure. The NBA, specifically, generates over $10 billion annually through a combination of ticket sales, broadcasting rights, sponsorships, and merchandise.

      Citation?

    1. Ethical Implications

      This needs to include ethical implications of your work.

    2. Recommender systems, once a phenomenon, have transcended the way consumers and producers collect and use information in everyday business. Modern computational decision-making, enabling users to efficiently navigate through the available options in various domains, such as e-commerce, media consumption, and online services. The use of multiplex algorithms to filter and disseminate content, based on users’ explicit preferences or implicit behavior patterns, are often derived from their previous interaction history. By analyzing large-scale datasets, recommender systems can identify patterns and make personalized predictions, offering users relevant suggestions that might otherwise be disregarded. For instance, platforms like Amazon and YouTube deploy collaborative filtering, content-based filtering, and hybrid models to recommend products or media that align with the user’s historical interactions, effectively providing tailored content at scale. Deploying a filtration system on their website allow user to ixnay their information cost by feeding their algorithm with recommended websites or products based on the users behavior and consumer patterns. For example, if someone who has a Hulu or Netflix subscription and is projecting one of their favorite shows on these streaming platforms. Often, Hulu and Netflix, would record and keep track of the shows you had watched and after you have finished watching a series they will recommend another series that has a similar narrative and genre that piques your interest. These recommender systems are deployed in hopes that their patrons stay entertained and loyal to their platform by recommending my shows and series that may cater towards your preference. This approach not only enhances user experience but also reduces decision fatigue by narrowing down superfluous information spaces, demonstrating the potential of algorithm-driven systems in optimizing decision support. However, despite the widespread adoption, recommender systems may appear as a superior piece of article intelligence, like anything, there are always flaws and blunders that are presented that can pose serious problems to companies and industries [17]. These systems heavily rely on underlying models—such as collaborative filtering, matrix factorization, or graph-based algorithms—that analyze user-item interactions, but they may still struggle with issues like data sparsity, cold-start problems, and bias.[18] As research progresses, there is growing interest in integrating more sophisticated graph-based techniques and machine learning models to enhance the adaptability and accuracy of these systems. Graph theory, for example, can be leveraged to capture intricate relationships between users, items, and contextual data, providing more dynamic and context-aware recommendations. While these advances hold promise, it is essential to recognize that recommender systems, though powerful, remain tools to assist decision-making rather than replace human judgment, particularly in more subjective or personal contexts where contextual nuance and emotional intelligence play a key role.

      This should probably be under the previous heading "State of the Art"?
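
      Wherever this paragraph ends up, a toy example could make the collaborative-filtering idea concrete. The sketch below is only an illustration (item-item cosine similarity on a small ratings matrix), not code or data from the project.

        import numpy as np

        def recommend(ratings, user, k=3):
            """ratings: users x items matrix, 0 = unrated; returns top-k unseen item indices."""
            norms = np.linalg.norm(ratings, axis=0) + 1e-9
            sim = (ratings.T @ ratings) / np.outer(norms, norms)  # item-item cosine similarity
            scores = sim @ ratings[user]                          # weight items by the user's own ratings
            scores[ratings[user] > 0] = -np.inf                   # never re-recommend items already seen
            return np.argsort(scores)[::-1][:k]

        ratings = np.array([[5, 4, 0, 0],
                            [4, 0, 4, 1],
                            [0, 5, 5, 0]], dtype=float)
        print(recommend(ratings, user=0, k=2))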

    3. Current State of the Art

      This section needs to provide a summary of the State of the Art

    4. this comparative framework

      Which comparative framework?

    5. The project implements the following graph-based similarity measures: Jaccard Similarity: Measures the overlap between neighbors of two nodes by dividing the intersection size by the union size. It’s effective for sparse networks with binary relationships. Adamic-Adar: A more nuanced similarity that gives higher weights to common neighbors that are less connected, assuming that rare connections carry more significant information. Resource Allocation: Models resource distribution on a network by assessing how “resources” (recommendations) flow from one node to another based on their shared neighbors. Preferential Attachment: This approach predicts new links by assuming that nodes with higher degrees are more likely to form new connections, reflecting a “rich get richer” phenomenon. Common Neighbors: Counts the number of shared neighbors between two nodes. The higher the number, the more likely the nodes are connected. MaxOverlap: A modified Jaccard similarity focusing on maximizing shared nodes between neighbors. Association Strength: Calculates the expected overlap between two nodes, considering the degree of each node and the size of the graph. Cosine Similarity: Computes the cosine of the angle between the degree vectors of two nodes, measuring similarity in a continuous, normalized way. To test the methods, I use similarity coefficients, statistics graphs such as cumulative gain chart, precision recall curve, time chart, ROC curve etc Regardless of the algorithm chosen, evaluation can be conducted using: - Precision-Recall Curves - Mean Reciprocal Rank (MRR) - Normalized Discounted Cumulative Gain (nDCG) - Cumulative Gain Charts - Time-based comparisons for evolving graphs

      This does not make sense. Rewrite using cohesive sentences that connect to each other and portray the project goals.
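
      When rewriting, a compact example can tie the listed measures to how they are actually computed and compared. The sketch below uses networkx's built-in link-prediction scores on a stand-in graph; it is an illustration under assumed inputs, not the project's implementation, and MaxOverlap and Association Strength would need custom code not shown here.

        import networkx as nx

        G = nx.karate_club_graph()   # stand-in for the project's graph
        pairs = [(0, 33), (5, 16)]   # candidate pairs to score

        measures = [("Jaccard", nx.jaccard_coefficient),
                    ("Adamic-Adar", nx.adamic_adar_index),
                    ("Resource Allocation", nx.resource_allocation_index),
                    ("Preferential Attachment", nx.preferential_attachment)]
        for name, fn in measures:
            for u, v, score in fn(G, pairs):
                print(f"{name}: ({u}, {v}) -> {score:.3f}")

        # Common Neighbors is simply the count of shared neighbors
        for u, v in pairs:
            print(f"Common Neighbors: ({u}, {v}) ->", len(list(nx.common_neighbors(G, u, v))))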

    1. Related work

      Nicely organized!

    2. Research Questions

      This should be a subsection within the Goals section.

    3. Goals of the Project

      Needs to be streamlined a bit to make it clear what exactly the goals are. You are stating several purposes/aims/goals, and it should be clear how they all connect and align.

    4. Humanism during the Renaissance period stood for the idea that humanity was a divine being capable of achieving remarkable things. However, humanism is close minded in the fact that the ideal form of humanity is the white male figure. Moreover, anything other than the ideal form is automatically considered to be less than human. This is where posthumanism responded and aims to reevaluate humanity through alternative lenses and frameworks of experience. Technology has often been a way to explore these ideas of posthumanism in a way that is open minded

      A citation?

    5. [58]

      Include the page number for the quoted reference.

    6. Many of humanity’s fears of technology come from a fear of replacement. The fear that generative artificial intelligence will replace artists, writers, videographers, and more. Even more people are afraid of the physical replacement with robots. Media outlets at one point or another have reported robots being used to replace workers in manufacturing, the food industry, and even for social work.

      Need sources for these statements, especially the media outlets' reports.

    7. Many are concerned with how generative artificial intelligence is gaining a presence not only online but is making its way into the formal art world.

      Need a citation for this statement.

    1. Competitors

      Need to make it clear how your tool is similar or different from each of these.

    2. This was the major discussion among most articles that have to do with learning Chinese as a second language. These articles referenced different technologies Chinese learners learners use, recommend, and were working on developing.

      This needs to be moved closer to these articles. By this point, a reader does not know which articles you are referring to here.

    3. Online Chinese Learning Applications

      Need to make it clear how your tool is similar or different from each of these.

    4. This case study

      Why is it a case study? This is the first time you are mentioning a case study. Need to make it clearer.

    5. Chines

      Run the text through a spellchecker

    6. Current State of the Art

      Needs to be expanded. If your project is on learning in general, not just Chinese, then the state of the art needs to cover this.

    7. A user will use this tool by logging on to the website and creating an account if they are first time users. If they have used the tool before they will just log in in order to see their progress. A user will see all the sets given to them and all the sets they have created. In order to start a study session a user will select the set that they wish to study. Once they select the set they will then see all the cards in that set loaded onto the screen with their mastery level of each card visible based on the color boarder surrounding the card. At the bottom of the screen will then be an option for the user to select “start” once clicking this button it will bring the user to a page asking what they would like to study in their session. They will then select which category they want on the front of their card and which they would like on the front of the card. They can select multiple options for either the front or the back. They will next select the “start” button and the study session will start. Users will also be able to create their own sets and add the different layers that they would like to focus on. If a card does’t have a definition or term in one layer of the flashcard it will just display “None” for that section. Though this tool will have premade sets for the Chinese languages HSK sets users can apply it to learn many different subjects.

      Needs to be rewritten in the present tense.
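
      A small data-model sketch could also help the rewrite show how layers, mastery, and the "None" fallback fit together. The layer names and mastery scale below are assumptions for illustration only, not taken from the draft.

        from dataclasses import dataclass

        @dataclass
        class Card:
            layers: dict       # e.g. {"hanzi": "你好", "pinyin": "nǐ hǎo", "definition": "hello"}
            mastery: int = 0   # drives the colored border shown around the card in the UI

            def side(self, chosen_layers):
                """Render one side of the card from the layers the user selected."""
                return " / ".join(self.layers.get(layer) or "None" for layer in chosen_layers)

        card = Card(layers={"hanzi": "你好", "pinyin": "nǐ hǎo"})
        print(card.side(["hanzi"]))                  # front: 你好
        print(card.side(["pinyin", "definition"]))   # back: nǐ hǎo / None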

    8. This tool is meant to help extend learning longevity and increase the depth of knowledged learned as well.

      What is "this tool"? This is the first time you are mentioning your tool. Something like "We introduce Multilayered Flashcards, a tool that ..." instead might better indicate that you are introducing your tool.

    9. [38], [25], [7]

      Make sure citations are listed in order

  2. Dec 2024
    1. Introduction

      Need to clearly state what your project is doing. Also make sure all text flows logically with transitions between sections.

    2. Motivation

      The motivation for the project is not clearly stated. Yes, you are motivating graph-theoretic methodology in recommendation systems, but what is the motivation for comparing algorithms?

  3. Nov 2024