5 Matching Annotations
  1. Last 7 days
    1. some practices that can make those discussions easier, by starting with constraints that even skeptical developers can see the value in:
       Build tools around verbs, not nouns. Create checkEligibility() or getRecentTickets() instead of getCustomer(). Verbs force you to think about specific actions and naturally limit scope.
       Talk about minimizing data needs. Before anyone creates an MCP tool, have a discussion about what the smallest piece of data they need to provide for the AI to do its job is and what experiments they can run to figure out what the AI truly needs.
       Break reads apart from reasoning. Separate data fetching from decision-making when you design your MCP tools. A simple findCustomerId() tool that returns just an ID uses minimal tokens—and might not even need to be an MCP tool at all, if a simple API call will do. Then getCustomerDetailsForRefund(id) pulls only the specific fields needed for that decision. This pattern keeps context focused and makes it obvious when someone’s trying to fetch everything.
       Dashboard the waste. The best argument against data hoarding is showing the waste. Track the ratio of tokens fetched versus tokens used and display them in an “information radiator” style dashboard that everyone can see. When a tool pulls 5,000 tokens but the AI only references 200 in its answer, everyone can see the problem. Once developers see they’re paying for tokens they never use, they get very interested in fixing it.

      some useful tips to keep MCPs straightforward and prevent data blobs that are too big (see the sketch below):
      - use verbs, not nouns, for MCP tool names (focuses on the action, not the object you act upon)
      - think/talk about n:: data minimisation
      - break it up: keep reads separate from reasoning steps. Keeps everything focused on the specific context.
      - dashboard the ratio of tokens fetched versus tokens used in answers. Lopsided ratios indicate you're overfeeding the system.
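
      A minimal sketch of what the verb-first, reads-vs-reasoning split can look like as an MCP server, using the TypeScript MCP SDK. findCustomerId and getCustomerDetailsForRefund come from the excerpt above; the backend helpers, field names, and exact SDK call signatures are assumptions and may differ between SDK versions.

      ```typescript
      import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
      import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
      import { z } from "zod";

      // Hypothetical backend helpers, standing in for whatever system of record you query.
      async function lookupCustomerIdByEmail(email: string): Promise<string | undefined> {
        return email === "ada@example.com" ? "cust_123" : undefined;
      }
      async function fetchRefundFields(customerId: string) {
        // Returns only the fields the refund decision needs, not the whole customer record.
        return { customerId, plan: "pro", lastPaymentDate: "2024-11-02", refundEligible: true };
      }

      const server = new McpServer({ name: "support-tools", version: "0.1.0" });

      // Verb-named and narrowly scoped: returns just an ID, not a customer blob.
      server.tool(
        "findCustomerId",
        { email: z.string().email() },
        async ({ email }) => {
          const id = await lookupCustomerIdByEmail(email);
          return { content: [{ type: "text", text: id ?? "not found" }] };
        }
      );

      // A separate read step that fetches only the fields needed for one decision.
      server.tool(
        "getCustomerDetailsForRefund",
        { customerId: z.string() },
        async ({ customerId }) => {
          const details = await fetchRefundFields(customerId);
          return { content: [{ type: "text", text: JSON.stringify(details) }] };
        }
      );

      await server.connect(new StdioServerTransport());
      ```

      Keeping each tool's return payload to a handful of fields is what makes the fetched-vs-used token ratio easy to keep honest.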

    2. In an extreme case of data hoarding infecting an entire company, you might discover that every team in your organization is building their own blob. Support has one version of customer data, sales has another, product has a third. The same customer looks completely different depending on which AI assistant you ask. New teams come along, see what appears to be working, and copy the pattern. Now you’ve got data hoarding as organizational culture.

      MCP data hoarding leads to parallel, per-team data silos, exactly the kind of thing we spent a lot of energy trying to reduce

    3. data hoarding trap find themselves violating the principle of least privilege: Applications should have access to the data they need, but no more

      n:: Principle of least privilege: applications should only have access to the data they need, and never more. Data hoarding in MCPs violates exactly that.

    1. Resources support two discovery patterns:
       Direct Resources - fixed URIs that point to specific data. Example: calendar://events/2024 - returns calendar availability for 2024.
       Resource Templates - dynamic URIs with parameters for flexible queries. Example: travel://activities/{city}/{category} - returns activities by city and category; travel://activities/barcelona/museums - returns all museums in Barcelona.
       Resource Templates include metadata such as title, description, and expected MIME type, making them discoverable and self-documenting.

      Resources can be invoked with fixed and dynamic URIs (sketch below)
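
      A rough sketch of registering both discovery patterns with the TypeScript MCP SDK, reusing the calendar:// and travel:// URIs from the excerpt; the server name, payloads, and exact SDK signatures are assumptions and may differ between versions.

      ```typescript
      import { McpServer, ResourceTemplate } from "@modelcontextprotocol/sdk/server/mcp.js";
      import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

      const server = new McpServer({ name: "travel-resources", version: "0.1.0" });

      // Direct resource: a fixed URI pointing at one specific piece of data.
      server.resource("calendar-2024", "calendar://events/2024", async (uri) => ({
        contents: [
          {
            uri: uri.href,
            mimeType: "application/json",
            text: JSON.stringify({ year: 2024, busy: ["2024-03-12", "2024-06-01"] }),
          },
        ],
      }));

      // Resource template: a parameterized URI such as travel://activities/barcelona/museums.
      server.resource(
        "activities-by-city",
        new ResourceTemplate("travel://activities/{city}/{category}", { list: undefined }),
        async (uri, { city, category }) => ({
          contents: [
            {
              uri: uri.href,
              mimeType: "application/json",
              text: JSON.stringify({ city, category, results: [] }),
            },
          ],
        })
      );

      await server.connect(new StdioServerTransport());
      ```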

    2. Resources expose data from files, APIs, databases, or any other source that an AI needs to understand context. Applications can access this information directly and decide how to use it - whether that’s selecting relevant portions, searching with embeddings, or passing it all to the model.

      resources are just that: read-only material to invoke. API, filesystem, databases, etc. (client-side sketch below)
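
      On the application side, discovering and reading a resource might look roughly like this with the TypeScript MCP SDK client; the server.js entry point and client name are hypothetical, and the exact client API may vary by SDK version.

      ```typescript
      import { Client } from "@modelcontextprotocol/sdk/client/index.js";
      import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

      // Spawn the (hypothetical) server over stdio and connect.
      const transport = new StdioClientTransport({ command: "node", args: ["server.js"] });
      const client = new Client({ name: "annotation-demo-client", version: "0.1.0" }, { capabilities: {} });
      await client.connect(transport);

      // Discover what the server exposes, then read one resource directly.
      const { resources } = await client.listResources();
      console.log(resources.map((r) => r.uri));

      const calendar = await client.readResource({ uri: "calendar://events/2024" });
      // The application decides what to do with the contents: pick relevant portions,
      // search them with embeddings, or pass everything to the model.
      console.log(calendar.contents[0]);
      ```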