some practices that can make those discussions easier, by starting with constraints that even skeptical developers can see the value in:

**Build tools around verbs, not nouns.** Create `checkEligibility()` or `getRecentTickets()` instead of `getCustomer()`. Verbs force you to think about specific actions and naturally limit scope.

**Talk about minimizing data needs.** Before anyone creates an MCP tool, have a discussion about the smallest piece of data the AI needs to do its job, and about what experiments the team can run to figure out what the AI truly needs.

**Break reads apart from reasoning.** Separate data fetching from decision-making when you design your MCP tools. A simple `findCustomerId()` tool that returns just an ID uses minimal tokens, and might not even need to be an MCP tool at all if a simple API call will do. Then `getCustomerDetailsForRefund(id)` pulls only the specific fields needed for that decision. This pattern keeps context focused and makes it obvious when someone is trying to fetch everything (both of these tool-design points are shown in the sketch after this list).

**Dashboard the waste.** The best argument against data hoarding is showing the waste. Track the ratio of tokens fetched versus tokens used, and display it in an "information radiator" style dashboard that everyone can see. When a tool pulls 5,000 tokens but the AI only references 200 in its answer, everyone can see the problem. Once developers see they're paying for tokens they never use, they get very interested in fixing it (see the second sketch below for one way to compute that ratio).
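To make the verb-first and read/reason-split points concrete, here is a minimal sketch using the MCP TypeScript SDK (`@modelcontextprotocol/sdk`). The tool names come from the examples above; the `lookupCustomerId` and `lookupRefundFields` helpers, their return shapes, and the server metadata are hypothetical stand-ins for whatever your real data layer provides:

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical backend helpers -- replace with your real data layer.
async function lookupCustomerId(email: string): Promise<string> {
  return "cust_123"; // e.g. a database query in a real server
}

async function lookupRefundFields(customerId: string) {
  // Return only the fields a refund decision needs, never the whole record.
  return { purchaseDate: "2024-11-02", amount: 49.99, refundCount: 1 };
}

const server = new McpServer({ name: "support-tools", version: "0.1.0" });

// Narrow read: a verb-named tool that returns just an identifier.
server.tool(
  "findCustomerId",
  "Look up a customer ID by email address",
  { email: z.string() },
  async ({ email }) => ({
    content: [{ type: "text", text: await lookupCustomerId(email) }],
  })
);

// Scoped read: only the specific fields needed for the refund decision.
server.tool(
  "getCustomerDetailsForRefund",
  "Fetch only the fields needed to decide on a refund",
  { customerId: z.string() },
  async ({ customerId }) => ({
    content: [
      { type: "text", text: JSON.stringify(await lookupRefundFields(customerId)) },
    ],
  })
);

await server.connect(new StdioServerTransport());
```

Note that `findCustomerId` returns a bare string rather than a JSON blob; a read that simple might not need to be an MCP tool at all.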
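For the dashboard, the hard part is attributing "used" tokens. Below is a deliberately crude, self-contained sketch of the fetched-versus-used ratio: the ~4 characters-per-token estimate, the value-substring attribution heuristic, and all names in it are assumptions, so swap in a real tokenizer and your own attribution logic before trusting the numbers:

```ts
// Rough token estimate (~4 characters per token); use a real tokenizer in production.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

interface ToolCallRecord {
  tool: string;
  payload: string;
  fetchedTokens: number;
}

class TokenWasteTracker {
  private calls: ToolCallRecord[] = [];

  // Record each tool result as it comes back from the MCP server.
  recordToolResult(tool: string, payload: string): void {
    this.calls.push({ tool, payload, fetchedTokens: estimateTokens(payload) });
  }

  // Crude attribution: count the tokens of top-level JSON values that actually
  // appear in the final answer. Enough to spot a tool that fetches 5,000
  // tokens when only 200 show up downstream.
  report(finalAnswer: string): void {
    for (const call of this.calls) {
      let usedTokens = 0;
      try {
        const values = Object.values(JSON.parse(call.payload)).map(String);
        usedTokens = values
          .filter((v) => finalAnswer.includes(v))
          .reduce((sum, v) => sum + estimateTokens(v), 0);
      } catch {
        // Non-JSON payloads get zero attribution in this sketch.
      }
      const ratio = call.fetchedTokens ? usedTokens / call.fetchedTokens : 0;
      console.log(
        `${call.tool}: fetched ~${call.fetchedTokens} tokens, ` +
          `~${usedTokens} referenced in the answer (ratio ${ratio.toFixed(2)})`
      );
    }
  }
}

// Usage: feed the tracker every tool result, then report once the model answers.
const tracker = new TokenWasteTracker();
tracker.recordToolResult(
  "getCustomerDetailsForRefund",
  JSON.stringify({ purchaseDate: "2024-11-02", amount: 49.99, refundCount: 1 })
);
tracker.report("Approved: the 2024-11-02 purchase of 49.99 has only 1 prior refund.");
```

Pipe those log lines into whatever metrics system feeds your team's information radiator; the point is visibility, not precision. Lopsided ratios tell you which tools are overfeeding the system.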