Think for a minute about consequentialism. On this view, we should do whatever results in the best outcomes for the most people. One of the classic forms of this approach is utilitarianism, which says we should do whatever maximizes overall ‘utility’. Confusingly, ‘utility’ in this case does not refer to usefulness, but to a sort of combo of happiness and wellbeing. When a utilitarian tries to decide how to act, they take stock of all the probable outcomes and what sort of ‘utility’ or happiness will be brought about for all parties involved. This process is sometimes referred to by philosophers as ‘utility calculus’. When I am trying to calculate the expected net utility gain from a projected set of actions, I am engaging in ‘utility calculus’ (or, in normal words, utility calculations).
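To make the idea of ‘utility calculus’ concrete, here is a minimal sketch in Python. The action names and utility numbers are entirely made up for illustration; the point is just the shape of the calculation: weight each probable outcome's utility by its probability, sum, and pick the action with the highest expected utility.

```python
# A toy 'utility calculus'. All names and numbers below are
# hypothetical, invented purely for illustration.

def expected_utility(outcomes):
    """Sum probability-weighted utility over an action's possible outcomes.

    `outcomes` is a list of (probability, utility) pairs, where each
    utility is the net happiness/wellbeing across all affected parties.
    """
    return sum(p * u for p, u in outcomes)

# Two candidate actions, each with its probable outcomes.
actions = {
    "tell_the_truth": [(0.7, 10), (0.3, -5)],   # likely good, some risk of hurt
    "white_lie":      [(0.9, 2), (0.1, -20)],   # mild gain, small chance of harm
}

# The utilitarian picks whichever action maximizes expected utility.
best = max(actions, key=lambda a: expected_utility(actions[a]))
```

Here `tell_the_truth` scores 0.7·10 + 0.3·(−5) = 5.5, while `white_lie` scores 0.9·2 + 0.1·(−20) = −0.2, so the calculus favors telling the truth. Of course, the hard part in real life is not the arithmetic but assigning honest probabilities and utilities in the first place.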
There is no "one size fits all" for ethics when it comes to data. I think what matters most are base questions like "how are people affected?" and "what actions are deemed universally appropriate?" Consequently, no two people have exactly the same morals. While it would be nice if everyone followed the same standard, so that there would be one set of rules that is easy for everyone to follow, the real world isn't like that, and we have to be sensitive to these differences, because a given moral stance could be rooted in cultural belief.