Normalization means that each fact is expressed in exactly one place. The goal is to divide your information into as many well-defined subsets as is practical. However, atomic specificity and perfection are impossible, and chasing them won't help anyone: getting too granular can produce a huge, unwieldy dataset. Ultimately, analysis will likely require recombining the data, but that task is straightforward if your data is normalized. The appropriate level of normalization varies depending on whether you're working in a relational database or running analyses on derived tables, but thinking about normalization from the start can keep things clean.
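To make the idea concrete, here is a minimal sketch in Python with pandas. The table names, columns, and recipe data are hypothetical, not drawn from my actual dataset: it just shows how a denormalized table (ingredients crammed into one column) can be split so each recipe-ingredient fact lives in one row, and how a join recombines things later.

```python
import pandas as pd

# Hypothetical denormalized table: one row per recipe,
# all ingredients packed into a single delimited column.
recipes = pd.DataFrame({
    "recipe_id": [1, 2],
    "title": ["Eye salve", "Cough remedy"],
    "ingredients": ["honey; wine; saffron", "honey; vinegar"],
})

# Normalize: each recipe-ingredient pairing becomes its own row,
# so the fact "recipe 1 uses honey" is stated exactly once.
recipe_ingredients = (
    recipes.assign(ingredient=recipes["ingredients"].str.split("; "))
           .explode("ingredient")
           .loc[:, ["recipe_id", "ingredient"]]
)

# Recombining for analysis is a straightforward join.
combined = recipe_ingredients.merge(
    recipes[["recipe_id", "title"]], on="recipe_id"
)
print(combined)
```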
So far, my research has involved tabulating the ingredients in ancient recipes. There is a certain amount of extra granularity I need to capture to account for vague descriptions, but my initial model for separating out ingredients created an absolute mess.