How We Bring Order to the Data

Just like there’s no one-size-fits-all recipe for good cooking, there’s no definitive guide on how to sort and clean data for every situation.
However, just as there are good and bad cooks, there are also good and bad ways to handle data. It all boils down to some essential principles like common sense, careful preparation, diligent research, and clear planning. It’s not exactly rocket science!

...or is it?

Let’s look at a major data error that led to a catastrophic failure of NASA’s most advanced spacecraft.

The important cleanup between collection and application

In a library, books are sorted by genre and theme, and then alphabetised. This is to make it easier to find the books you are looking for.
When we sort and clean up data, the goal is largely the same. We aim to categorise information in a sensible way, tailored to a specific goal, so that the data becomes understandable and easier for both humans and machines to navigate.
To do this, we need somewhere to structure the data, like a spreadsheet or a database. The tool you use will in turn dictate what you can do with the data.
In some cases, a good, old-fashioned spreadsheet and manual data entry do the trick. Have you ever sorted a table (for instance, invoices) by ascending value? Then you’ve done more than just store data: you’ve structured it according to a specific numeric sequence.
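If you were to do the same thing programmatically, as a small preview of the tools we’ll meet later, it could look like this tiny Python sketch (the invoice amounts are invented):

    # A minimal sketch: sorting invented invoice amounts in ascending order.
    invoices = [1250.00, 499.90, 820.50, 99.00]
    invoices.sort()
    print(invoices)  # [99.0, 499.9, 820.5, 1250.0]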
But for more complex tasks, like advanced data analysis or handling big data, we need more sophisticated tools than manual entry.
While Excel is a great tool, it does have its limits. It’s not what large companies like DNB and Telenor use when analysing billions of transactions or monitoring activity in the mobile network. Instead, they use databases and other storage structures such as those you learned about in the previous chapter, often built on a cloud-based platform like Microsoft Azure. These platforms offer tailored solutions for collecting, processing, and using data.
In the next topic, we’ll dive deeper into data science, which involves using advanced statistics, programming, and machine learning to work with data. But for now, let’s start with a simpler example.

A simple project

In the previous topic, we discussed the factors we need to consider in order to maintain good data quality. We used salary levels as an example. Let’s make that more specific now:
Suppose you have a school project where you want to analyse salary trends and equality in the IT industry. Your aim is not to conduct a large-scale, representative study like what Statistics Norway might do. Instead, you want to create a relatively simple overview that gives you an idea about the situation.
You get five small companies involved in the study. On average, they have 20 employees each. What you want to find out, among other things, is whether there are signs of systematic wage discrimination, and how wages develop across gender, type of job, and department. You are not a professional data scientist, and you want to keep this fairly simple. So what do you do?
Let’s break this down step by step:

1. Identifying data

You want to compare salaries and their growth using variables like gender, age, position, and department. Some companies employ overseas developers, so you also include location as a variable. Now you need to gather all the necessary data.
Each company sends you the information you need. The employees have agreed to share the relevant details, and the data has been anonymised, but individuals could still conceivably be identified. The data set should therefore be kept confidential, and any findings should be generalised.
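To make this concrete, here is what a single row of such a dataset might look like. Every company name, value, and figure below is fictional and only meant to illustrate the structure:

    Company | Year | Gender | Age | Position  | Department | Location | Currency | Salary
    Alpha   | 2023 | F      | 34  | Developer | Product    | Oslo     | NOK      | 62 000

With one row per employee per year, you can track salary growth over time as well as compare levels across groups.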

2. Choice of tools

Much like a chef needs a kitchen, you need a place to work with the data. For most people, this would be a spreadsheet. Programs like Microsoft Excel, Google Sheets and Numbers are relatively easy to get started with, while also being very powerful. Up to a certain point, of course.
Going back to the cooking analogy: If you plan to upscale and professionalise your service, you’ll reach a point where you need more advanced tools, like databases, cloud services, APIs and more. However, that doesn’t mean you need an industrial kitchen to create gourmet food.
So, for now, we’ll stick with our “home kitchen”—good old Excel—for manual data entry and analysis.

3. Cleaning up the data

Then there’s the cleaning up.
Since we’re talking about cooking (yes, it’s a good metaphor, so bear with us!), this corresponds to what chefs call “prep”: you check the quality of the ingredients, measure correctly, chop the vegetables, find the sauces and spices you want to use, and set everything out on the counter.
If you keep everything tidy, so you know where everything is and what needs to be done, it will be much easier to deal with two pots about to boil over and a beeping countdown timer later on.
So, what’s the Excel equivalent of the chefs’ prep? In order to understand this, we need to apply everything we learned about data quality.
Here’s a simplified checklist for this project (a small code sketch of the same cleanup follows at the end of this step):
  • Sorting: Sort the data so that each column holds one type of value and each row represents one record. For example, keep data about “Salary” in one column, “Gender” in another, and “Department” in a third.
  • Variables: Make sure all variables are present. Do you want to be able to compare based on education and seniority? Then you need separate columns for these variables and must ensure that you have the necessary data.
  • Naming: Be consistent in how you name variables. Avoid, for example, using “Salary” in one place, “Income” in another, and “Wages” in a third. Here you can use the “Find and Replace” function to clean up. Also, spell-check, remove unnecessary spaces, and so on. Consistency is key to ensuring everything is properly structured and sortable.
  • Formatting: Verify that numbers and values are correctly formatted. For example, ensure that numbers are indeed formatted as numbers, not text (otherwise, they won’t be included in calculations).
  • Units: If the employees in Norway are paid in Norwegian kroner while the overseas developers are paid in euros, you need to convert everything to the same currency to make the figures comparable.
  • Duplicates: Find and remove duplicates and overlapping data.
The key here is to maintain oversight and control: You want to avoid overlapping data, incomplete data, unreliable data, missing values and so on. The data needs to be structured in a way that makes it clear and comparable.
Most importantly, you must trust your data quality. In other words, the data must accurately represent what it’s supposed to in the real world. It should be accurate, complete, current, reliable, and consistent across sources, free of biases and preconceptions that might skew the results.
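In Excel, most of this checklist is point-and-click work: Find and Replace, cell formats, the Remove Duplicates button, and so on. As a preview of the programmatic route we hinted at earlier, here is a minimal sketch of the same cleanup in Python’s pandas library. The file name, column names, and exchange rate are all invented for illustration:

    import pandas as pd

    # "salaries.csv" is a hypothetical export of the collected data, with
    # columns as in the fictional example row shown in step 1.
    df = pd.read_csv("salaries.csv")

    # Naming: trim stray spaces and unify capitalisation in text columns.
    df["Department"] = df["Department"].str.strip().str.title()

    # Formatting: make sure salaries are numbers, not text.
    df["Salary"] = pd.to_numeric(df["Salary"], errors="coerce")

    # Units: convert euro salaries to kroner so everything is comparable.
    # (11.5 is a placeholder exchange rate, not a real quote.)
    is_eur = df["Currency"] == "EUR"
    df.loc[is_eur, "Salary"] = df.loc[is_eur, "Salary"] * 11.5
    df.loc[is_eur, "Currency"] = "NOK"

    # Duplicates: drop rows that occur more than once.
    df = df.drop_duplicates()

Whichever tool you use, the goal is the same: each step makes the data a little more consistent before any analysis begins.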

4. From data to insight

Once you have identified, collected, structured, and cleaned up the data, the real fun begins.
Thanks to your well-structured data, you can use Excel’s formulas and functions to calculate averages, generate graphs and charts, and take other steps to uncover and highlight the insights you’re seeking.
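In Excel, built-in functions like AVERAGE and AVERAGEIF do exactly this job. For comparison, the same calculation in the hypothetical Python sketch from step 3 is a one-liner:

    # Average salary per department and gender, using the cleaned-up table
    # ("df") from the sketch in step 3.
    print(df.groupby(["Department", "Gender"])["Salary"].mean())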
Visualisations, which we’ll delve into more in the following chapter, offer a much simpler way to understand data compared to just looking at endless rows and columns of numbers. For instance, basic bar charts can easily show the average salary based on various factors like company, department, gender, age, and level of experience.
Or imagine a line graph (you know, those curves that look like a mountain range in a coordinate system), where “developers” get a red line, “designers” a yellow one, and “salespeople” a blue one. The Y-axis shows the average monthly income for each group, while the X-axis shows the development year by year. If you break each group down by gender as well, you might start to notice some noteworthy trends.
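Continuing the hypothetical Python sketch, such a graph could be drawn with the matplotlib library. The colour choices mirror the description above, and the “Year” and “Position” columns are our own assumptions:

    import matplotlib.pyplot as plt

    # Average salary per position for each year. "Year" and "Position" are
    # assumed columns in the cleaned-up table, and the three positions below
    # are assumed to be the only ones.
    pivot = df.pivot_table(index="Year", columns="Position",
                           values="Salary", aggfunc="mean")
    pivot.plot(color={"Developer": "red", "Designer": "yellow",
                      "Salesperson": "blue"})
    plt.xlabel("Year")
    plt.ylabel("Average monthly salary (NOK)")
    plt.show()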
Once your data is well-structured and ready, a world of possibilities opens up. We’ll explore these in greater detail in Chapter 5.
Sometimes, gathering, cleaning, and preparing data can be a daunting task, especially when dealing with complex problems or large, unorganised datasets.
Before we move on to the actual utilisation of data in the next chapter, we’ll first take a closer look at cleaning and structuring. We’ll examine more complex scenarios and explore more advanced tools for organising and sorting data.