Post by account_disabled on Dec 14, 2023 12:04:08 GMT 1
We ignore natural fluctuations in search rankings and inherent differences between keywords. This time, I wanted to take a closer look at the pitfalls. I will focus on the winners. The table below shows the top one-day winners within each keyword tracking set. I only included subdomains whose one-day ranking count met a minimum threshold. Putting aside the usual statistical suspects (small sample sizes for certain keywords, unique strengths and weaknesses of our dataset, etc.), what's wrong with this analysis? Of course, there are different ways to report % gain (such as absolute change vs. relative percentage).
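The absolute-vs-relative distinction matters more than it might seem. Here is a minimal sketch; the ranking counts below are made up for illustration and do not come from the tracking data:

```python
# A minimal sketch of absolute vs. relative change in ranking counts;
# all numbers here are illustrative, not from the actual study.
def ranking_changes(before, after):
    """Return (absolute change, relative % change) in ranking counts."""
    absolute = after - before
    relative = 100.0 * absolute / before
    return absolute, relative

# A big site gains 30 rankings on a large base (modest relative gain),
# while a small site gains the same 30 on a small base (huge relative gain).
print(ranking_changes(300, 330))  # (30, 10.0)
print(ranking_changes(40, 70))    # (30, 75.0)
```

The same +30 absolute gain reads as either a 10% or a 75% winner depending on the base, which is exactly why a "top winners" list can tell different stories.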
While I honestly reported the absolute numbers, and the relative change is accurate, by choosing to run the numbers one day later we ignore the reality that most core updates roll out over multiple days (a trend that seems to have continued with the May core update, as evidenced by our original chart). We also failed to account for domains that may have historically had inconsistent rankings (more on this later). What happens if we compare the data day by day? Which story do we tell? The table below adds relative percentage gains by day.
I've kept the same subdomains and will continue to sort them by day-one percentage gain for consistency: Even just comparing the first two days of the rollout, we can see that things have changed quite a bit. The question is: which story do we want to tell? Often, we don't even look at lists, but at anecdotes based on our own customers or curated data. Consider this story: If this were our only view of the data, we might conclude that the update intensified across both days, with the second day rewarding more sites. We could even start crafting a story about how demand for apps grew, or how certain news sites were rewarded.
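The "which day do we sort by?" problem can be sketched in a few lines. The subdomains and gain figures below are hypothetical, purely to show how the same dataset yields different "winners" depending on the day chosen:

```python
# Hypothetical day-over-day relative gains (%) for made-up subdomains;
# none of these figures come from the actual tracking data.
gains = {
    "site-a.example": {"day1": 80, "day2": 5},
    "site-b.example": {"day1": 10, "day2": 60},
    "site-c.example": {"day1": 35, "day2": 30},
}

# Sorting the same data by different days produces different "winners".
by_day1 = sorted(gains, key=lambda d: gains[d]["day1"], reverse=True)
by_day2 = sorted(gains, key=lambda d: gains[d]["day2"], reverse=True)

print(by_day1)  # ['site-a.example', 'site-c.example', 'site-b.example']
print(by_day2)  # ['site-b.example', 'site-c.example', 'site-a.example']
```

Neither ordering is wrong; each simply privileges one day of a multi-day rollout, which is the trap described above.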