Analytics Charter

Opensignal’s analytics are designed to produce meaningful reports and insights that allow an objective comparison of different networks.

Our analytics follow data science best practices to ensure that results can be trusted and are not misleading.

We believe these analytics policies, when combined with the measurement philosophy laid out in our Experience Charter, lead the industry and produce the most accurate reports and insights, revealing users' true experience.

  • Consistent methodology: Opensignal’s analytics methodology has been developed independently over several years and examined by many industry stakeholders. Fundamental to our approach is that we never change our methodology to suit the needs of a particular country or operator. Our methodology changes only when we make improvements, and those improvements are applied consistently everywhere. By doing this we ensure that global comparisons can be made.
  • Real users with equal weighting: Our analytics are designed to ensure that each user has an equal impact on the results, and that only real users are counted: “one user, one vote”. Our analysis ensures as far as possible that we only include data from devices that display normal user behavior and not the behavior of test and engineering devices, research projects, etc.
  • Full disclosure of sample parameters: Opensignal results will always be accompanied by details of the sample period they were drawn from, the size of our population of active devices and samples, and the methodology used to calculate them.
  • Confidence intervals always displayed: All information in our reports is presented with confidence intervals, which represent the precision to which a given result can be stated. Confidence intervals are standard scientific practice when reporting on sampled data and Opensignal uses best-practice statistical methods for doing this.
  • Only statistically significant conclusions drawn: If a difference between operators for a given metric is not statistically significant, we declare it a draw and report it as such. Where confidence intervals are large, we do not report the data at all, ensuring that we only publish results in which we are confident.
  • Only an operator’s direct customers are included: To ensure that our data truly represents the experience of an operator’s own customers, we exclude measurements generated by inbound and outbound international roamers and by MVNO customers, as they may be subject to different service agreements. Measurements generated by national roamers are attributed to the user’s home operator, as national roaming is part of their network experience.
  • Consistent standard time intervals used: We normally report only on standard time intervals and the most recent data; we do not “cherry-pick” an arbitrary time period in a report to support a predetermined conclusion. Where a different time window is used, this will always be clearly identified.
  • Comparisons must be like-for-like operators: In comparisons we only include operators whose network is widely represented across the area under study. Regional operators are not shown in national comparisons.
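To illustrate the statistical principles above, here is a minimal sketch of how an overlapping-confidence-interval comparison can yield either a winner or a draw. This is an illustration only, not Opensignal’s actual methodology; the function names, the normal approximation, and the 95% level are assumptions chosen for clarity.

```python
import math
import statistics

def confidence_interval(samples, z=1.96):
    """Approximate 95% confidence interval for the mean of a sample,
    using the normal approximation (illustrative, not Opensignal's method)."""
    mean = statistics.mean(samples)
    margin = z * statistics.stdev(samples) / math.sqrt(len(samples))
    return mean - margin, mean + margin

def compare_operators(samples_a, samples_b):
    """Return 'A' or 'B' if one operator's interval lies entirely above
    the other's; otherwise declare a 'Draw' because the difference is
    not statistically significant."""
    lo_a, hi_a = confidence_interval(samples_a)
    lo_b, hi_b = confidence_interval(samples_b)
    if lo_a > hi_b:
        return "A"
    if lo_b > hi_a:
        return "B"
    return "Draw"  # intervals overlap: no significant difference

# Hypothetical speed measurements (Mbps) for two operators
operator_a = [50 + i for i in range(20)]
operator_b = [30 + i for i in range(20)]
print(compare_operators(operator_a, operator_b))
```

With clearly separated samples the function names a winner; feeding it two overlapping samples returns a draw, mirroring the policy of never drawing conclusions that the data cannot support.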