When people use mobile and broadband connections, they care most about how well they can communicate, play games and consume content. We measure the real-world experience of consumers as they use these networks, independently of any carrier-specific data feeds.
Our process begins with collecting billions of measurements daily from a diverse and representative panel of research participants spanning a wide range of demographics and devices.
With participants' consent, data collection is automated: tests run continuously, in all the places people live, work and play, indoors and outdoors, and irrespective of whether participants connect over Wi-Fi or cellular, providing the largest and most representative data set.
Opensignal tests are conducted against common internet endpoints, including Content Delivery Networks (CDNs) such as Google, Akamai and Amazon. This mirrors the way connections are made each day to typical websites and content, and provides a truly representative end-to-end view of network experience.
Our measurements are designed so that operators cannot treat our traffic differently and thereby improve their results without making actual improvements to their networks. This approach is unique to Opensignal. It reflects how consumers actually spend their time, in applications such as video and games, and provides network operators with actionable insights grounded in how consumers actually use their digital devices.
We take extensive measures to ensure that the privacy of research participants is respected throughout the entire process and that data is only used for the depersonalized purposes they permit when they opt in. The details can be found in our Data Privacy Charter.
The processing and analysis of all the measurements we collect are based on best-in-class data science methods, follow the principles of our Experience Charter and our Analytics Charter, and are explained in more detail below.
The process is designed to ensure that incorrect or potentially distorting data cannot influence the results, and that results are presented in a way that can be clearly understood and relied upon.
Opensignal collects measurements of network experience quality and speed based on both user-initiated and regular periodic tests.
The majority of measurements are generated through regularly scheduled periodic tests, executed independently and at random intervals to capture what users are experiencing at a typical moment in time. This approach is recognized as best practice by a number of official bodies including the FCC in the U.S.
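Purely as an illustration of randomized scheduling, and not Opensignal's implementation, a test loop with memoryless random intervals might look like the sketch below; the mean interval and the `run_test` hook are assumptions for the example.

```python
import random
import time

MEAN_INTERVAL_S = 3600  # assumed average spacing between tests; illustrative only

def run_periodic_tests(run_test):
    """Schedule tests at random, memoryless intervals so measurements
    sample typical moments rather than a predictable timetable."""
    while True:  # runs for the lifetime of the measurement process
        # exponential spacing makes the next test time unpredictable
        time.sleep(random.expovariate(1 / MEAN_INTERVAL_S))
        run_test()
```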
When any application downloads data, there is an initial ramp-up period while the connection is established, during which the download speed is lower than the stable speed achieved once the download is fully under way.
A consumer's experience on today's networks is shaped by speed-sensitive applications such as streaming video and large file downloads. These are influenced mainly by the stable speed (sometimes called the “goodput”) throughout the task, not the initial speed when the action is triggered. Conversely, applications such as web browsing are influenced heavily by ramp-up time and latency. Our speed tests therefore measure the ability to accomplish a task, rather than treating speed as a standalone metric.
Because our focus is on measuring the user experience of applications such as streaming and large file downloads, which are influenced primarily by the stable speed, we use a fixed-time test rather than a simple fixed-file-size download. Measuring over a fixed time period brings the measured speed much closer to the stable speed the user experiences in the application.
As well as being more representative, this fixed-time approach makes comparisons between widely different network speeds more meaningful. A file of a few MB downloaded over a 3G network takes several seconds, so the speed measured is influenced primarily by the goodput and only slightly by the ramp-up. Downloading the same file over a 5G network takes much less time, so the overall speed measured is dominated by the ramp-up time and unrelated to the speed seen by a streaming application.
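To make the distinction concrete, the sketch below shows a minimal fixed-time throughput measurement in Python. The endpoint URL, warm-up and window lengths and chunk size are illustrative assumptions, not Opensignal's actual test parameters.

```python
import time
import requests  # third-party HTTP client, assumed available

TEST_URL = "https://cdn.example.test/large-file"  # hypothetical endpoint
WARMUP_S = 1.0   # discard the initial ramp-up period (assumed value)
WINDOW_S = 5.0   # fixed measurement window (assumed value)

def fixed_time_download_speed(url: str = TEST_URL) -> float:
    """Measure stable download speed (Mbps) over a fixed time window."""
    start = time.monotonic()
    measured_bytes = 0
    window_start = None
    with requests.get(url, stream=True, timeout=10) as resp:
        resp.raise_for_status()
        for chunk in resp.iter_content(chunk_size=64 * 1024):
            now = time.monotonic()
            if now - start < WARMUP_S:
                continue  # still ramping up; ignore these bytes
            if window_start is None:
                window_start = now
            measured_bytes += len(chunk)
            if now - window_start >= WINDOW_S:
                break  # fixed time reached; stop regardless of bytes left
    if window_start is None:
        raise RuntimeError("download ended before the measurement window")
    elapsed = time.monotonic() - window_start
    return measured_bytes * 8 / (elapsed * 1_000_000)
```

Because the clock, not the file size, ends the test, the ramp-up contributes little to the result on fast and slow networks alike.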
The Opensignal Video Experience measurement streams sample video from typical content providers and measures a range of parameters that directly impact the user experience, such as loading time (the time taken for the video to start) and the video stalling ratio (the proportion of users who experience an interruption in playback after the video begins streaming) for different picture qualities or bit rates.
To calculate our video metric, we use video content providers selected to represent typical user experience.
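As a simplified illustration of how per-session video metrics of this kind can be derived, the following sketch computes loading time and stalling ratio from playback logs; the session fields are hypothetical and assumed for the example.

```python
from dataclasses import dataclass

@dataclass
class PlaybackSession:
    """Hypothetical per-session playback log."""
    request_time: float      # when the user pressed play (seconds)
    first_frame_time: float  # when the first frame rendered (seconds)
    stall_count: int         # interruptions after playback began

def loading_time(s: PlaybackSession) -> float:
    """Time the user waited for the video to start."""
    return s.first_frame_time - s.request_time

def stalling_ratio(sessions: list[PlaybackSession]) -> float:
    """Proportion of sessions with at least one stall after playback began."""
    stalled = sum(1 for s in sessions if s.stall_count > 0)
    return stalled / len(sessions)
```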
Our Reliability and Consistent Quality (CQ) measurements capture the foundational experiences that consumers depend on; without them, consumers remain frustrated. Importantly, we measure both a user's ability to connect to the internet (not just signal availability) and their ability to perform common, foundational tasks once they are connected.
We use the TCP and UDP protocols to measure round-trip traffic to internet endpoints and capture internet fundamentals such as connectivity, latency, packet loss and jitter.
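The sketch below illustrates the general technique with a UDP echo probe. The echo endpoint, probe count and the jitter definition used here (mean absolute difference between consecutive round-trip times, one common convention) are assumptions for the example, not Opensignal's specification.

```python
import socket
import statistics
import time

ECHO_SERVER = ("echo.example.test", 7)  # hypothetical UDP echo endpoint
PROBES = 20
TIMEOUT_S = 1.0

def udp_round_trip_stats() -> dict:
    """Send sequenced UDP probes and derive latency, jitter and packet loss."""
    rtts, lost = [], 0
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(TIMEOUT_S)
    for seq in range(PROBES):
        payload = seq.to_bytes(4, "big")
        sent = time.monotonic()
        sock.sendto(payload, ECHO_SERVER)
        try:
            data, _ = sock.recvfrom(64)
            if data[:4] == payload:
                rtts.append((time.monotonic() - sent) * 1000)  # ms
            else:
                lost += 1  # mismatched reply treated as a loss in this sketch
        except socket.timeout:
            lost += 1
    sock.close()
    jitter = (statistics.mean(abs(a - b) for a, b in zip(rtts, rtts[1:]))
              if len(rtts) > 1 else 0.0)
    return {
        "latency_ms": statistics.mean(rtts) if rtts else None,
        "jitter_ms": jitter,
        "packet_loss": lost / PROBES,
    }
```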
Opensignal uses a rigorous post-processing system that takes the raw measurements and calculates robust and representative metrics. This includes a number of steps to quality-assure the measurements.
For example, if a user failed to download any content, this measurement is eliminated and treated as a “failed test” rather than being included in the average speed calculation.
Similarly, when calculating metrics for a given network technology (e.g. 4G), measurements where a network type change is detected (e.g. from 5G to 4G) during the test are not included.
We automatically filter out certain entries that are known to produce atypical results (e.g. when a phone is in a call).
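A simplified sketch of quality-assurance filters of this kind is shown below; the record fields mirror the examples above but are otherwise hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    """Hypothetical raw test record."""
    bytes_downloaded: int
    start_tech: str   # technology at the start of the test, e.g. "5G"
    end_tech: str     # technology at the end of the test
    in_call: bool

def quality_assure(records: list[Measurement], tech: str) -> list[Measurement]:
    """Keep only records valid for speed averaging on a given technology."""
    kept = []
    for r in records:
        if r.bytes_downloaded == 0:
            continue  # counted separately as a failed test, not a 0-speed sample
        if r.start_tech != tech or r.end_tech != tech:
            continue  # technology changed during the test
        if r.in_call:
            continue  # known to produce atypical results
        kept.append(r)
    return kept
```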
To ensure that the results only reflect the experience of customers who bought the operator's own branded service, we treat results from Mobile Virtual Network Operator (MVNO) subscribers and roaming subscribers separately from those of the Mobile Network Operators. These subscribers may be subject to different Quality of Service (QoS) restrictions than an operator's own customers, and so their experience may differ.
For fixed broadband connections we report on the consumer-facing brands, unless stated otherwise. Where wholesale infrastructure exists, we consider the results for individual brands separately, as peering connections and CPE selections can have a significant impact on the customer experience.
We consolidate data into technology types: when considering 5G connections, for example, we combine mmWave, low-band and mid-band connections into a single technology type unless stated otherwise.
We calculate a single average per sampled device to ensure every device has an equal effect on the overall result. Essentially, we employ a “one device, one vote” policy in our calculations.
We eliminate a percentage of extreme high and low values. This trimming of extremes is common data science practice and ensures the calculated average represents the typical user experience.
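A minimal sketch of these two steps follows, assuming an illustrative 5% trim share and hypothetical device identifiers; the actual percentages used are not specified here.

```python
import statistics
from collections import defaultdict

def device_level_speeds(samples: list[tuple[str, float]]) -> list[float]:
    """Collapse raw samples to one average per device ('one device, one vote').

    samples: (device_id, speed_mbps) pairs; names are illustrative.
    """
    by_device = defaultdict(list)
    for device_id, speed in samples:
        by_device[device_id].append(speed)
    return [statistics.mean(v) for v in by_device.values()]

def trimmed_mean(values: list[float], trim_fraction: float = 0.05) -> float:
    """Drop an equal share of extreme low and high values, then average.

    The 5% default is an assumed figure for illustration only.
    """
    ordered = sorted(values)
    k = int(len(ordered) * trim_fraction)
    kept = ordered[k:len(ordered) - k] if k else ordered
    return statistics.mean(kept)
```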
Our reporting follows consistent output definitions across our insights, public reports and product solutions.
Per-device values are combined using a simple average to yield the Opensignal metrics found in our reports and analysis.
We provide an upper and a lower confidence interval estimate per operator, calculated using recognized standard techniques based on the sample size of the measurements.
Confidence intervals convey the margin of error, or precision, in the metric calculations. They represent the range in which the true value is very likely to lie, taking into account the entire range of measurements.
Whenever the confidence intervals for two or more operators overlap in a particular metric, the result is a statistical tie.
This is because one operator's apparent lead may not hold once measurement uncertainty is taken into account. For this reason, awards can have multiple winners in our reports.
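As an illustration, the sketch below computes a normal-approximation confidence interval for the mean (one recognized standard technique, used here as an assumed stand-in rather than the exact method) together with the overlap test for a statistical tie.

```python
import math
import statistics

def confidence_interval(values: list[float], z: float = 1.96) -> tuple[float, float]:
    """95% normal-approximation CI for the mean; requires len(values) >= 2."""
    mean = statistics.mean(values)
    sem = statistics.stdev(values) / math.sqrt(len(values))  # standard error
    return mean - z * sem, mean + z * sem

def is_statistical_tie(ci_a: tuple[float, float],
                       ci_b: tuple[float, float]) -> bool:
    """Two operators tie when their confidence intervals overlap."""
    return ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]
```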
A common practice in reports from other sources is to “cherry-pick” geographic boundaries or time periods to be able to make an observation about a specific operator.
For example, highlighting performance for a particular area of a city, or over a particular time period.
We do not do this and only report on standardized geographical boundaries (where available) and over the entire period covered by the measurements.
Our reporting timetable is under our control and is not released to operators in advance, so network changes cannot be timed to coincide with our reporting. This ensures that reports represent the consistent experience of the majority of users.
In addition to our methodologies and processes, Opensignal abides by a core set of principles relating to our definition of network experience, our commitment to analytical rigor, and our independence from operator influence.
The following Charters were developed to affirm these standards and commitments.