Section: New Results

Service Transparency

From Network-level Measurements to Expected QoE

Participants: Chadi Barakat, Thierry Spetebroot, Muhammad Jawad Khokhar, Damien Saucez and Nawfal Abbassi Saber.

Internet applications, especially multimedia applications used in a mobile context, are very sensitive to the delivery service they get from the network. However, the relation between this network service and the quality of these applications as perceived by end users is often unknown and hard to quantify. Some applications, such as Skype and Viber, have their own quality estimation techniques; others leave users to their own interpretation of the quality they perceive. Linking the quality of Internet applications as perceived by users to network-level measurements such as bandwidth or delay is therefore more necessary than ever. This dependence, known in the literature as linking Quality of Experience (QoE) to Quality of Service (QoS) parameters, serves many purposes. On one side, it allows a user to estimate the quality she/he will obtain before launching the application, or even before heading to the place where she/he will connect. On the other side, it helps network operators properly dimension their networks so as to anticipate service degradation and optimize the quality they deliver. Finally, correlating quality measurements across users, or across the different locations of the same user, can help in troubleshooting the causes of degraded quality.

Our project, called ACQUA, aims at estimating the quality of Internet applications at the access, starting from network-level measurements. It leverages measurements done at the network level as today (bandwidth, delay, loss rate, etc.) and applies well-calibrated models to them to estimate/predict the quality of experience of the main applications, even before launching them. ACQUA is extensible in terms of the applications it can track, and it allows fine-grained profiling of the Internet access at the level of application quality. In a previous work, we proved the feasibility of the approach with the Skype use case: we integrated into ACQUA a new model based on decision trees for the estimation of Skype QoE, and validated this model with both controlled local experiments and PlanetLab experiments. In 2016, we focused on the popular YouTube use case. We set up a new experimental testbed to automatically stream videos, vary network conditions, and record the corresponding Quality of Experience (modeled as a function of application-level Quality of Service metrics). One of the challenges we had to face is the complexity of experimentation, which we reduced using sampling techniques. The first results are very promising: we can considerably reduce the complexity of experimentation while reaching a high level of accuracy in the prediction of YouTube Quality of Experience. A paper illustrating the methodology and the obtained results is currently under submission. More details on this approach and on the ACQUA project can be found in section 5.1 and on the project web page http://project.inria.fr/acqua/.
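
To illustrate the kind of model involved, the following Python sketch trains a decision tree on network-level metrics to predict a QoE class. Feature names, values and labels here are purely hypothetical and do not reflect the calibrated models used in ACQUA.

```python
# Hypothetical sketch of a decision-tree QoE model in the spirit of ACQUA.
# Feature values, thresholds and labels are illustrative only.
from sklearn.tree import DecisionTreeClassifier

# Each sample: network-level measurements [bandwidth_kbps, rtt_ms, loss_pct]
X_train = [
    [3000,  20, 0.0],   # comfortable access link
    [2000,  40, 0.0],
    [ 500, 120, 1.0],   # constrained link
    [ 300, 200, 2.0],
    [ 150, 300, 5.0],   # heavily degraded link
]
# QoE labels observed during controlled experiments
y_train = ["good", "good", "fair", "bad", "bad"]

model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

# Predict the expected QoE from fresh access-level measurements,
# before the application is even launched.
print(model.predict([[800, 90, 0.5]]))
```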

Testing for Traffic Differentiation with ChkDiff: The Downstream Case

Participants: Ricardo Ravaioli and Chadi Barakat.

In the past decade it has been found that some Internet operators offer degraded service to selected user traffic by applying various differentiation techniques. While, from a legal point of view, many countries have discussed and approved laws in favor of Internet neutrality, verifying neutrality with measurement tools remains hard in practice, even for an experienced user. In this contribution, we extend and complete our tool ChkDiff, previously presented for the upstream case, by also checking for shaping on the user's downstream traffic. After attempting to localize shapers at the access ISP on upstream traffic, we replay downstream traffic from a measurement server and analyze per-flow one-way delays and losses, while taking into account the possibility of multiple paths between the two endpoints. As opposed to other proposals in the literature, our methodology does not depend on any specific Internet application a user might want to test, and it is robust to evolving differentiation techniques that alter delays or induce losses. In a recent publication [22], we provide a detailed description of the downstream tool and a validation in the wild for wired, WiFi and 3G connections. This work is the result of a collaboration with the SIGNET group at I3S, in the context of a PhD thesis funded by the UCN@Sophia Labex and defended in 2016.
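
The per-flow analysis can be illustrated with a simplified statistical test: the sketch below compares one flow's one-way delays against those of the rest of the replayed traffic and flags the flow when the two distributions differ significantly. This is only an illustration of the idea; the actual statistical methodology of ChkDiff is described in [22].

```python
# Simplified per-flow comparison: does this flow's one-way delay
# distribution differ from that of the remaining replayed traffic?
from scipy.stats import ks_2samp

def flow_looks_differentiated(flow_delays, baseline_delays, alpha=0.01):
    """Two-sample Kolmogorov-Smirnov test on one-way delay samples (ms)."""
    _stat, p_value = ks_2samp(flow_delays, baseline_delays)
    return p_value < alpha  # rejecting "same distribution" is suspicious

# Example: a flow with inflated delays versus the rest of the traffic.
baseline = [12, 14, 13, 15, 12, 13, 14, 16, 13, 12]
shaped = [45, 50, 48, 52, 47, 49, 51, 46, 50, 48]
print(flow_looks_differentiated(shaped, baseline))  # True
```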

Traceroute Facility for Content-Centric Networks

Participant: Thierry Turletti.

In the context of the UHD-on-5G associated team with our colleagues at NICT, Japan, we have proposed Contrace, a tool for measuring and tracing Content-Centric Networks (CCNs). CCNs are fundamental evolutionary technologies that promise to form the cornerstone of the future Internet. The information flow in these networks is based on named data requests, in-network caching, and forwarding, which are unique to CCNs and can be independent of IP routing. As a result, common IP-based network tools such as ping and traceroute can neither trace a forwarding path in CCNs nor feasibly evaluate CCN performance. We designed Contrace, a network tool for CCNs (in particular, for the CCNx implementation running on top of IP) that can be used to investigate 1) the Round-Trip Time (RTT) between content forwarder and consumer, 2) the state of in-network caches per name prefix, and 3) the forwarding path information per name prefix. This tool can be used to estimate content popularity and to design more effective cache control mechanisms in experimental networks. We have published an Internet-Draft [30] describing the specification of Contrace.
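
For illustration, the sketch below models, as hypothetical Python data structures, the kind of per-hop information a Contrace-like reply could carry for a traced name prefix, covering the three measurement targets listed above. Field names are ours; the actual message format is specified in the Internet-Draft [30].

```python
# Hypothetical per-hop report for a Contrace-like trace of a name prefix.
from dataclasses import dataclass, field
from typing import List

@dataclass
class HopReport:
    node_id: str         # identifier of the CCN forwarder
    rtt_ms: float        # RTT between this forwarder and the consumer
    cache_hit: bool      # whether the traced prefix was in its cache
    cached_objects: int  # cache state for the traced name prefix

@dataclass
class ContraceResult:
    name_prefix: str     # e.g. "ccnx:/example/video"
    path: List[HopReport] = field(default_factory=list)  # forwarding path

result = ContraceResult(
    name_prefix="ccnx:/example/video",
    path=[
        HopReport("router-a", 2.1, False, 0),
        HopReport("router-b", 8.7, True, 42),
    ],
)
print(f"{result.name_prefix}: {len(result.path)} hops traced")
```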

How Do News Media Use Twitter to Attract Traffic?

Participants: Arnaud Legout and Maksym Gabielkov.

Online news domains increasingly rely on social media to drive traffic to their websites. Yet we know surprisingly little about how a social media conversation that mentions an online article actually generates clicks to it. Posting behaviors, in contrast, have been fully or partially available and scrutinized over the years. While this has led to multiple assumptions on the diffusion of information, each of them was designed or validated while ignoring this important step.

We present in [18] a large-scale, validated and reproducible study of social clicks (also the first dataset of its kind), gathering a month of web visits to online resources that are located in 5 leading news domains and that are mentioned in the third largest social media by web referral (Twitter). Our dataset amounts to 2.8 million posts, together responsible for 75 billion potential views on this social media, and 9.6 million actual clicks to 59,088 unique resources. We designed a reproducible methodology and carefully corrected its biases, enabling data sharing, future collection and validation. As we prove, properties of clicks and social media Click-Through-Rates (CTR) impact multiple aspects of information diffusion, all previously unknown. Secondary resources, which are not promoted through headlines and are responsible for the long tail of content popularity, generate more clicks both in absolute and relative terms. Social media attention is actually long-lived, in contrast with the temporal evolution estimated from posts or impressions. The actual influence of an intermediary or a resource is poorly predicted by its posting behavior, but we show how that prediction can be made more precise.
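
For clarity, the Click-Through-Rate notion used above can be illustrated with a short sketch: CTR relates the clicks a resource actually receives to the potential views of the posts that mention it. All numbers below are made up.

```python
# Toy illustration of the social media Click-Through-Rate (CTR).
def social_ctr(clicks: int, potential_views: int) -> float:
    """CTR = actual clicks / potential impressions on the social media."""
    return clicks / potential_views if potential_views else 0.0

# A secondary resource with few impressions can still outperform a
# heavily promoted headline in relative terms.
print(f"headline:  {social_ctr(9_000, 10_000_000):.4%}")
print(f"secondary: {social_ctr(1_200, 300_000):.4%}")
```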

ReCon: Revealing and Controlling PII Leaks in Mobile Network Traffic

Participant: Arnaud Legout.

It is well known that apps running on mobile devices extensively track and leak users' personally identifiable information (PII); however, users have little visibility into the PII leaked through the network traffic generated by their devices, and poor control over how, when and where that traffic is sent and handled by third parties. In this work, we present the design, implementation, and evaluation of ReCon, a cross-platform system that reveals PII leaks and gives users control over them without requiring any special privileges or custom OSes. ReCon leverages machine learning to reveal potential PII leaks by inspecting network traffic, and provides a visualization tool that empowers users to control these leaks via blocking or substitution of PII. We evaluated ReCon's effectiveness with measurements from controlled experiments using leaks from the 100 most popular iOS, Android, and Windows Phone apps, and via an Institutional Review Board approved user study with 92 participants. We show that ReCon is accurate, efficient, and identifies a wider range of PII than previous approaches.
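
To give an intuition of the machine-learning step, the toy sketch below trains a classifier on flattened request strings labeled as leaking PII or not, then flags new traffic. The features, model and training data of the real system are described in the paper; everything here is illustrative.

```python
# Toy sketch of the idea behind ReCon's detection step: learn to
# separate requests that carry PII from those that do not.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

requests = [
    "GET /track?email=alice@example.com&device_id=abc123",
    "GET /ads?imei=356938035643809",
    "GET /api/v1/weather?city=Nice",
    "GET /api/v1/news?category=sports",
]
labels = ["leak", "leak", "no_leak", "no_leak"]

clf = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(3, 5)),  # char n-grams
    DecisionTreeClassifier(),
)
clf.fit(requests, labels)

print(clf.predict(["GET /sync?email=bob@example.com"]))  # likely 'leak'
```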