MAT Working Group
26 October 2016

CHAIR: Hello, and good afternoon. And welcome to the MAT Working Group.

Just a little bit confused... can we get the introduction slide on the screen?

Anyway, thank you and welcome, we are starting out ‑‑ here is the agenda, that's the one I wanted to see.

The first agenda point is the scribe, the stenographer and the Jabber monitor, and we have them sitting over here ‑‑ it's actually very difficult to see any participants from here ‑‑ somewhere in the dark, waving. Excellent. Thank you very much for helping us out.

I'd like to remind you of the microphone etiquette, in particular in this room. We're going to have a bunch of presentations and the speakers will take questions afterwards, and here, as you know, we do not have a microphone to line up at, so Christian will point at you when you are required to speak. Raise your hand if you want to pose a question, he will keep the queue, and when he points to you, that's the time when you are allowed to press the speaker button on your microphone. Please state your name and affiliation, and remember to be respectful of the content that's going on, the people in the room and the people you are talking to.

Any questions for this? No, excellent. I hear that people are very good at using this room so I hope we are going to do the same thing in this Working Group.

We have put the minutes from the last meeting on the website. If anyone has any comments on those, now is the time to speak. No? That's excellent. And finally, I just want to remind you of the agenda, which is here.

And I'd like to point out that, as opposed to the first agenda we published, we have swapped the first two talks. It is online in this form now, but I just wanted to make sure that everybody was aware of it.

So, I'd like to introduce Anna, who is going to come up and talk a little bit about Wi‑Fi measurements.

ANNA WILSON: Thank you very much. I really appreciate the agenda being swapped. I am currently sort of skiving off my own Working Group who are in there and handling it perfectly without me. So I think I'll just step down. But I really appreciate it. I'm going to be here for the first little while and then I should go join them. But I wanted to talk about Wi‑Fi.

How is the Wi‑Fi this week? Is it good? Yes, pretty okay I'm hearing. Who has had no problems at all? Cool. Who has had a few problems? A few. And for whom is it just not working at all? No one. That's pretty good.

That's also, I think, in some ways, about as precise a measurement of Wi‑Fi as you can get sometimes. It's really hard to do. There are 18 access points in this room ‑‑ I asked during lunch ‑‑ there are another ten in the room next door and there are more all over the place, and it's really hard to measure this kind of thing well, because when you have a wired network, you control all the physics, you have all the counters, you can see what's going on. But in a wireless network there are all sorts of things going on that you just can't account for.

And there are certain things you can do. But you know, it's tough. And I'm interested in this, because this setup here is, in many ways, quite like the setup in a university.

There are sort of three conventional ways to work out how well Wi‑Fi is working, and how well your Internet connectivity in general is working. The first is the SmokePing kind of thing. This is about getting a historical perspective. The quality of this data depends on what you are pinging and how representative that is of the rest of your network.

The second is you turn the equipment itself into a probe, and this works in access points. There are certain controllers and certain hardware that will allow you to put the equipment into a diagnostic mode. We kind of do this as a matter of course with physical links and router counters; with access points you have to put them into a diagnostic mode, and that's annoying because you have lost that access point. There are 18 in this room, but now you have fewer of those actually working, and that can compound a problem. It also gets you only so much data. These things are pretty good at giving you things like RF quality, not so good at telling you certain other things. It's not the same as what the user sees.

The third, of course, is dedicated probes. Now we're talking. These are wonderful because you get really objective measurements, you know what you are getting is real and they are very reliable. But it's still not quite the same as what the user sees. We had, what, 400 people in this room yesterday; you can't put 400 probes in this room. Even putting in 18, there is a cost there, not just in terms of the probes but also in cabling and power and providing that to all the different places. So, there must be a better way. And we have an idea. We had this idea in GÉANT, the project that surrounds the national research and education networks in Europe.

And we're trying this idea out, and it's early days, but we're at the point where we think you might be interested as well, and it works like this:

Step 1, is rip‑off Geoff Huston. You can never go wrong by starting here. Geoff measures the Internet using browsers, people's browsers, mine and yours, he does it by taking out ads in Google and he measures the entire Internet like this. That's great. I don't want to do that, I am not interested in the entire Internet. I am interested in one room like this room. Or a campus or something like that. So I started wondering could we do something like what he does, but do it on a small scale and get something useful out of that.

What we are really interested in when we're talking about connectivity quality, for the backbone I think we know backbone quality very well, but we still find ourselves struggling to understand what it's like for the actual end user and what we're really interested in there I think is performance per access point. There are 18 APs in this room. Is yours working? Is yours working? Is yours working? Oh it's not. I wonder why that is, now we can look, that's what we're trying to find out.

So, how can we get that with browser‑based measurements? Well, we're looking really for three things. We want to do a performance test of some sort, as non‑invasive as we can. We want to know when that happened and we want to know which access point you are connected to. What can you get from the browser? Well, there are JavaScript tools for doing something like this; Boomerang is one, NetTest another. You can download them. They are usually used for checking the performance of your own website, but it's more or less the same thing at the end of the day. If you have a website which users frequently visit, for example the RIPE 73 website ‑‑ which we didn't put this on, by the way ‑‑ you can get the time stamp, a performance result and an IP address. But you can't get the ethernet details. It doesn't work. The browser won't let you get that.

However, in NREN land, in particular in universities and other higher and further education institutions, we have eduroam. This is basically a giant RADIUS network in some ways, and it allows me to go, for example, to Switzerland, and my HEAnet ID works there on eduroam, the same SSID, just as it does back at home. From those logs, we can get your IP address, when you had it, and which access point you are connected to. So we can put two and two together and get four, in theory. The question is how to really try this out.
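That "two and two together" step is essentially a join between the browser test results and the eduroam/RADIUS session logs, on client IP address and time. A minimal sketch in Python ‑‑ all field names and data here are illustrative assumptions, not a real log format, since every site's logs look different:

```python
# Hypothetical sketch: attribute each browser performance test to an
# access point by matching it against a Wi-Fi session that covers the
# same client IP at the same time. Field names are illustrative only.
from datetime import datetime

def find_access_point(test_ip, test_time, sessions):
    """Return the AP MAC of the session covering test_time for test_ip.

    `sessions` is a list of dicts with 'ip', 'ap_mac', 'start' and
    'stop' ('stop' may be None for a still-open session).
    """
    for s in sessions:
        if s["ip"] != test_ip:
            continue
        if s["start"] <= test_time and (s["stop"] is None or test_time <= s["stop"]):
            return s["ap_mac"]
    return None  # could not correlate this test with any session

def correlate(tests, sessions):
    """Attach an 'ap_mac' field to each browser test result we can place."""
    out = []
    for t in tests:
        ap = find_access_point(t["ip"], t["time"], sessions)
        if ap is not None:  # keep only tests attributable to an AP
            out.append({**t, "ap_mac": ap})
    return out
```

With results tagged per access point like this, the throughput and latency plots can then be split by AP, which is exactly the view the talk goes on to show.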

I said we're not running this here. But we have run it at some events in the past. In fact, we have been working on this for a bit over a year now. We started this project just before TNC last year in Porto, and this literally happened a week before TNC, which is a conference roughly the same size as RIPE. I mailed the organisers and said, I have got some JavaScript, could you put it on your website please? And they said yes, which I wasn't expecting, and then I had some work to do ‑‑ a lot, actually ‑‑ but it meant we were getting performance tests. They were triggered only to work from the site itself, triggered by IP address, because we weren't interested in, you know, someone on a mobile phone three countries away; we didn't want that test. Then we were able to get our hands on the eduroam logs there and correlate not all of those connections, but quite a lot of them, with the access point they were connected to, and it looks like this. This was the download speed for those tests; we were able to identify which access points they were on, and this is it split by access point. Now we have real data, so we can start going room by room or access point by access point, or other things as well.

The first thing I noticed was that this is a really pretty wide spread. This was specific to the Plenary room at that conference. Some connections are nice and fast and some of them are really slow and there is something really wrong here, but there are no patterns to pick out there. You can't draw a line through that. I was kind of disappointed with that, actually. But we also measure latency, and here we start to see some patterns. This is one of the rooms, the exhibition hall, over the course of one day. And one part of the pattern that's really clear is this: there is a giant 40 millisecond gap going on here. What's going on? The answer being that the server running the tests was on the other side of the continent ‑‑ it was in Greece, while we were all in Portugal ‑‑ and that was the delay between there and the server. It was showing up very clearly. But there was something even more interesting that we noticed. There are ten different access points measured in this data set, only three or four of them are really getting a lot of use, and we have a few of these outliers at the top where there seems to be some quite slow going. And if we take out one of those access points, almost all those outliers go away; there was something going on with that access point. Now, maybe it was overloaded, maybe we just happened to be picking up more data from it and therefore we were seeing more good data and bad data; it could be either, we don't know. But that's the kind of thing where, once you identify that this is going on, you can go in and investigate with more conventional means.

So, that was a lot of fun, and I literally spent the first two or three days of that conference writing some Python to try and parse logs. These graphs are just Excel. I literally loaded a spreadsheet and started playing around in Excel and pulled out whatever data I could in the course of a day; it's amazing what you can pull out like that. We want to get a bit more serious about it, and what we have been working on is how to make an architecture out of this that would work in the general case.

And what we have come up with, and what we're now trying out, looks a bit like this. You start with your access points or your Wi‑Fi controller, and you are getting a tonne of different data from that; the really important thing is you are getting the layer 2 to layer 3 address binding in some way. You want to go from the IP address of the user to the access point to which they are connected. That's what you need to get. And from the mobile clients themselves ‑‑ laptops or phones ‑‑ you want to get them to run the test in the browser. If they visit your university website or conference website, you embed a little bit of JavaScript in there. All it does is download an image and tell you how long it took to download, or what the latency was in doing that. You don't care about who they are, except inasmuch as you can work out what access point they are connected to. We want to suck that data back into a database and analyse it. That needs a user interface in order to see the real‑time stuff, but also, for historical diagnosis and historical trends, you want to be able to run reports and queries off it.

So we have tried this now in a few different places and at a couple of different conferences, and we're trying to build the real thing now in Dublin City University. In many ways, that sort of environment looks a lot like here. They have more access points ‑‑ like 800 access points across multiple different campuses ‑‑ but you have lots of people in individual large rooms, for example, and then a bunch of people scattered in corners who are working, and they use FreeRADIUS for authentication. So we didn't do this for everyone; we just got a few people who we know to run some test clients, basically: walk around the university in different places, visit this web page, run the test, and see what came back.

And we started to get results. The raw results look like this. You are able to get, obviously, the date stamps, download rate, upload rate and some sort of latency. The important one is on the right‑hand side: what's the MAC address of the access point you are connected to? The interesting thing for us, though, is how to analyse this. That's just raw data, and we're still quite rudimentary at this point. This is why I am coming here: we need as much operational experience as we can find to try and refine this. Right now it looks like this. This is throughput, basically a raw throughput graph. We can split it, even just for giggles ‑‑ not that I expect there'll be a real difference here ‑‑ by browser; this is small‑number statistics, so I don't want to stand here and say Microsoft Edge is the best browser. But it gets really interesting when you split it down by access point, and this is where you can start to pull out real trends and real data, I think.

Again with downloads, like we saw at TNC, there is a spread there and not a lot to see. When it comes to latency there are obvious patterns and outliers, and we think we can start to analyse this.

There are four parts to it. The scripts are fairly easy. You need to set cookies; that is how we make sure that if somebody is visiting many pages on the same website in succession, we don't run the same megabyte download every time. We set a cookie to make sure the test isn't triggered again for another hour or whatever. The logs you get depend on the site. Everyone's Wi‑Fi setup is different; how do we cope with that? And of course, if the site you are on is running NAT ‑‑ which many universities do, many don't, and almost the entire rest of the world does ‑‑ then you have to find a way to get those measurements and still track the IP address. You might need to put the measurement server inside the campus or whatever site you're at.
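The cookie throttling described here is simple enough to sketch. This is a hypothetical server-side helper in Python ‑‑ the cookie name and the one-hour interval are assumptions for illustration, not the project's actual values ‑‑ deciding whether to embed the measurement script in a page:

```python
# Illustrative sketch of per-client test throttling: only trigger a new
# browser test if the client's last test was more than an hour ago.
# Cookie name and interval are hypothetical.
import time

TEST_INTERVAL = 3600  # seconds between tests per client (assumed)

def should_run_test(cookies, now=None):
    """Decide whether to embed the measurement script in this response."""
    now = now if now is not None else time.time()
    last = cookies.get("last_perf_test")
    if last is None:
        return True
    try:
        return now - float(last) >= TEST_INTERVAL
    except ValueError:
        return True  # malformed cookie: run the test and reset it

def updated_cookie(now=None):
    """Cookie value to set so the test is not retriggered for a while."""
    now = now if now is not None else time.time()
    return {"last_perf_test": str(now)}
```

In a real deployment this check would sit in whatever serves the page (or in the JavaScript itself, reading `document.cookie`); the point is just that repeat page views within the interval cost the user nothing.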

And of course, we need to get privacy right. This, obviously, is the most crucial thing of all. On the one hand, we're not actually collecting any new personal data here. All we're adding is: here is how your browser performed at a particular time. And the data about which access point you are connected to is already collected; those logs are routine. But they are really sensitive. You do not want to mess with these things. You do not want these things leaking out all over the place. It's very, very important to get this right. And we are analysing existing data in a new way, and that rightly gives you pause: are we doing the right thing here? It's all fine when it's just me and Excel and a laptop and the data isn't leaving that, but we need to make sure that anything we build properly reflects the users' expectations and any privacy rights they may reasonably expect to have, to make sure the data isn't going anywhere it shouldn't and isn't being revealed to anybody it shouldn't. If you want to extract something, do it on site; we don't want to move the actual logs off campus.

So, that's kind of where we are. There's something really interesting going on here; I think this has legs. We have shown we have the mechanisms to get performance data and the mechanism to separate it out by access point. There are still some things we need to prove, and in particular we need to get some better analysis, and we're going to need a lot more data to do that. That's why I'm here. If this piques your interest and you have an environment which either is like a university campus, or could be stretched to seem a bit like it, and you think you might like to run this ‑‑ or you have a place where people are visiting the same website fairly often ‑‑ we have a bit of code you can run and you can get the same kind of results we got. We have the project itself, for people like me who are responsible for generating deliverables and meeting deadlines and things. And we have a mailing list, just for people who want to try this casually and see how it goes, and who maybe are able to contribute in an Open Source way, or even just in a "here is the data we were able to gather" sort of way. We don't have a proper website. But if you drop me an e‑mail I will point you at the code, or put you on the mailing list, or both. Just let me know which of these you'd like and I'd be delighted to put you in touch. Thank you very much.


CHAIR: Thank you. Time for questions. No one? No, well thank you very much Anna, it was really, really interesting.


Okay. So we're ready for the next speaker, and this is Luuk who is going to talk about the IPv6 extension headers. Thank you.

LUUK HENDRIKS: Hello everyone. I am Luuk, from the University of Twente, and I will start with some assumptions. The fact that you are not currently at the IPv6 Working Group means that you feel confident working with v6, and the fact that you are at this Working Group means you do measurements on v6, right? So everybody in this room is measuring v6. Okay.

Now, unfortunately, I will add another assumption, which is that at least some of you who use flow‑based technologies to measure things have some inaccuracies in your measurements, and I will tell you what's wrong.

Flow‑based measurements are based on aggregation, right? We see multiple packets passing through one point and, based on some specific fields in those packets, they are grouped together. Basically, we do this for a classic 5‑tuple; that's how most people define a flow. We have a source IP address, a destination IP address, a source port on layer 4, a destination port and a protocol, so all the packets that feature the same values for these five fields are grouped together, they get exported by the exporter, and we see the total number of bytes and the total number of packets for that flow.

Now, it works beautifully, right? This is an IPv6 packet and I tried to highlight these five fields here ‑‑ I'm not sure if you can see it, but at the top there is the next header field, which is TCP, there are a source and destination address, and a source and destination port. Now, if we add extension headers to this thing ‑‑ the extension headers introduced in v6 ‑‑ we directly see the problem. At the top of the packet, you don't see TCP; you see protocol 44 in this case, which is the IPv6 fragment header, and where previously the TCP header started, we now first get something belonging to protocol 44. We get some fragmentation information, and in this case the first field of that information is the next header, which is TCP. We get some other fragmentation information, and after that the actual TCP header starts, which contains the source port and the destination port.

So, what you can get from this is that all the information for the classic 5‑tuple is in the packet, but it's not in the traditional position in the packet. And forwarding devices and security devices have seen this as a problem, because you actually have to traverse all these next headers: where TCP is the first next header here, it might be another extension header, so there might be three or four extension headers, and you have to traverse this entire chain of extension headers before you get to the upper‑layer protocol.
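The chain traversal described here can be sketched in a few lines of Python. This is an illustrative parser, not the talk's actual implementation; it assumes a well-formed, unfragmented first fragment and handles only the common extension headers, with offsets as defined in RFC 8200:

```python
# Sketch: walk the IPv6 next-header chain to find the upper-layer
# protocol and its ports, as a flow exporter would have to do.

EXT_HEADERS = {0: "hop-by-hop", 43: "routing", 44: "fragment", 60: "dest-opts"}

def parse_chain(packet):
    """Return (extension_header_list, upper_proto, src_port, dst_port).

    `packet` is a raw IPv6 packet as bytes. Ports are only extracted
    for TCP (6) and UDP (17); otherwise they are None.
    """
    proto = packet[6]          # Next Header field of the fixed IPv6 header
    offset = 40                # the fixed IPv6 header is 40 bytes
    chain = []
    while proto in EXT_HEADERS:
        chain.append(proto)
        next_proto = packet[offset]    # first byte of every ext header
        if proto == 44:                # fragment header is a fixed 8 bytes
            length = 8
        else:                          # Hdr Ext Len: 8-octet units, excluding the first 8
            length = (packet[offset + 1] + 1) * 8
        proto, offset = next_proto, offset + length
    if proto in (6, 17):               # TCP/UDP: ports are the first 4 bytes
        src = int.from_bytes(packet[offset:offset + 2], "big")
        dst = int.from_bytes(packet[offset + 2:offset + 4], "big")
    else:
        src = dst = None
    return chain, proto, src, dst
```

This is exactly the work that the naive exporter skips: it reads only `packet[6]`, sees 44, and gives up on the ports.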

So, statement: when you are using flow‑based measurements and you measure v6, the extension headers will hide some information from you if you don't take them into account.

So, an example. This is an actual flow from our university network (apart from the addresses, which are rewritten into the db8 documentation prefix): fragmented IPv6, protocol 44. We see the source address, we see the destination address; there is nothing new there. But you can also see some strange things, right? The ports are zero. That's easily explained, because the exporter didn't see a valid port in the place where it expects the port; this is because of how the packet is constructed now. We first have the fragmentation information, after that the actual TCP information, so the exporter chooses the easy way and exports zero for both ports. But there is something more: the aggregation is also incorrect in this case. The ports are still used to aggregate packets. So if you aggregate on source port 0 and destination port 0, along with the other three fields of the 5‑tuple, you will group all the packets that are fragmented and go from that source to that destination address. So, it says there were eight packets; that's likely to be correct. There are roughly 10,000 bytes; that's likely correct. But these didn't necessarily belong to one single flow. This points out two problems.

Some of you might be interested in this actual upper layer stuff. I want to get all the details of my flows, right. Then you are likely to be interested in these upper ports, which are still 0 here. On the other hand, if you wrote some algorithms to spot big flows, well you get ‑‑ well, a relatively big flow here, but it's not one single flow, right. So in both situations, you get wrong information.

So, what's all hidden behind these extension headers? We saw the upper‑layer protocol, the destination port and the source port, but also all the extension headers after the first one. So, if there's a chain of extension headers and you want to filter some of them out, or you want to measure some of them, you have to traverse this chain; if you want to say something about them, you have to get them out of the packet and export them all. But right now you only see the first extension header.

Furthermore, this aggregation thing is a horrible thing, so, your counts, your aggregation counts are wrong as well.

So how can we fix this? Some research questions, how can we get this information? It's quite easy, it's all in the packet right. Just get it out of there. Okay, what do we exactly need? We'll get back to that. How can we fix the aggregation? That's also quite easy, we just have to change the key, right. Which fields do we want to aggregate? We already need to get them for the first question here, so we just have to decide which one do we add to the aggregation key? Do we need to change things on the collector side? Maybe, it depends on your software so I won't go into that.

We implemented this in a FlowMon plug‑in. FlowMon is an IPFIX/NetFlow probe or exporter, however you want to call it, and it allows for plug‑ins. We wrote a plug‑in that exports, amongst other fields, the upper protocol field and the source and destination ports belonging to that upper protocol, but also information about the extension headers, namely the full list of extension headers and the total size of these headers.

Now, the total size is not that interesting for what I'm telling here today, but that's more from a security perspective, because there is a lot of forwarding devices that can not handle a large number of bytes in extension headers.

We adapted the cache key: we add the upper protocol and the upper source and destination ports if we find an upper‑layer protocol in the packet. So, we either use the traditional 5‑tuple or, in the case of an upper‑layer protocol after some extension headers, we use the three fields that you see here.
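The adapted cache key can be sketched like this, in Python; the field names are illustrative, not the plug-in's actual ones. The point is that two fragmented packets with different upper-layer ports now land in different flows instead of being merged under port 0:

```python
# Sketch of the adjusted flow aggregation: when an upper-layer protocol
# was found behind the extension headers, key the flow cache on that
# protocol and its ports instead of the (zeroed) outer values.

def flow_key(pkt):
    """Build a flow cache key from a parsed-packet dict."""
    if pkt.get("upper_proto") is not None:
        # extension headers present: use the real upper proto and ports
        return (pkt["src_ip"], pkt["dst_ip"],
                pkt["upper_proto"], pkt["upper_src_port"], pkt["upper_dst_port"])
    # otherwise, the classic 5-tuple
    return (pkt["src_ip"], pkt["dst_ip"],
            pkt["proto"], pkt["src_port"], pkt["dst_port"])

def aggregate(packets):
    """Group packets into flows, summing packet and byte counts."""
    flows = {}
    for pkt in packets:
        flow = flows.setdefault(flow_key(pkt), {"packets": 0, "bytes": 0})
        flow["packets"] += 1
        flow["bytes"] += pkt["length"]
    return flows
```

With the naive key, every fragmented packet between the same pair of addresses would share the key `(src, dst, 44, 0, 0)` and be counted as one big flow.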

So, testing it out. We deployed this thing at CESNET, the NREN in the Czech Republic, at ten links which were all monitored by FlowMon probes. This was in May 2016 and we only collected IPv6 flows. The measurement was unsampled. IP addresses were anonymised for obvious reasons, and we used one single collector to collect all of that stuff and to analyse it.

Now, as with collecting normal flows without this plug‑in, you see everything, so you can analyse this in 1,000 ways, but I will try to drill down to one example that I hope makes sense to most people in the room at least. First, some overview.

As you see, we collected 4,000 million flows, which in most languages is 4 billion, I think, and we see the obvious big players here: TCP, UDP and, because it's v6, ICMP, and an aggregated category here, "other", which is 0.5%. So, just to put you at ease: I assumed your measurements were incorrect, but it all falls into this 0.5% category, so don't throw out your measurements just yet ‑‑ but bear with me.

If we look into that 0.5% and we only look at flows that actually have an upper‑layer protocol, we see that the lion's share of that, almost 70%, is indeed fragmented v6, like in the earlier examples. The other big player is Hop‑by‑Hop Options, protocol number 0, and there is another aggregated category of 0.0‑something percent, "other".

Now, we go into this fragmented part, and this is a distribution of TCP ports, which we can now get from our measurements. Previously this would all be 0; now we have the actual upper‑layer source and destination ports, and what you can see here, for example, is that there is a lot of traffic ‑‑ actually 86% ‑‑ coming from port 53 over TCP, but fragmented. So if you are configuring your DNS software to do a TCP fallback and, at the same time, you are a bit scared of IPv6 fragmentation and are dropping everything fragmented for v6, you will directly influence the quality of experience of users of the DNS. This is something you couldn't get from the previous measurements, but now it's quite obvious.

So concluding: Well the share of flows with extension headers is not that big. The actual payload might be very interesting for you. It depends on what you do. Like I said, some of them are really directly related to quality of experience, for example the DNS. And as we have seen before with forwarding devices, and security devices, also our measurement technologies should pass and traverse this extension header path all the way to the end to get correct and therefore realistic results.

Last slide. A big thank you to Peter from CESNET ‑‑ with the best last name in Internetworking that you can have ‑‑ and a shout‑out to some Open Source software which is also written by Peter; it's listed here and these are very useful and cool tools. That's it from me. If you have any questions, I am happy to answer.

AUDIENCE SPEAKER: Hi, I am Daniel, I am also a measurement guy. I wondered what kind of performance impact you had when you actually do this stuff which in fact is sort of variable length headers because you have to skip across multiple extension headers potentially. So, what does that do to the performance of your flow capturing stuff?

LUUK HENDRIKS: That's a good question. Our plug‑in, but also the entire probe, is software based. That means that it's ‑‑ well, likely slow to begin with, if you get me. On the other hand, it makes it easy for us to develop the plug‑in. But in the end it didn't matter so much for us, because we were able to filter in hardware what was going to the software. I have to admit that we didn't look into the performance impact of these things. The best I can say is that none of the software crashed, which is a good thing, right, especially for an academic, but I have no hard numbers on performance impact in terms of CPU or RAM, if that's what you're after.

DANIEL KARRENBERG: A short additional question: did it have any influence on the packets you lost, I mean, that you didn't include in your aggregation?

LUUK HENDRIKS: You mean packets that came from the exporter to the ‑‑

DANIEL KARRENBERG: Usually these things give you an idea how many packets were not captured, right. Were not aggregated. Because, the thing was too slow. That's what I mean.

LUUK HENDRIKS: That's not something I am familiar with. That can either be a good thing or completely not. But as far as I know everything was included in the aggregation, even though it's unsampled.

AUDIENCE SPEAKER: Hi Luuk. Benno. I'm really curious about the fragmented IPv6 traffic you found on the Internet, because I know that a large ISP in the Netherlands told me four or five years ago that they would block every IPv6 fragment on their networks. And I also know of a large global CDN that doesn't accept any fragmented traffic. So the traffic you observed, can you also relate that to DNS, because you mentioned DNS traffic being fragmented over IPv6 ‑‑ was it really DNS traffic, and can you also say a bit about the flows?

LUUK HENDRIKS: Whether it was actual DNS traffic: I have the same info for the UDP upper protocol, upper destination port and so on, and there it is even more than 86% which is port 53. I cannot say for sure whether that's DNS, because I don't have payload information ‑‑ it's flow based, right? I can only assume so much. To get back to how you started your question, that was an actual reason to do this research: operators and ISPs are scared of fragmentation, but do they know what's in there? Maybe they do, maybe they don't. Maybe there is a configuration error somewhere, so there was a reason to do this. There was another question you had about the flows themselves.

AUDIENCE SPEAKER: No, no, you answered all my questions. Thank you.

CHAIR: Well, thank you. You are all done. That was interesting.


And the next point we have is the RIPE Atlas update. So welcome Rob.

ROBERT KISTELEKI: Thank you. This time I'm going to give this talk instead of Vesna, so it will not be as nice looking, but please bear with me.

The RIPE Atlas update. The number of probes is floating around 9,400; there was a slight slowdown recently due to the USB issues that we have, which we are trying to address both on the software and on the hardware side, so we hope that this number will start increasing again very soon. We are covering about three and a half thousand ASes in the v4 space and slightly fewer in the v6 space ‑‑ v6 is still slightly smaller than v4, interesting numbers. We collect about 4,000 data points every second, which aggregates to roughly 350 million data points a day. You can imagine that's quite a challenge. At the moment, we have about 6 million measurements in the system over the six years of the existence of RIPE Atlas, obviously more measurements recently. And I just got this number from the back end guys: we have about 100 terabytes of data, compressed, so if you look at the raw size that's about 20 times as much.

We have 362 RIPE Atlas ambassadors, and that's highly useful in order to spread the word all over the world. Vesna reports we have almost 1,600 followers on Twitter, 30,000 users in total, and the number of people that we can consider to be active users in the last quarter is about 5,000 on the website itself. Subscribers on the mailing list: almost 1,000 people. We have had six Atlas sponsors in 2016 so far. If you would like to sponsor RIPE Atlas this year or next year, please come and talk to us.

Anchors. As you probably know, an anchor is both a probe ‑‑ a bigger, more stable probe, if you will ‑‑ and a willing target. If you run an anchor, it automatically starts being measured from a whole bunch of other probes, and the anchors also form a mesh between them. It also generates more credits, partially because it's doing more. At the moment we have 224 anchors; the number has been increasing ever since we introduced them three or four years ago, so it's pretty good.

Some of you participated in the pilot programme with Dell boxes; those are being replaced now, and we are starting to approach the lifecycle replacement for the anchors as well. This tells us this has been going on for some time. And we would like to thank the sponsors for the RIPE Atlas anchors as well, you can see that they are APNIC, LACNIC, ISOC and AfriNIC; they really help spreading the word and they put money in to place anchors in various regions. If you'd like to sponsor these we would really love to talk to you; Michela is somewhere in the audience, please talk to her.

Some use cases. You can read all of these on RIPE Labs, and some of them really make me very happy because they were built or written by community members when they figured out that they had a problem they wanted to solve, and Atlas was the tool that actually helped them solve it. Some upcoming changes ‑‑ these are bits and pieces you should be aware of. One is that we are changing the APIs. We advertised this at the previous RIPE meeting as well. The Version 2 APIs are official; the other ones have been deprecated since May, and we'd like to switch them off at the end of the year at the earliest. Please switch if you haven't yet.
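For anyone still on the old endpoints: in client code the switch largely amounts to building v2 URLs. A minimal sketch; the measurement ID is invented, and the start/stop timestamp filters follow the v2 results endpoint:

```python
# Sketch: building RIPE Atlas REST API v2 URLs (the v1 paths are deprecated).
API_BASE = "https://atlas.ripe.net/api/v2"

def measurement_results_url(measurement_id, start=None, stop=None):
    """Return the v2 results endpoint for a measurement, with optional
    UNIX-timestamp start/stop filters."""
    url = f"{API_BASE}/measurements/{measurement_id}/results/"
    params = []
    if start is not None:
        params.append(f"start={start}")
    if stop is not None:
        params.append(f"stop={stop}")
    if params:
        url += "?" + "&".join(params)
    return url

print(measurement_results_url(5001, start=1477440000))
```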

We have worked a bit on API key flexibility. We got input from you that it would be easier if you could say I want this key to be able to do this and this and this instead of having a pre‑defined selection. So we did that.

What could be interesting for the hosts is a feature we introduced, I think, two or three months ago, where we give extra credit for each and every result that the probe actually delivers to us. So, in some cases that gives like 10% extra, on an anchor it could be 100% extra, and I think anchors involved in DNSMON get something like 1,000% extra.

We introduced Internet topology measurements, so every probe has a built‑in measurement now, targeting a quasi‑random IP address, if you will, which happens to be the .1 in each and every routed IPv4 prefix, and the ::1 in routed IPv6 prefixes. At the moment we are doing this with relatively low frequency. If the community thinks this has value, we can increase the frequency at basically no cost to the system or to the hosts. For us it will mean way more measurements coming in, of course, but that's kind of the point.
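The target selection described here ‑‑ the .1 inside every routed IPv4 prefix, and the ::1 inside routed IPv6 prefixes ‑‑ can be sketched in a few lines; the prefix list below is an invented sample:

```python
import ipaddress

# Sketch: derive the built-in topology measurement targets from a list
# of routed prefixes (sample prefixes only, not a real routing table).
def dot1_targets(prefixes):
    """Return the first usable-looking address (.1 / ::1) per prefix."""
    targets = []
    for p in prefixes:
        net = ipaddress.ip_network(p)
        targets.append(str(net.network_address + 1))
    return targets

print(dot1_targets(["192.0.2.0/24", "198.51.100.0/25", "2001:db8::/32"]))
```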

At the moment we are working on continued stability. As you can see we have, I don't even know how many zeros I would have to put number of results stored in the system, the number of measurements is increasing and so on and so forth so that's quite a difficult task.

We are working on a proposal for VM probes. We heard from the community that it would be interesting to run probes in a VM environment. We will make a proposal and then the community will tell us whether that's actually a good way to go or not.

OpenIPMap ‑‑ I am hoping that all of you know what it is. It's our tool to geolocate infrastructure components. We are actually working on productising this, making a properly supported product of it.

Traceroute visualisation: I have noticed that RIPE Atlas visualises almost everything nicely except traceroutes. That's a gap we want to fill, and that image is trying to show how it's going to look.

A word about Wi‑Fi measurements, this is coming up. We haven't started yet, but technically the system is now capable, and we are working together with GÉANT to do this for Eduroam. What it is not is a means of measuring your own home Wi‑Fi connectivity and quality. That's not the intention. The intention is to support measurements of well‑known and spread‑out networks, for example Eduroam. The hosts of the probes will have to opt into this, so we're not going to turn on Wi‑Fi in your home. It's active only, so it's not scanning the available Wi‑Fi networks. And the reason why we are doing this is because via Eduroam we have the potential to extend the Atlas network into networks where we are not present yet. That's the benefit to the Atlas project and, via that, to the community.

A word about public measurements. Last week, I think, we published an article on RIPE Labs with statistics on this subject, and it seems like some of you out in the community would like to keep on doing these so‑called non‑public ‑‑ or, if you want, private ‑‑ measurements; we want to stick to the non‑public term, for various reasons. Now, the real value of Atlas here is to have open data, so we were wondering whether this is still something that we should continue doing or not. If you want to read up on the details, that article is there and you can even vote.

Almost finally, we are thinking about putting out some kind of communiqué about measurement ethics surrounding RIPE Atlas. What we have seen is that more and more people are using RIPE Atlas to look into interesting use cases like censorship: if a particular country shuts down the Internet, how much do we know about it? What can be measured? And what we would like is for the people who are doing these measurements to think about the ethical considerations of such measurements. You don't want to put any host in any kind of danger, and if you are not careful, what you think is okay to do at home might not be okay in some jurisdictions. So, think about it. There will be more communication about this later on.

And finally, I have this slide: this time we also had the hack‑a‑thon the weekend before the RIPE Meeting. It was not about Atlas, but some of the hack‑a‑thon participants actually used Atlas data as well, and I understand that some of the outcomes have been presented already, I think in the Connect Working Group, but there will be more. If you are really interested in the details, Vesna is collecting all the code, all the tools, everything, and she is publishing that on GitHub.

AUDIENCE SPEAKER: Hi, I am representing myself. Just interested in the progress on the magically dying USB sticks, and if that's going to happen in the VM world as well. Okay, the last was a joke but...

ROBERT KISTELEKI: We have published, I think, two or even three articles about this subject, where we tried to measure the extent of this problem, what it means, and what could be the causes for it. We suspect that it is caused by power supply flakiness, which may affect some countries more than others. So, the way we are addressing this is twofold. One is that we are enhancing the firmware on the probes to be able to deal with reality better ‑‑ it just happens, and there's not much we can do about corrupted file systems, for example. So the software is behaving better than it used to, and we are hoping this is going to make a difference. The other course of action is that we are looking for next generation probes that do not have a USB stick in them, and the expectation is that they will be more stable. When we did these measurements ‑‑ you can read up on the RIPE Labs articles ‑‑ the problem exists, we have to recognise that, but on the whole it doesn't seem to be too bad. It does affect some of the probes, we recognise that.

AUDIENCE SPEAKER: Hi, Robert. Sebastian. The Internet topology discovery measurements, how frequently are they happening now?

ROBERT KISTELEKI: At the moment, every probe does it either every ten or 15 minutes, so they pick an address and then they just traceroute it. We could squeeze that down to, say, one per minute if we wanted to. And I think they do one on ICMP and one on UDP, so maybe twice in the ten minutes.

AUDIENCE SPEAKER: And so what's the time frame to make that available?

ROBERT KISTELEKI: It's already up. If you go to the documentation, it should have an entry that says built‑in measurements to random targets, and it gives you the ID, and you can go and download the data set.
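For reference, pulling the hops out of one downloaded result might look like this; the JSON layout follows the documented Atlas traceroute result format, while the sample datagram itself is invented:

```python
import json

# Sketch: extract one address per hop from a RIPE Atlas traceroute result.
# Unresponsive hops appear without a "from" field and become "*".
sample = json.loads("""
{"msm_id": 5051, "prb_id": 1, "result": [
  {"hop": 1, "result": [{"from": "192.0.2.1", "rtt": 1.2}]},
  {"hop": 2, "result": [{"x": "*"}]},
  {"hop": 3, "result": [{"from": "198.51.100.7", "rtt": 9.8}]}
]}
""")

def hop_addresses(result):
    """Return one address per hop, '*' where no reply was seen."""
    hops = []
    for hop in result["result"]:
        replies = [r["from"] for r in hop["result"] if "from" in r]
        hops.append(replies[0] if replies else "*")
    return hops

print(hop_addresses(sample))  # ['192.0.2.1', '*', '198.51.100.7']
```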

CHAIR: Thank you very much Robert.


And next one up is Remco, who is going to talk about Nanny Filters.

REMCO VAN MOOK: Hi, I'm Remco. And I have a problem. And this is ‑‑ I know this is not the AA meeting, but still...

And the problem is Atlas and nanny filters. The start of this is that a lot of people think that the answer to every network engineering problem in the world is the DNS, which may or may not be true. And these engineering problems include stuff like content filtering requirements set by your Government, mail server blacklists, customer preferences, or vendor 'improvements' that all impact the answers you are getting back from your DNS recursor. Whether that's your friendly local neighbourhood Government saying you are not allowed to watch this because it's illegal, or 'we have told your ISP that this is not allowed'; or I am actually a customer of an ISP that allows me to configure that certain types of content are not accessible in my house; or it's the CPE ‑‑ there are a couple of brands out there that will happily hijack some domain names for themselves because they think that's where the configuration page belongs.

Now, that's all fine until you start hosting Atlas probes behind such connections, because if you use Atlas probes for DNS stuff, you might run into those things, and you get all these weird results where you have no idea what's going on. This was actually triggered by a RIPE Labs article about somebody doing some DNS analysis who, I think, found some weird stuff in Iran ‑‑ and what could he do about it? So, I was thinking that maybe ‑‑ so these are the tags associated with the Atlas probe sitting in my house; it's a really old one, yes, and it doesn't have USB stick issues ‑‑ we could consider adding another system tag that says: well, you want to be careful analysing the DNS answers that this probe is going to give you, because they are being influenced by something other than the originator of those zones.

How do you do this? Well, like any network engineering problem, you can use the DNS for that. You can ask the people writing this DNS recursor software to include some beacon DNS entries ‑‑ if the software is going to replace answers, it can certainly include beacon entries as well. Or checking for known bad names, although, I don't know, I don't think every country in the region likes people looking up certain names, so that's probably not a good idea for a system check within Atlas.
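One way such a tag could be derived, sketched with invented probe IDs and addresses: compare each probe's answer for a beacon name against the consensus answer across probes, and flag the outliers:

```python
from collections import Counter

# Sketch: flag probes whose DNS answer for a beacon name diverges from
# the consensus. Probe IDs and addresses are invented sample data.
answers = {
    1001: "203.0.113.10",
    1002: "203.0.113.10",
    1003: "198.51.100.99",   # resolver rewrote the A record
    1004: "203.0.113.10",
}

def divergent_probes(answers):
    """Return probe IDs whose answer differs from the most common one."""
    consensus, _count = Counter(answers.values()).most_common(1)[0]
    return sorted(p for p, a in answers.items() if a != consensus)

print(divergent_probes(answers))  # [1003]
```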

And I am kind of intrigued about your thoughts on this. Is this something we should just ignore? Should we just stop using Atlas for DNS probes? Should we just accept that there is a lot of noise being created and increasingly added to DNS, or do we want to figure out a way to filter it out of our measurements? That's it.

AUDIENCE SPEAKER: Hi, Tim again. I previously ran a small Anycast network for DNS stuff. I found the Atlas stuff really useful, and yes, I saw these issues. I think such a tag would be really, really, really useful, and I'm pretty sure other Anycast operators will agree with that. Thanks.

AUDIENCE SPEAKER: Daniel Karrenberg, measurement guy. I guess what I'd be interested in is your definition of 'DNS tampered', or whatever it was, because I suspect that actually most of the locations we have Atlas probes at are in some way affected by DNS modification ‑‑ that's question one. Question two is: do we really need this? Because if you don't want the DNS influence, you can always measure based on an IP address.

REMCO VAN MOOK: So, yes, of course there is always the question: what counts as tampered with? Is adjusting a TTL tampering? Or ‑‑ well, you name it ‑‑ or is it actually changing an A record, or redirecting? I mean, you can certainly have a discussion about that.

The reason why you don't necessarily want to just look up an IP address is that figuring out what the world thinks of a certain domain name or host name can be very interesting ‑‑ for figuring out whether somebody is trying to hijack your website using DNS tricks, for example. If I go to my bank's website, my bank would probably want to know what the consensus answer in the world is about the IP address of that website. Which, I mean, you could do in a number of ways, but using Atlas for that kind of thing is probably very useful, and that's something you can't do by just looking up an IP address.

ROBERT KISTELEKI: First, from the RIPE NCC's point of view, the system tagging is extensible; if there's a good definition of what it should do, when it should and shouldn't tag things, that's definitely doable. All it really needs is a good definition of when you tag stuff like this. Hat off, so as an individual, I wonder why you couldn't just do this as a DNS measurement ‑‑ just measure what the A record is for your favourite name, whatever you want to know.

REMCO VAN MOOK: So, my problem is that, yes, if I start measuring for this specifically, I can find this out. If I don't measure for this specifically, I get all sorts of noise in measurements that weren't supposed to contain it. So, even if I don't consider that there might be people doing strange things with DNS recursive queries, I might end up with all sorts of strange answers that I can't make sense of. So actually being able to filter these out based on a system tag or user tag could actually help improve the results that I get out of my queries or the tests I am doing on Atlas.

ROBERT KISTELEKI: I guess it boils down to the definition again. So...

CHAIR: Okay? All done. Thank you very much, Remco.


And the next one is Sebastian, who is going to talk about country topology using Atlas data.

SEBASTIAN CASTRO: (Speaking Spanish.)

Mapping a country's Internet topology using RIPE Atlas. I am Sebastian Castro. We love this place, and although I have been to RIPE meetings before, this is my first presentation at a RIPE meeting, so I'm very excited.

So, the motivation for this work was originally to understand the Internet connectivity in New Zealand more clearly. Despite New Zealand being a small country and a small community, there were a few misunderstandings going around. So, we thought: we can do this better, and we can actually provide evidence to help discussions. We also help our policy people, who do the lobbying with the Government, by being able to tell them: these things are not that way, they are this way, and we can actually show you.

And also, we were interested in finding oddities and strange behaviour, like traffic intended to stay within New Zealand leaving the country, going to Australia or Germany or somewhere else, and in understanding the use and benefit of Internet exchanges.

So the goals for this work were, first, to create reproducible research, by making the code, the data and the methodology available. So if you are willing to go and do this for your own country, you can actually go and do it.

And also to generate a visual representation of BGP adjacencies, along with some analytics.

So, for the methodology of this work, we started with RIPE Atlas probes and tried to collect as many IP paths as possible, and we selected three different sets of destinations. The first set is the RIPE Atlas probes' public addresses in the country.

The second one is a list of curated popular sites. Like everyone else we use Alexa, although not very willingly, and discard some of the undesirable sites.

And finally, we check some data sets from Censys, which is a project from the University of Michigan, to detect active IPv4 addresses in the country's address space.

And then, using ICMP Paris, we do the traceroutes, as UDP traceroutes are not usually reliable, and TCP traceroutes might not be a very good option, as we learned during the weekend at the Atlas hack‑a‑thon while testing a different tool.
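For what it's worth, creating such a measurement through the Atlas v2 API is mostly a matter of building the right specification. A sketch, with an invented target and country; the field names follow the v2 measurement API:

```python
# Sketch: the specification we'd POST to /api/v2/measurements/ to run
# ICMP Paris traceroutes from probes in one country. Target and country
# are illustrative values.
def paris_traceroute_spec(target, country, probes=50):
    return {
        "definitions": [{
            "type": "traceroute",
            "af": 4,
            "protocol": "ICMP",   # ICMP Paris, rather than UDP or TCP
            "paris": 16,          # number of Paris flow IDs to vary
            "target": target,
            "description": f"topology trace to {target}",
        }],
        "probes": [{
            "type": "country",
            "value": country,
            "requested": probes,
        }],
    }

spec = paris_traceroute_spec("198.51.100.7", "NZ")
```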

So, one of the problems with IP paths and traceroutes is that you have to deal with incompleteness, because we are interested in getting the BGP adjacencies out of this. Nodes that don't respond to ICMP probing, also called star nodes, cannot be mapped to an ASN. Then there are private or non‑routable addresses, which are very common in some IXPs; at least in one of the ISPs in New Zealand there are a lot of addresses used internally that are not globally reachable. And Amazon's internal infrastructure also gave us a few problems. So we dealt with the incompleteness and completed the picture using information from PeeringDB.

PeeringDB works well for the developed countries, but I tried it with my home country and it couldn't find IXs there ‑‑ so, it usually works.

So, in order to deal with the incompleteness and fill up the path, we made a few assumptions. The main assumption is that the inter‑AS edges will always answer ICMP: if you are moving from one provider to the next, the edge of the next provider will answer you with ICMP. So, all the star and private nodes happen inside an AS. On this specific trace, you have one private address there, a star node there, another star node there, and you cannot derive an adjacency, so we assume all those belong to the same AS.
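The gap‑filling assumption can be sketched in a few lines: since an AS boundary is assumed to be a responding hop, any star or private hop is attributed to the AS of the last resolvable hop before it. ASNs and addresses below are invented sample data:

```python
# Sketch of the gap-filling rule: unresponsive ("*") or private hops are
# assumed to sit inside the AS of the last hop that mapped to an ASN.
def fill_unknown_asns(hops):
    """hops: list of (address, asn_or_None); return one ASN per hop."""
    filled, current = [], None
    for _addr, asn in hops:
        if asn is not None:
            current = asn    # a mapped hop starts (or continues) an AS
        filled.append(current)
    return filled

trace = [("192.0.2.1", 64500), ("10.0.0.1", None), ("*", None),
         ("198.51.100.1", 64501)]
print(fill_unknown_asns(trace))  # [64500, 64500, 64500, 64501]
```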

We're interested in doing this not only because of the BGP adjacencies; we also have plans to make a big, fully detailed IP path topology map. Anyway. The code is available on GitHub, it's there on our page, and it mainly fetches BGP data using BGPStream ‑‑ so thank you, CAIDA ‑‑ gets information from the RIRs and BGP data in order to find the active IPv4 address space, finds the sources and the destinations, schedules the traceroutes, collects the results and builds the visualisation. So generating a specific map might take two or three hours between all the steps.

So, a couple of results of interest for this audience. Here you see the IP topology map; it's available at that URL. Because of time, I cannot give you a demo, but you can go yourself and play around with it. For the colour coding: the red nodes are the Internet exchanges, the blue ones are New Zealand ASes, the yellow ones are the Australian ASes, green is any other country, and orange ‑‑ you don't see any orange here ‑‑ is the tier 1 providers.

A few data points about this map. We used 78 probes. We completed 32,000 traces, of which 68% actually reached the destination and 31% didn't. The IP path trace length is as small as 10 hops, plus or minus 4, and the most influential providers in New Zealand are actually Australian. You will ask why? Because of acquisitions. They used to be New Zealand providers, but they have been acquired by larger providers in Australia. There are a couple of new IXs in New Zealand ‑‑ MegaIX and Auckland IX ‑‑ which have more or less the same number of peers as the well‑established IXs, although they have only been around for a year‑and‑a‑half. And the big providers in New Zealand, Spark and Vodafone, don't peer with each other ‑‑ that's one of the oddities; they don't do peering, or they have different kinds of arrangements.

Also relevant to this audience, we have this Spain topology map, using 115 probes and 65,000 traces. The completion rate is slightly higher here, 72.27%. On this snapshot you can see the orange nodes, the presence of the tier 1 providers. And you see different IXs, not all of them based in Spain, and the trace length is slightly longer.

So, I showed this result to a couple of people from Spain, because I was expecting to see Telefonica as the big provider. But British Telecom and IZFE, which seems to be a regional inter‑connection network in Spain, are actually bigger than Telefonica in terms of peers. And the tool also identified the three IXs in Spain: Espanix, CATNIX and NIXVAL. The people from NIXVAL, which is a regional IX in Valencia, said: how did you manage to discover that by yourself? It's magic, you know.

So, I invite you to go to this site, see the visualisation, discover the information about peering and countries, and discover which addresses were involved in an edge. You can actually search for source and destination ‑‑ what AS path did I follow to go from A to B? ‑‑ and it will provide you that. So, it's pretty interesting.

We have to acknowledge the related work: CAIDA has done a lot of topology measurement and mapping, and there is my good friend Emile with the IXP Country Jedi. I submitted this work a year ago to the MAT Working Group, and they asked: do you integrate with IXP Country Jedi? I can now say: yes, we do. We all love the IXP Country Jedi, and because we have some commonalities in the methodology, we have code called 'export to IXP Jedi': you can take the configuration files from this tool and generate the same visualisation. We are also working on integrating the topology map into the AS graph of Jedi. That will happen soon enough.

So, this is like the Randy Bush slide ‑‑ because Randy is not around, people will say you have to include what the shortcomings of your methodology are. The bias in the Atlas probe locations is the crucial one, although in this case we have been trying really hard: I'm also an Atlas ambassador, so we hand out probes in New Zealand, and 78 for our very small country, you will agree, is a reasonable number. Also, not all destinations are covered. It's not trying to be exhaustive, and you can adjust it if you want to run fewer traces.

As for future work, the idea of link RTT estimation comes from a comment from Daniel Karrenberg: considering we have a large number of traceroutes, we might be in a position to realistically approximate the RTT of each link and use that for colouring the map. We also want to run the process regularly ‑‑ that's why I was asking Robert how frequently the topology measurements are running, because we can actually use that data, integrate it with this tool, and generate maps that change with time. And we want to make the data snapshots, at least for New Zealand, available for all of us to go and play with.

So gracias, and that's the end of the presentation.


CHAIR: Thank you. Questions? No. Okay. Thank you very much.

Okay. And we are now reaching the last presentation for today, which is not about Atlas, but it is going to touch on Atlas a little bit anyway, because these guys have been working on a measurement system as well, and they are looking at what they are doing and comparing it to the tool that we have seen used in a lot of presentations today. Thank you.

CHRISTIAN VARAS: Hello everybody, my name is Christian Varas, I work for Speedchecker Limited, and we also have a measurement system called ProbeAPI, which is basically a software based system. We did comparative measurements on both platforms, and we'd like to share a part of the results we had.

So a little outline. Of course the introduction. A little comparison between the hardware probes from Atlas and our software probes. Also the coverage of both systems. I'll show you a selection of measurements we did. Some conclusions, and there is also a study with more measurements we put online this afternoon.

Okay, comparing Atlas and ProbeAPI. I have five interesting points I would like to show you. First, Atlas has a homogeneous hardware design for these measurements; you took the time to build these little machines and distribute them, but ProbeAPI is software, distributed and installed on Windows computers, and therefore it's heterogeneous and a bit more unpredictable in that sense ‑‑ not only because of the software, but also because of where it is installed. Of course the Atlas connections are more stable because they're independent from the user's hardware, but in the case of ProbeAPI we are running within the user's Windows installation, so the user will close the laptop, go somewhere else, open it and use the computer ‑‑ the probes are very volatile, they appear and disappear a lot. In that sense Atlas is not bound to a host OS, but at the same time, ProbeAPI being in Windows also offers an interesting vantage point, which we can use for gathering interesting information.

On the other side, distributing physical probes is costly and slower, and of course there are interesting regions which are still hard to cover, like Latin America, and we have managed to cover that region via software with ProbeAPI. Not optimally in either case, because they are very large territories with a lot of population, but it helps a lot.

HTTP measurements are available in Atlas, but in a very limited way: you can only target anchors. We can target open links, and any problems can be discussed directly with us. We can do DNS and HTTP GET; for the page load we use Chrome libraries, and we can also download files and measure their speed and latency over TCP, not only with ping.

So, everybody knows this map here. I wanted to compare the coverage of both platforms, which was a bit hard to do visually, but this map shows more or less the extent of Atlas, and the density of those probes is in this other map, which we can also get from their home page. I highlighted the United States, which, together with Germany, are the two countries with the highest probe density. Atlas has around 10,000 active probes at the moment, but there are still many regions which need either better coverage or at least some. So, how does it look with ProbeAPI? Well, the map looks different, and I need to explain a little bit here. I had to zoom in a little to make the numbers easier to read. Those little boxes indicate the number of active software probes we have in each country, and you can see that we're talking very different numbers. There may be some disadvantages in probe stability and the hardware, but at the same time the numbers are very big because of software distribution. And having diverse probes is also a very interesting thing to analyse, because we can see who has better or worse connectivity, and you can actually map the populations with the software probes from real users.

This is a list of the top 20 biggest ASNs. We did a study last year with LACNIC, and they provided us the number of eyeballs per ASN, and we correlated that with the number of probes of each of those in Atlas and ProbeAPI. As you can see, the biggest ASN is in China, the top one; we don't have many probes there. Since last year, 2016, we have improved a great deal with the software probes in China, but to be fair in the comparison we had to use this data from last year, and you can see that there is a diversity of coverage. The blue line corresponds to ProbeAPI and the purple one to Atlas. Atlas has 326 probes in the second AS, from the US, but at the same time, with the software probes, we have four times that amount. So, how does all this diversity ‑‑ more probes here, fewer probes there ‑‑ translate into measurements?

So I wanted to show you a couple of measurements with we did.

We did ICMP measurements, pinging 60 times during one hour, one time per minute, one country at a time, with a selection of nine countries. I was requesting 15 probes per measurement from Atlas and 25 from ProbeAPI. The difference is because of the probe volatility in ProbeAPI: we need to ask for more to get an equivalent number of results from both sides. And the slowest 10% of the results were discarded for both platforms.
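The trimming step described here can be sketched as follows; the RTT values are invented sample data:

```python
# Sketch: drop the slowest 10% of RTTs before comparing platforms,
# as described in the methodology (sample values only).
def trim_slowest(rtts, fraction=0.10):
    """Return the RTTs with the slowest `fraction` removed."""
    keep = len(rtts) - int(len(rtts) * fraction)
    return sorted(rtts)[:keep]

rtts = [12.1, 11.8, 13.0, 12.4, 250.0, 12.9, 11.5, 12.2, 12.6, 12.0]
trimmed = trim_slowest(rtts)
print(len(trimmed), max(trimmed))  # 9 13.0
```

One outlier of 250 ms is discarded, so the comparison is not dominated by a single slow probe.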

So this is the first result I'd like to show you: the difference in measurement. They measure very differently; you can see Japan stands out because of the very big difference in measurements. A higher bar means that Atlas measured slower ping times, and when the bar goes negative, it means that ProbeAPI measured slower times. You can see in countries like Argentina, Mexico and India there is a big difference, just because of lack of coverage on one side and more probes on the other side, from ProbeAPI. But in countries that were better covered ‑‑ the best one we used is Germany, but the USA and UK are also very similar ‑‑ the measurements match. So you can say that when a country is well covered, you really don't need to worry too much about which platform will be more precise.

So, let's move to the measurements. First Argentina and Brazil, which look similar. You can see that over those 60 minutes of measurements, although RIPE Atlas measured slower ‑‑ in general the lack of coverage resulted in slower ICMP times for some reason ‑‑ the measurements are very steady, very precise, and in that sense more sensitive to changes in the network or the destination. At the same time, ProbeAPI has such a big number of probes, and they are so diverse, that each measurement delivers a random selection of probes from a much bigger universe to draw from, so each measurement reflects a different selection of that population. With Atlas, of course, since there are fewer probes, the selection will be more or less the same each time.

Mexico is more or less the same, but it was a bit better for Atlas, in the sense that the results were nearer.

China and India are also very special cases; they are more or less unstable in both cases. In India, ProbeAPI also showed problems because of changes in the behaviour of the network and the usage of the computers ‑‑ more users online, fewer users online, and so on. But Atlas always kept a steadier result line.

And here is the case of Japan. I put this screenshot because this is the tool we built for comparing Atlas and ProbeAPI. You can see on the right side there is a table showing each ping. I set up a ping test to and started pinging, and we know that is behind an Akamai end point. In this case, ProbeAPI measured very low, something you might expect when pinging Akamai from Japan: like 13 to 17 milliseconds on average. But Atlas ‑‑ which kept measuring with the same probes all the time, that's another difference ‑‑ measured very high, and we didn't know why; it was really weird to see pings to Akamai so high. So we thought: is there something with Atlas in Japan? Why is this different? Why is it not like Germany or the UK or the USA, which look a lot more similar?

But we measured again with something that's not behind Akamai ‑‑ we measured many places, and it seems like Atlas and Akamai have some problems. We measured our own home page and the results now match more or less. We kept measuring Akamai end points with Atlas and we kept getting those high ICMP values, while with other things ‑‑ for example, our web page Speedchecker is behind CloudFlare ‑‑ this didn't happen. We tried many, many times on different days, in different situations, and the results kept being more or less the same.

So, that's an interesting result. Why is Akamai not pinging well from Japan? I don't know. Maybe it's not only Japan and not only Akamai, but this is one result we found. You want to ask a question now, yes?

DANIEL KARRENBERG: Yes, my question is, go back to the Japan one, back to the Microsoft one, please. Did you do DNS resolution on the probe for these?

CHRISTIAN VARAS: Yes, on the probe itself. But I also tested the Google DNS points, both directly with the address and resolving the address, and there was no difference in the results ‑‑ the Google graph looked more or less like this one. But whenever we tested Akamai end points, it came out like this. So, it's something I wanted to present to you, and it's a mystery for us.

DANIEL KARRENBERG: It must be something DNS‑wise, because Akamai's secret sauce is in the DNS. So to me it's quite clear ‑‑ I'm very suspicious that you actually didn't get the same IP addresses when you queried for

CHRISTIAN VARAS: Yes ‑‑ we can look at that afterwards in more detail, I'm very happy to do that.

So, finally, we have the UK and the USA, which also look very similar. But of course with ProbeAPI, since the probes are more diverse, the results for a whole country ‑‑ big countries, very diverse connections ‑‑ will be less steady than Atlas. If you take the software probes and measure more specifically, since the measurement group is smaller, you will get more stable results; but measuring the whole country, you will select 25 probes each time from the whole USA, from the thousands of probes in the USA, and that gives this very jumpy curve.
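The probe‑selection effect described here ‑‑ picking 25 random probes per run from a diverse pool versus pinging the same fixed set every time ‑‑ can be illustrated with a small simulation. The latency numbers below are made up; only the sampling behaviour matters:

```python
import random
import statistics

random.seed(42)

# Hypothetical per-probe base latencies (ms) for a diverse country-wide pool.
pool = [random.uniform(5, 80) for _ in range(2000)]

def round_mean(probes):
    # Each round, every probe's ping jitters slightly around its base latency.
    return statistics.mean(p + random.gauss(0, 2) for p in probes)

fixed = random.sample(pool, 25)  # Atlas-style: the same probes every round
fixed_means = [round_mean(fixed) for _ in range(50)]
# ProbeAPI-style: a fresh random 25-probe sample for every round
random_means = [round_mean(random.sample(pool, 25)) for _ in range(50)]

print("fixed set stdev: ", statistics.stdev(fixed_means))   # only ping jitter
print("random set stdev:", statistics.stdev(random_means))  # jitter + selection noise
```

The random‑sample curve is much jumpier because probe‑selection noise is added on top of the ordinary ping jitter, which matches the behaviour described for country‑wide ProbeAPI measurements.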

That's more or less the results I wanted to show you: a little comparison of how they behave. Germany, of course, is the country best covered by both platforms, and we can see the results there ‑‑ in the practical sense, almost indistinguishable.

So, a few comments about this. Both platforms performed well in all areas ‑‑ in Germany, the USA and the UK, and, properly configured, in Japan as well. The results from software probes are a bit more unstable, because of the explanation I gave you: the diversity of the probes and the connections, and also the volatility of the probes, make the results a bit more variable with the software probes. Of course, a lack of coverage affects measurements: in the case of ProbeAPI, the curve will get less steady, and in the case of Atlas, the ICMP values get, I think, just less precise, because we are measuring with only those probes available in one country, not with more.

So, the next point: hardware probes are more adequate for baseline measurements because they are more sensitive, so any change in the network or the server, anything affected on a minor scale, will be more visible with Atlas than if we were measuring at a large scale like I showed you; but at a smaller scale, or in a well‑covered area, the results should be equivalent. And software probes offer a good opportunity for measuring areas without having too many hardware probes. LACNIC made a study last year ‑‑ I wrote an article on the same study, it's on the RIPE page, you can read it, there are more results about this ‑‑ a LACNIC study about connectivity in the Latin America and Caribbean area, and they did it using ProbeAPI. The idea is that we can study a whole region, and if they trust our measurements, we can also help cover areas that aren't very well covered with hardware probes.

The complete article is on the RIPE Atlas home page. The previous study on coverage that we did with RIPE Atlas and ProbeAPI, we did last year; that's the link as well, you can download them. And the LACNIC study on the Latin America and Caribbean region is in the last link, so you can have a look at it too. In the complete article I also show HTTP GET measurements and a bit of a survey of Germany and Brazil done with the software probes, and you can also have a look at a whole histogram of at least 1,000 probes surveyed in those countries. Yeah, it's very interesting to read.

And thank you. That's our contribution to this talk. Thank you very much.


AUDIENCE SPEAKER: Peter Hessler from Hostserver. For the probes that you received both from Atlas and from ProbeAPI, did you compare what their so called home network type would be? Is it a home user on a traditional eyeball network or is it at a data centre or elsewhere, etc.?

CHRISTIAN VARAS: No, we weren't testing that, we were just comparing results directly.

AUDIENCE SPEAKER: Massimo, RIPE NCC. So, related to the Akamai test: you didn't use resolving probes. So essentially I just checked ‑‑

CHRISTIAN VARAS: I don't see you and I can't hear you very well. So...

AUDIENCE SPEAKER: Hello, I'm one of the software engineers of the RIPE Atlas project. I just checked in the database, and the test with Akamai doesn't use resolving probes, so that can be the reason why you got this difference in the analysis. Just to say. And maybe we can meet together later and ‑‑

CHRISTIAN VARAS: We can have a look, yes, thank you.

AUDIENCE SPEAKER: Daniel Karrenberg, measurement guy. Just to explain to the audience what happened there: the DNS resolution for the measurements actually ran in Amsterdam, so what you got is the optimal Akamai node for Amsterdam, and if you go to the optimal node for Amsterdam, it's going to take longer than the optimal node for Japan.
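The behaviour Daniel describes can be avoided in RIPE Atlas by asking each probe to resolve the target name itself, via the `resolve_on_probe` option in the measurement definition. As a rough sketch (not the measurement actually run in the talk; the probe count and country are illustrative), the API payload would look like this:

```python
import json

# Sketch of a RIPE Atlas ping measurement where each probe resolves the
# CDN-hosted target name itself (resolve_on_probe), so Japanese probes get
# the Akamai node mapped for Japan, not the one mapped for Amsterdam.
payload = {
    "definitions": [{
        "type": "ping",
        "af": 4,
        "target": "www.microsoft.com",   # CDN-hosted name from the talk
        "description": "ping with per-probe DNS resolution",
        "resolve_on_probe": True,        # the key option discussed here
    }],
    "probes": [{
        "type": "country",
        "value": "JP",
        "requested": 25,                 # sample size used in the comparison
    }],
}

print(json.dumps(payload, indent=2))
# POST this to https://atlas.ripe.net/api/v2/measurements/ with an API key,
# e.g. using requests.post(url, json=payload).
```

With `resolve_on_probe` left false (the default), one centrally resolved IP address is handed to all probes, which reproduces the Japan‑to‑Amsterdam effect seen in the graphs.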

But my question is: you said that the bigger volatility you see with the software probes is due to a number of factors, like more diverse locations ‑‑ different locations make a bigger distribution. I am interested in the part that has to do with other things running on the machine you are using, which we don't have with our probes. What's your ‑‑ do you have any intuition on how big that interference is? Is it half? Is it 10%? Is it 90%? That's question one.

And question two is have you thought about reducing that by basically monitoring the system and say hey, if it's too busy, we won't do measurements on this probe?

CHRISTIAN VARAS: Well, no. I think until now the only thing that's taken into account is the Wi‑Fi connection ‑‑ sometimes there is some logic integrated for filtering out Wi‑Fi connections ‑‑ but it would be very interesting to include system variables to correct measurements, yes. What was the other question?

DANIEL KARRENBERG: What's your intuition, what's your feeling, on how much of the variance that you see is explained by the load on the machine that does the measurements? Do you have any intuition or data?

CHRISTIAN VARAS: No data on that ‑‑ Janos can answer that.

AUDIENCE SPEAKER: This is Janos, I work with Christian. So, you know, by the definition of software probes, you cannot monitor cross traffic; of course there are a lot of limitations in the software, so that's why we use more probes for the measurements. The only thing we can actually measure is the machine itself, so if there is already some user activity, we don't do any measurements; but cross traffic on the home network, or any other network, you cannot find out from the software probe itself. So those are of course the limitations, and you really need to use a lot more probes to get a comparable level of data quality and filter out the noise.
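Daniel's second question ‑‑ monitoring the system and skipping measurements when the machine is too busy ‑‑ could be sketched like this. The threshold and function names are illustrative assumptions, not ProbeAPI's actual logic, and the load‑average call is Unix‑only:

```python
import os

# Assumed threshold: skip measuring if the 1-minute load average per CPU
# core exceeds this value (a made-up number for illustration).
LOAD_THRESHOLD = 0.7

def machine_is_quiet(threshold=LOAD_THRESHOLD):
    # os.getloadavg() is Unix-only; returns 1-, 5- and 15-minute averages.
    load1, _, _ = os.getloadavg()
    return load1 / (os.cpu_count() or 1) < threshold

def maybe_measure(run_ping):
    # Run the measurement only on a quiet machine; otherwise defer it,
    # since local load would skew the latency result.
    if machine_is_quiet():
        return run_ping()
    return None
```

This only addresses load on the probe machine itself; as Janos notes, cross traffic elsewhere on the network is invisible to a software probe and can only be averaged out with more probes.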

ROBERT KISTELEKI: How do you know where your agents are? You said this many in this country, that many in that country, how do you know? What is the methodology to figure that out?

JANOS: I will answer your question. So, we use two geolocation methods: a Wi‑Fi‑based geolocation API, and also IP geolocation from commercial vendors. It's a combination, and in some cases of course GeoIP is less accurate. I would be guessing, but I would say the Wi‑Fi method is locating 50% of the probes within Wi‑Fi range; of course, for privacy reasons we scramble it around, but it's quite accurate.

CHAIR: Okay? All done.

Thank you very much. This was interesting.


And finally, we have a few announcements. Everybody, remember to vote for your PC members, and in any other votes or elections taking place over the week ‑‑ but in particular for your next PC. It's important that we get some cool people who will put together the Plenary programme.

Also don't forget to rate the talks that you have seen today.

Don't forget to use the mailing list for any discussions or interesting observations that have come up over the course of today. And finally, don't forget to write your lightning talk and submit it, so you get to tell funny, good and interesting stories in the next lightning talk slots.

And anybody have any final remarks for today?

End of Working Group session.

Sorry, going over time, enjoy your coffee. Thank you.