Archives

Plenary

Monday 24th October, 2016

At 5 p.m.:

LESLIE CARR: All right, everyone. Come in, sit down, make sure you go to the middle so everyone can get a seat. We are about to start.

BRIAN NISBET: Right. Hello, ladies and gentlemen. If I could ask you to take your seats. So, welcome to ‑‑ we are already into the second plenary session of RIPE 73. My name is Brian Nisbet and I will be co‑chairing this plenary session with Leslie Carr, and we have an action‑packed, exciting, etc., etc. session which will see you through to the evening events. A few reminders before we begin: Thank you all for your wonderful cooperation on microphone etiquette in the Opening Plenary, and again, please, for those of you who weren't here for that, just a quick reminder: when you have a question, please raise your hand and do not press the microphone button until you are asked to do so by the Chair.

You will knock somebody else off and then it will just get complicated and there will be tears and crying and hot milk and all sorts of things.

SPEAKER: And apologising.

BRIAN NISBET: And we will have to apologise to you for some reason. So anyway, other things: please remember we are looking for nominees for the PC; the deadline for this is 16:30 tomorrow, with the exception that the nominees will stand up on stage and introduce themselves at the beginning of the 17:00 session, so in roughly 24 hours' time. So please send your nominations to PC [at] ripe [dot] net. Also, as you will hopefully become tired of hearing over the next few days, please rate the talks. There are prizes to be won, from a network that I am reasonably sure has quite a bit of IPv6, although I do not know how many CGNs they have. So, I don't think there is anything else ‑‑ nothing else I have forgotten there? I think we are good. So, we shall proceed on with the first talk, which is from Athina Fragkouli from the RIPE NCC, on accountability and other such diverse matters.
(Applause)

ATHINA FRAGKOULI: Hello, I am head of legal at the RIPE NCC, and it's not the first time that I am going to talk about accountability at a RIPE meeting. As a matter of fact, the first time I talked about accountability was two years ago in London, at RIPE 69. And back then, I was trying to describe a bit what accountability is, because it was like a very fancy new term that was broadly used, so I was explaining what accountability is for the RIPE NCC. That is a slide from my slides back then, and I was saying that accountability for the RIPE NCC is actually compliance with what the community wants, and it's also operational accountability: the RIPE NCC is a legal entity, has a legal structure, has legal mechanisms, and has documentation and procedures in place. But why was I talking about it back then?

It was the beginning of the IANA transition discussions, and there was a lot of focus on ICANN; actually, ICANN was under the microscope, ICANN's accountability was under the microscope, and there was this Cross Community Working Group on ICANN accountability that was created, and everyone was talking about it and about ICANN, and we could feel, we could sense, that ICANN was just the beginning and other Internet organisations would be next, such as the RIPE NCC and the other RIRs. And about the RIPE NCC's accountability, we were very confident. We have a solid framework, we have been working on it for years, we have our procedures documented and explained, we are reporting to the community and to the members, and we are responsive to any emerging issues, and that is the same for all RIRs. However, we realised that this should be explained to everyone else who does not belong to the numbers community, to the RIR community. Therefore, we created the RIR accountability matrix, which is still on the NRO website. We created that two years ago; it's a matrix that collects all documents and all procedures of all five RIRs in one place, and we also created some FAQs. The questions had not been asked, but we were one step ahead, because we wanted to own the discussion, we wanted to be ready, to have all the answers before any questions were asked. And that was a success. That worked. Governments and law enforcement authorities were all referring to our documentation during the ICANN talks, the RIRs were presented as an example of accountability, and even the NTIA, the US Government, in their reports, in their evaluation of the IANA transition proposal, referred to this matrix to prove the accountability of this mechanism, of the system.

So now what? Now the IANA transition is done, it's over, but the discussions on accountability are still ongoing. The CCWG is still running, and we now see they are focusing on the accountability of the SOs and ACs, the committees within the ICANN structure that represent the various communities ‑‑ ccTLDs, gTLDs, governments, civil society ‑‑ and also the ASO, which is the numbers community's representative. We will make sure that the work of this group, of the CCWG, is limited to what happens within the ICANN framework and not further than that. This group going beyond that and examining what we are doing, let's say, in the RIPE community is beyond its scope, because there should be respect for the community's self‑governance. Having said that, we do see this trend again; we see that the focus is shifting from scrutinising the legal entities, such as ICANN and the RIPE NCC, to the actual communities, such as the RIPE community. And again, we do feel confident about our accountability as a RIPE community. We are bottom‑up, we are transparent, we are open, we are inclusive, we have our documentation, we feel that we have this covered, but for someone on the outside, this is not that clear. And we do hear some noise. We hear questions like: okay, where is this authority coming from? Who sets the guidelines for these discussions? What is the scope? Who is the RIPE community? Who are these participants? And whom do they represent? What is this decision‑making mechanism? How is it implemented and enforced? So all these questions we hear, and we do have answers for them. Only I am not sure we speak the same language. The answers we have are not that well understood. We don't have answers for them in a comprehensive manner. And that is a risk, because if they don't understand our answer, they might think we have no answers, and they might think that we are not accountable, and we cannot allow this to happen.

So, as I said, it is a matter of translation. We need to explain our procedures, we need to explain what we are doing in their language, we need to respond to their questions in a way that they will understand, and we also need to listen to them and take their concerns into consideration. For a community such as the RIPE community this is a very healthy process, to self‑reflect a little bit, to see what we are doing, to re‑evaluate whether the procedures we have correspond to the principles we serve, and maybe improve our documentation if necessary. All in all, we need to defend our community, and again, we need to own this discussion as we did with the RIR matrix, before there is a massive amount of questions towards us; we need to be ready and be a step ahead. Because, don't get me wrong, we have a very good reputation right now ‑‑ the RIPE community is again presented as a very accountable community ‑‑ and we should make sure that this is maintained and that we build on that.

So, as a way forward, this is a job for the community. The RIPE NCC of course can facilitate it, can support it, can help in any way that the community wants us to help, but this is a job for the community. And the question for the community is whether they want ‑‑ whether you want to address this issue. And I think this is the question that I want to ask you here, and I want to open the discussion with that. Thank you.

(Applause)

LESLIE CARR: Thank you. Thank you, Athina. Now we'd like to open up the floor for questions. Hold up your hand and please keep your questions as brief as possible.

SPEAKER: Alexander. The problem I see in your report is that RIPE does not exist in the terms and meanings of governments, law enforcement and so on. We have a good reputation, but for law enforcement we are not ‑‑ we are just ‑‑ we are something which does not exist in law and definition, and I think, well, I think we should work on this, but working under their rules, trying to fit their borders, their limits, well, is a bad idea. Let them follow our ideas and come to our community, join us; that is the only way governments and law enforcers could interact with us, not differently. Thank you.

ATHINA FRAGKOULI: Thank you. So, I don't think they see us as a gang, I think they are taking us very seriously and they are ready to join us ‑‑ we have seen them in RIPE meetings, for example ‑‑ and it's good that we see this participation from their side and this collaboration. At the same time, I think it's important that the collaboration goes both ways, so if someone, given their background, has a question and is trying to understand where we came from and what exactly we are doing, I think it's only fair that we sit down and explain to them what we are doing. And since we see the same questions again and again, maybe we can prepare such an explanation and document it and have it as a point of reference for others who will come tomorrow with the same question. I think that is the idea.

SPEAKER: Thank you, Malcolm Hutty, London Internet Exchange ‑‑ not just off the street for this topic. Thank you, Athina. Firstly, I'd like to say, on the narrow point that you raised about the discussions with ICANN: I would thoroughly support the position that the ASO needs to be accountable to the RIRs, but the question of the RIRs' accountability to their own communities is a matter for the RIRs to organise, and it's right that we should do that here and that ICANN should absolutely not be seeking to interfere with nor supplant that, so I thoroughly support that. On the broader question of how we both define these things and how we communicate them: I think it's not just how we communicate them outside our community, but it's also worth mentioning how we communicate them within the community, in helping to renew our community's understanding of these matters. I would like to remind people of RIPE 464, the report of the Enhanced Cooperation Task Force, which did look at this question in the context of the Whois discussions that were very much asking the question you alluded to earlier, or that you mentioned earlier, which is: where does the authority of this community come from? It was a clear position statement on behalf of the RIPE community that the functions of the RIRs arise because our community has technical needs: we have technical needs for coordination amongst ourselves as operators, to ensure that we can fulfil our separate and individual responsibilities for making operations work, and that requires some collective action, and that gives rise to the need for addressing policy and for database policy and for a secretariat to support that and operate that and to enforce it. That is all clearly set out in 464. There could certainly be further communication and elaboration and simplification of that communications message, which I would very much welcome and be happy to participate in. But this is not entirely new to this community; it is important that we should continue to understand and to renew our own understanding of that basis so that we can communicate it more effectively to those who may ask. Thank you.

ATHINA FRAGKOULI: Thank you.

FILIZ YILMAZ: Thank you. I will just answer your question first and then go into more detail about why. Yes, accountability is something we have to be thinking about as a community, I believe. And that shouldn't be only because there is external focus on us now. I agree that the recent external focus, coming from ICANN, coming from the bigger Internet ecosystem, the ITU, the Whois community, etc., has been ‑‑ so that is good, it's a positive outcome. I think that the attention they have been paying to us can push us to think more about this, because we have been, let's say, engineering this bottom‑up process and engagement for the last, more than 20 years, and maybe we are getting a little too comfortable with our own ways, and there is a whole big world out there, our industry is changing, there are more and more new players who will need to engage with us, and they may be outsiders today, but tomorrow they might be our own community participants too. So, are we good enough in explaining ‑‑ in understanding our own processes, to educate these newcomers? We keep seeing the RIPE community growing bigger and bigger in numbers, right? So, there is an internal need, I think, for us, for our own sake as well, to look at our accountability.

I am not saying we should be a very procedural community with bylaws, engaging in very formal or legal lingo, but I think we can agree on some principles and document them in a way that somebody who comes and enters this community doesn't need to go to ten RIPE meetings to understand who is doing what and why.

ATHINA FRAGKOULI: Very good point. Thank you.

NURANI NIMPUNO: Thank you. The way I see it, to try to answer your question, you have to divide it into three different buckets. One is the accountability work in ICANN: being on the ASO, we know that we will be affected, because if you look now, the accountability work in the CCWG looks at enhanced community powers, right, and the ASO is the representative body in ICANN, and there are quite a lot of grey areas there, and that is something we have discussed in the ASO: what is the scope of the work of the ASO, and how do we relate to the NRO and to the NRO EC, for example, and to our communities. That is one thing. Then there are two other parts I see; one is actually documentation, and getting a little bit better at explaining how this community works. And I mean, this is ongoing work, right; I mean, when I got involved 15‑plus years ago we didn't even have a PDP, we didn't even have processes for how to elect Chairs, etc., and we have done that work, so let's document that work and let's also make it a bit more accessible. I do think, you know, for those who have been in the community for a long time, we think it's very clear, but for newcomers it's very hard to get their head around this community. Like any community. And the third part is, I also think we should not be afraid of looking at what we can improve ‑‑ not just document what we do, but ask: are there things we can improve to make this community more accountable? And we shouldn't be afraid of reviewing that ourselves or getting someone else to help us do that. I mean, it needs to be community work, but I think this community is strong enough to stand up to such a test, and we should look at how we can make it even more accountable. And then, just finally, I want to say I think we should also be a little bit careful with adopting some of this language that is sort of imposed on us from outside ‑‑ multi‑stakeholder accountability. I am all for accountability, but we also have our own words for it, right? Bottom‑up, transparent, inclusive. So let's use those. Thanks.

HANS PETTER HOLEN: Thank you. I think this is an excellent opportunity to propose that ‑‑

BRIAN NISBET: Who are you and where are you from?

HANS PETTER HOLEN: You know, since I didn't remember to say in my opening speech who I am, how should I remember during the day? RIPE Chair. I think this is an excellent opportunity to propose forming a task force to work on documenting this framework. I am not proposing to change anything, but to work on documenting how we are doing things, and then, as a side effect of that, we may want to improve on something. If anybody is interested in participating in such a task force, contact me or Athina and we will put together a mailing list and maybe meet this week to start to discuss how to approach this. Does that sound good to you? The room is silent.
(Applause)

ATHINA FRAGKOULI: Thank you. If you want to participate in such an exercise, then please send an e‑mail to Hans Petter or me and we can start working from there.

LESLIE CARR: Anyone in the back? It's a little hard to see you back there. Well, thank you very much, Athina, and everyone, for having such a great discussion.

ATHINA FRAGKOULI: Thank you.
(Applause)

LESLIE CARR: All right. And next I would like to invite to the stage Ioana Livadariu, talking about IPv4 transfers.

IOANA LIVADARIU: So, hello, and today I am going to talk about the IPv4 transfer market. This project was done in collaboration with Simula Research Laboratory in Norway, and it's part of my PhD. So, we are interested in studying the market because the market now represents a viable solution for obtaining IPv4 addresses. And organisations turn to the market mainly because of the scarcity that the IPv4 space suffers from. And to put this into numbers, I chose to show a report from June this year, published by the NRO: what we see is that we have four RIRs that have already started allocating from their last /8, and the last RIR is not far from the same situation. And how did we get here? Well, let's look at the history of the allocation. So, I drew a time‑line from the starting point of the standardisation of IPv4 until today, and I divided this time‑line into three phases. We have the pre‑RIR phase, in which allocations were done in a classful manner and IANA was allocating this space directly to organisations; the space that was given out back then we now call legacy space. Next we have the establishment of the RIRs and the change in the allocation: RIRs were getting space from IANA and in turn they were giving this space out within their own regions. And of course, with the establishment of the RIRs we also have RIR policies. And the last phase is the exhaustion phase, which we are in today, and it is considered to have started in 2011. I have highlighted the four months when the RIRs that were on the previous slide hit their last /8. So, we have APNIC in 2011 ‑‑ you can read all this ‑‑ and the last one was ARIN in 2015.

So, the IPv4 market is a solution to this scarcity, and I have highlighted in red the first intra‑RIR transfer, which occurred within the ARIN region, and the first Inter‑RIR transfer, which occurred between organisations registered in ARIN and APNIC in 2012. And to give a definition: an IPv4 transfer is a transaction that occurs between two organisations; it can involve third parties, which are brokers, but that is not mandatory. What is mandatory is that they are regulated by the RIR policies, and as of today we have three RIRs that have this market ‑‑ RIPE, ARIN and APNIC ‑‑ and they have in place both Inter‑RIR and intra‑RIR policies. The first published intra‑RIR transfer, between organisations within the same region, occurred in 2009, and the first published Inter‑RIR transfer occurred in 2012, right?

Now, these RIRs publish and maintain lists of blocks transferred within their regions, and what we are going to do in this talk is take these lists and look at how the published transfers are evolving over time, what is being transferred, how the buyers are acquiring the space, whether there is a correlation between the market and v6 adoption, and the last part is going to be about market value. And we are also going to see whether we can manage to detect these transfers using publicly available data. So let's start.

So, what we do first is very simple: we take these lists of transfers and we just count them. And what I am showing there is this number over time, and each region is represented with a different colour, so ARIN is red ‑‑ ARIN is green, APNIC red, RIPE blue and the Inter‑RIR transfers brown. So, the first observation is that we have an increasing number of transfers over time, right? And to see it better, we can zoom in on the first part. If we look at RIPE only, we see that we have 70 transfers in the last part of 2013, and in the first part of 2015 we have almost 900. So, this brings me to the second observation: the fact that we have a very highly active market within the RIPE region. Now, these transfers represent transactions of address blocks, which means that organisations can exchange a /16, a /8, a /13 and so on, so to see how much space is exchanged we count the number of /24 blocks, and this is what we get. The colours remain the same, and our analysis shows that the transferred blocks account for 2.67% of the IPv4 space, and half of the space comes from ARIN. We continue looking at the space, and we ask what type of space is transferred, in terms of legacy and size of blocks. Our analysis shows that a high percentage of the transferred space is legacy, and also that a very high percentage of the transferred blocks ‑‑ 80% in RIPE and APNIC ‑‑ are very small, while 37% of the blocks in ARIN are large blocks. Now, what we have done so far is look at a general overview of the transfers, right? Next we want to see whether the space is used by the buyers or not, and we consider the space to be used if we see it in the routing table, so we basically look at routing table dumps and search for the transferred address blocks. And to help our analysis, we divide the blocks into four classes, according to whether the space was routed before and/or after the transfer, and we basically take the transferred blocks and try to fit them into these classes. And what does our analysis tell us? It tells us that most of the space is routed after the transfers, and moreover, most of the space comes from class C, which was routed both before and after, right? And the next thing that we look at here is how much time it takes for the space to be reannounced. So we looked at the average number of months that it takes to appear in the routing tables, and we see that it is relatively fast, and our conclusion is that buyers acquire these addresses to meet their immediate needs. We continue analysing the utilisation of the address space by adding another source of data, the census data provided by ISI; basically, this data gives us results from probing the IPv4 address space, and we use this data to devise a metric which we call the utilisation fraction, defined as the fraction of IP addresses that respond to ICMP probes, and we use this to come up with an aggregated view of the transferred space. So what we do is compute the median value of this utilisation fraction within six months of the transfer date, both before and after. So this is what we show: T represents the transfer date; what is to the left is what happened after the transfer date, what is to the right is what happened before.
So, what we see is an increase in this value, in the utilisation fraction, and if we look at APNIC, which shows the most obvious increase, we see that it actually jumps after one year. So we conclude that the space is utilised more after the transfer, right? And what we do further is look at the sellers and the buyers and try to identify the top participants in the market ‑‑ by top participants I mean top organisations in ARIN and top countries in ARIN and RIPE ‑‑ and our analysis shows that the top 10% of participants dominate the market: they account for approximately 80% of the exchanged space.
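
To make the metric concrete: a minimal sketch of such a utilisation fraction, assuming we already have the set of ICMP‑responsive addresses from the census data (the names and numbers here are made up for illustration):

```python
# Illustrative sketch of the utilisation fraction (made-up data): the
# fraction of a block's addresses seen responding to ICMP probes.
import ipaddress

def utilisation_fraction(block, responsive):
    """`responsive` is an assumed set of probe-responsive IP strings."""
    net = ipaddress.ip_network(block)
    hits = sum(1 for addr in net if str(addr) in responsive)
    return hits / net.num_addresses   # fine for small blocks; /8s are slow

# Example: a /24 in which 64 addresses answered the census probes.
probed = {f"192.0.2.{i}" for i in range(64)}   # hypothetical census result
print(utilisation_fraction("192.0.2.0/24", probed))   # -> 0.25
```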

We focused on RIPE, and what we found is that, out of 64 countries, six countries exchange approximately 78% of the space, the top selling country generally being Romania. The map there is taken from the Google IPv6 statistics, and the darker the colour, the higher the adoption rate. The numbers show the ratio between the space that is bought and sold, and the idea is to try to correlate the adoption with the market. For example, if you take Germany, we have more IPv4 space coming out of Germany and, at the same time, a high adoption of v6, so we could say that there is a correlation there. But if we move to Ukraine, we see the same ratio but the adoption is probably close to zero there. So the idea is that we can't conclude anything from just looking at this map. So what we do is devise a metric, the fraction of IPv6 adopters, which is the fraction of buyers that originate v6 prefixes after they acquire v4 addresses on the transfer market. This is what we show: the evolution of this fraction across time. We see that this fraction increases, and we conclude that the market doesn't have a negative impact on v6 adoption.

The last thing that we look at is the monetary value, and to do that we need prices. What information do we have about prices? We have prices that are publicly available from IP transactions and we have prices published by IPv4 brokers, but the bottom line is that the monetary aspects of the transactions are confidential, meaning that the RIRs publish the transfers ‑‑ they publish the blocks ‑‑ but in the listings they don't map each transfer to a specific price; and also, brokers are not involved in all the transactions in the market, as it is not mandatory for them to be there. What we do is offer a way of estimating the market value. Our approach is to build a model based on the hedonic pricing method, usually used in the real estate market, which values an asset by taking into account both its internal and external characteristics. We basically come up with block characteristics and external factors, we propose a model, we fit that model with the prices reported by an IPv4 broker, and this is what we get: the estimated IPv4 address price for each block size, and the estimated value of the market, which is 386 million dollars.
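
As a rough illustration of the hedonic idea ‑‑ regressing price on block characteristics ‑‑ here is a minimal sketch with invented broker data; the features and numbers are placeholders, not the study's actual model:

```python
# Illustrative hedonic-pricing sketch (invented data, not the study's
# model): regress log(price per address) on block characteristics, then
# predict prices for unseen blocks.
import numpy as np

# Hypothetical broker-reported deals; columns are
# [log2(size in /24s), is_legacy, intercept].
X = np.array([[0, 1, 1],
              [2, 0, 1],
              [4, 0, 1],
              [6, 1, 1]], dtype=float)
usd_per_address = np.array([12.0, 10.5, 9.0, 8.0])

coef, *_ = np.linalg.lstsq(X, np.log(usd_per_address), rcond=None)

def estimate_price(log2_size, is_legacy):
    """Predicted USD per address for a block with these characteristics."""
    return float(np.exp(coef @ np.array([log2_size, is_legacy, 1.0])))

print(estimate_price(3, 1))   # e.g. an estimate for a legacy /21
```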

Now, so far we have looked at different characteristics of the published transfers, and the next question is whether we can detect these transfers using publicly available data. The rationale here is that transfers need to be approved by the RIRs, but there is no mechanism to enforce this upon an organisation. Our methodology is very simple: we use BGP routing table dumps collected from two sources over a period of eleven years, we construct prefix‑to‑AS mappings, and we look at whether these routed prefixes change origin over time; once we identify such a prefix, we label it as a candidate transfer. Of course, not all prefixes that change origin in the routing table do so because the space was transferred, so what we do is look for other reasons for this to happen: we basically design four filters that help us identify false positives in the initial list of candidate transfers, and these filters target changes within the same organisation, short‑lived advertised space, space advertised by the RIRs and peer space. And this is our result. The red line shows the initial number of candidate transfers, the blue line shows what we get after the filtering, so we get 65% less in the blue line. We also validate our methodology by comparing what we get in the final list of candidate transfers with the ground‑truth data, and our analysis shows that we find more than 90% of the detectable published transfers in our final list of candidate transfers, which is good. But we still have a very high number of false positives ‑‑ we have a lot still there on the X axis. And to understand them, we look at some of them, and give examples.
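
The detection step itself is simple enough to sketch. This is an illustrative reconstruction, not the authors' pipeline; the `same_org` helper is assumed, and the short‑lived‑advertisement filter is omitted for brevity:

```python
# Illustrative reconstruction of the detection step: flag a prefix as a
# candidate transfer when its origin AS changes between successive
# routing-table snapshots, then apply filters for known false positives.
def candidate_transfers(snapshots, same_org, rir_asns):
    """snapshots: time-ordered list of (date, {prefix: origin_asn})."""
    candidates = []
    for (_, old_table), (date, new_table) in zip(snapshots, snapshots[1:]):
        for prefix, old_origin in old_table.items():
            new_origin = new_table.get(prefix)
            if new_origin is None or new_origin == old_origin:
                continue                    # withdrawn or unchanged
            if same_org(old_origin, new_origin):
                continue                    # same organisation: not a sale
            if old_origin in rir_asns or new_origin in rir_asns:
                continue                    # space advertised by an RIR
            candidates.append((prefix, old_origin, new_origin, date))
    return candidates
```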

So we have an example of a false positive: we have a /16 that moves from AS19262 to AS701 in June 2013. When we manually inspect this movement of space, we see that both of these ASes belong to the same organisation; however, our AS mapping doesn't reflect this, so what we basically have is a false positive due to the AS‑to‑organisation mapping. We also identify other causes, like reallocated address blocks and switching providers. What we do next is try to expand our methodology, and the data that we use to expand it is DNS names: the idea is to look at changes in the DNS resource records. The data that is available right now suffers from some limitations, basically in coverage and in frequency. So, what does that mean for our methodology? It means that we are able to analyse just a part of the final list of candidate transfers, and I have shown there what our preliminary analysis looks like. We managed to remove two‑thirds of the analysed transfers, which is good. And this is the last slide, with conclusions:

So, what we have done: we have looked at the transfer market and we concluded that it is increasing in size and that the majority of the blocks are legacy blocks. Markets seem to serve their intended purpose; they don't seem to have a negative effect on v6 adoption; and we see that they have certain characteristics across the different regions. We also tried to detect transfers, and our conclusion is that it's difficult and it probably requires multiple data sources.

And that is it.

(Applause)

BRIAN NISBET: Thank you very much. So, are there any questions?

SPEAKER: Hi. AfriNIC policy development. I just had a comment on one of the slides, where you said utilisation is checked with ICMP. That would mean that in cases where the target is filtering out ICMP completely you don't have visibility, so maybe you could combine that with traceroutes, checking where the final reachable hop is, and then further combine that with your AS and DNS lookups. Just to point that out.

IOANA LIVADARIU: Yes. We acknowledge the limitations of the data that we use for the utilisation, that is a very good comment. We will look into it. Yes. Thank you.

SPEAKER: Hi, Elvis Velea, V4Escrow. I am one of the IPv4 brokers. I have a question about slide nine. Slide nine shows, as you can see, that the time before the announcement is somewhere around two months for the RIPE region and more than six months for ‑‑

IOANA LIVADARIU: Slide nine, sorry.

SPEAKER: Yes. So you are saying that, on average I suppose, the time before the announcement for the RIPE region is just below two months, and in APNIC and ARIN it's more than six. Any idea why this difference ‑‑ first question? And the second question is: have you noticed hijacks of the space, and do you count that as reannouncement, or is that just removed? Because we have noticed with quite a few of the transfers that are being brokered ‑‑ and we actually advise all of our customers accordingly ‑‑ that if the address space is transferred and then it shows up on the public website of the RIRs, it most often happens that if the buyer does not start announcing the address space immediately, the block is hijacked.

IOANA LIVADARIU: Okay. So, first question, why is it shorter, so why do they appear in the routing table earlier? So ‑‑

SPEAKER: Or later.

IOANA LIVADARIU: I mean, of course we can't say precisely why this happens, but my guess is that this is directly correlated with the fact that you see a highly active market in the RIPE region, so buyers there seem to be more interested in getting space from other organisations. And for the second one ‑‑ whether we saw hijacks in the data that we analysed ‑‑ I personally didn't see that, but this study doesn't cover the last months, so I don't know, maybe in the last months these things have happened.

SPEAKER: No, I am talking about over the past few years.

IOANA LIVADARIU: Oh. Okay, no, I didn't see that in the routing table.

SPEAKER: Well, then maybe something is wrong there, because there were even ‑‑ some of these were even made public in the past.

IOANA LIVADARIU: Okay. Maybe we can talk ‑‑ yeah.

SPEAKER: Sander Steffann, Address Policy co‑chair. You studied both the stuff you could see in the routing tables and the things you could see in the RIPE database. I am just curious: from all the transfers you have seen, what would you say the quality is of the documentation in the database? Are all the transfers you have observed in public actually properly documented?

IOANA LIVADARIU: What do you mean by properly documented? Because this is very debatable. For example, RIPE is ‑‑ I wouldn't say the best documented, but they offer a lot of details, whereas other regions don't offer as many details as RIPE. So, yes, there is a problem there, yes.

SANDER STEFFANN: I mean, for example, have you seen many transfers that were not documented at all?

IOANA LIVADARIU: So you mean transfers that ‑‑ this is like the second part, detecting transfers?

SANDER STEFFANN: Yes.

IOANA LIVADARIU: So I don't have anything to back this up, but I mean, I have read ‑‑ I mean, these things can happen, it's not unlikely, so yeah, to answer your question, no.

SANDER STEFFANN: Thank you.

LESLIE CARR: Any others? If you are in the back, make sure you wave really hard.

BRIAN NISBET: No. Okay. Thank you very much.

(Applause)

Now we are going to move on to the lightning talk session of this afternoon, and first off, we have Geoff Huston talking about ECDSA.

GEOFF HUSTON: Good afternoon. I work with APNIC. I have nine minutes and 40‑odd seconds and 40 slides, so this is going to run pretty quickly, and the talk is all about cryptography ‑‑ including this amazing engine; I want one of those, it's so cool. I need one. The basic challenge of cryptography: what you want is asymmetric keys, so that you encode a message with one key and it can only be decoded with the other, and it works for both keys. So if I lock it up with key A only key B can unlock it, and the other thing you need to know is that they are not derivable from each other; even if I know one key, I don't know the other key value. So, how do you do this?

Anyone remember maths? Sorry. If you use exponentiation ‑‑ are you with me ‑‑ within mathematics you get this cute thing that says that when you raise an integer to the power of e and then d, and take the modulus with some very large number n, you can get back the original number. It's true. Now, the issue is, these numbers need to be really, really big, and that number n we are talking about in mod n needs to be the product of two extraordinarily large prime numbers. Now, if you do this, then even if I tell you the values of e and n, it will take you an awfully long time to find the value of d, the other part of the key. Even if you have the best computer in the world, it will take a long, long time. Now of course, tomorrow it might be slightly less, because computers seem to get faster, and what was infeasible yesterday might be possible tomorrow. The problem with this prime number factorisation problem is that it relies on the fact that quantum computers don't work. Because if they did, everything would crumble. But as long as that doesn't work, we are cool, because prime number factorisation is extraordinarily difficult. However, we are getting better, computers are getting faster and we put more of them to work ‑‑ all those machines, when they are not attacking the DNS, can be put to work factorising, and I have been told there are millions out there. The way you protect yourself is to start to make these key values bigger and bigger.
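
For the record, the cute thing being gestured at here is textbook RSA. Written out, with toy numbers far too small to be secure:

```latex
% Textbook RSA (illustrative; real keys use a ~2048-bit n):
% pick primes p, q; let n = pq; choose e and d with ed = 1 mod phi(n).
\[
  n = pq, \qquad ed \equiv 1 \pmod{\varphi(n)}, \qquad (m^e)^d \equiv m \pmod{n}
\]
% Toy example: p = 5, q = 11, so n = 55 and phi(n) = 40.
% Take e = 3 and d = 27, since 3 * 27 = 81 = 2 * 40 + 1. Then for m = 7:
\[
  7^3 = 343 \equiv 13 \pmod{55}, \qquad 13^{27} \equiv 7 \pmod{55}
\]
```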

Now, why am I telling you this? Who uses the Internet for banking? Right. You are relying on this, because if this doesn't work, you are hosed, and so am I. Because this is my bank ‑‑ at least I think it is. How do I know it is? It looks the same as whenever I go there. What it's showing me is a digital certificate and a signature associated with it. And you find, if you click the green icon, because it's safe, that I am using the RSA algorithm ‑‑ which is this prime number factorisation issue ‑‑ and the key is 2048 bits long. Think about how big a number that is. Remember when we said that IPv6 had 120, what, six bits in it ‑‑ 128, sorry ‑‑ and that was so big that if you did grains of sand you would get 300 million planets? Well, if you do 2048 bits, it's a really big number. Bigger than that. Right. So yes, this is a big number and it's really hard to factorise, but there is a whole bunch of trust issues flying around here. Because I am trusting more than the Commonwealth Bank and prime number factorisation; I am trusting someone whom I have never met, a company called Symantec. Anyone here from Symantec? Christ, you are my bank, realistically; you had better be around here somewhere. There is a list of everyone I trust. And the problem is that it's the same as your list: there is a shit load of them. Who else do I trust? Wow! Somewhere there in my machine I find an entry for CNNIC, the Chinese government, and if you look that up on Google's list of naughty people who have issued shit certificates, they are on it, via some rogue Egyptian intermediate. Oops! Below it are our little friends Comodo, who have also been hacked. All of this goes horribly wrong when you are on the front page of the New York Times. This is the system you are relying on for your bank. Feeling better? I know I am. So, what is going wrong? The problem is that this whole handshake that sets up the security doesn't actually tell you which certificate is the right one for the domain name. It doesn't tell you which CA you should be trusting. And your browser takes the easy way out: any trust point that validates that certificate is good enough. Shit, that's bad. That is awesome. Here is a lock. It's a great lock, looks fantastic. Any key in the world will open it. This is your bank. This is my bank. This is the whole underpinning of the security system of the Internet today. And you wonder why we get worried. You know, we are missing something. And what we are missing is the awesome magic of the DNS. Because there is this theory out there, and it's true, that no matter what the problem ‑‑ even the fact you haven't been attacked today ‑‑ it can be solved in the DNS. You just put it in the DNS and it's magic and it just works. Let's just try this. None of this rough consensus and running code; let's put it in the DNS. But seriously, if I'm trying to find the public key of a domain name, and it's in the DNS because it's a domain name, why wouldn't you look in the DNS? Oh, yeah, right. Obvious, really, isn't it? So why don't you query that for who issued this certificate, who is the CA, what is the actual certificate value? Who needs CAs anyway? That is actually a really deep question because, quite frankly, it's a superstructure of lies and rumour that, when it falls, catches us all, and you too can be on the front page of the New York Times.

So there is this cute technology out there called DANE ‑‑ I will get to ECDSA eventually ‑‑ and the way this works is, this is a method for putting that public key into the DNS, right? So now what happens is, I go to the Commonwealth Bank saying, what is your certificate, and they say: look up the DNS to see if this thing I am offering you is genuine. I look up the DNS separately and I go, yeah, absolutely, I trust it implicitly, yes? Now, there are two problems with this. One is, the DNS is full of lies and miscreants, because the DNS is magic. And the other thing is, there is this problem I talked about before with key sizes, because the one thing the DNS doesn't do is move large stuff about. If you ever try fragmenting packets in UDP, it's an adventure; in v6 it's a disaster. So this is crap. So what do you do instead? Different cryptography. Elliptic curves ‑‑ any cryptography that says we don't understand how it works has got to be really, really good, so this one is really, really good. Right? It also does what needs thousands of bits in prime numbers in a few hundred. ECDSA responses are a mere 527 octets. Awesome. Let's use ECDSA, let's secure the DNS with this, right now.

Who uses it? Well, here is a world map. I have been doing a whole bunch of testing and, quite frankly, most of us do use it. So let's look at the opposite question: let's colour this map red if you are cool, not red if you are kind of only using RSA and not this weird wonderful thing called ECDSA. There are not many things where Australians whip the arse of the New Zealanders. This is one. Today we are in Spain; even Greece is ahead. When we look at the level of ECDSA, we are running at around 7% of the country. Here are the top eight ISPs, all of them struggling to get to 10%. Of the top eight ISPs, the only folk who are actually doing any real work in trying to make the security better are using Google. And if it wasn't for Google, in this country, there would be no DNSSEC at all and no ECDSA, and I think that is pretty damn sad. If you want to find out where you are ‑‑ and about the naming and shaming of almost everyone else ‑‑ on ECDSA, there is the URL. Thank you very much.
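
For the curious, the DANE record described above is the TLSA record. A minimal sketch of deriving the association data for a "3 0 1" record (DANE‑EE, full certificate, SHA‑256); the hostname is a placeholder, and this is an illustration, not anyone's production tooling:

```python
# Minimal DANE/TLSA sketch: the association data of a "TLSA 3 0 1" record
# is simply the SHA-256 hash of the server's DER-encoded certificate.
import hashlib
import ssl

host = "www.example.net"                        # illustrative target
pem = ssl.get_server_certificate((host, 443))   # fetch the leaf certificate
der = ssl.PEM_cert_to_DER_cert(pem)
digest = hashlib.sha256(der).hexdigest()

# The record the domain owner would publish, DNSSEC-signed, in their zone:
print(f"_443._tcp.{host}. IN TLSA 3 0 1 {digest}")
```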

(Applause)

BRIAN NISBET: That is pretty perfect timing right there. Questions can be addressed over a pint, I believe.

So, our second lightning talk this afternoon is from Vasileios Giotsas, on Periscope.

VASILEIOS GIOTSAS: That is a tough act to follow, and I have to start with looking glasses now. So, yeah, I will talk about something not so exciting: Periscope, which is a platform that provides a layer of standardisation and automation on top of the disparate looking glass interfaces, to facilitate the collection of measurements from looking glasses. So basically, the point of this talk is not just to inform the community about the effort, but to solicit your feedback about things we may have missed, to make this platform more useful to you, to share technical insights and generally to encourage your engagement.

So, the high‑level goal and principle of Periscope is to provide a unified API, and we are aware that this is a departure from the existing manner of querying looking glasses, which is manual. To avoid conflict we have three basic principles built into Periscope: the first one is to respect the resource limitations and the conservative query rates that the looking glasses are supposed to support; the second is to provide transparency and accountability for the queries that come from the platform; and the third is to strive to be responsive and compliant with the requests made by looking glass providers, because we want them to be happy with the way we use their resources.

So why bother with Periscope, since we already have many measurement platforms like Atlas, which is awesome? The answer is because looking glasses are one of the very few publicly accessible tools that provide direct interfaces to routers, which means that they have advantages that other platforms do not have, like providing access to internal BGP attributes such as local preference; they are often located inside the core, so they give a view from the core instead of from crowd‑sourced measurement agents; and they have vantage points that are not covered by other platforms. So we want to take advantage of these features that looking glasses provide, but they miss a lot of critical features that would allow consistent and systematic use. They have no standardisation in terms of input and output formats; it's hard to discover in the first place which looking glasses exist, in which locations, and which commands they support; and they have high attrition rates, so it's hard to maintain a fresh list even if we make a lot of initial effort. So we try to address these by building a uniform querying API and by implementing indexing and data collection capabilities. This is the overall workflow of Periscope: it sits in between the users and the looking glasses, so it's implemented as an overlay; the looking glasses we do not expect to change ‑‑ users are abstracted from their complexity. The user needs to know only the syntax of the API; the platform is responsible for translating it to the format expected by the different looking glasses, and then translates the output before it returns it back to the user.

This is an overview of the architecture. Basically, the key feature is that it's built on top of a Cloud infrastructure, so for every Periscope user you can have a different client that contacts the looking glass ‑‑ a one‑to‑one mapping between users as seen by Periscope and users as perceived by the looking glasses ‑‑ and we have a centralised controller that is responsible for scheduling, so that we enforce the query rates that we desire. We have two limits to enforce: a user‑specific limit that says how many queries a user can issue, and a per‑looking‑glass limit that says the maximum number of permitted requests it executes over a period of time. And we have a backoff mechanism, so if a looking glass responds with errors, we back off until the errors go away.
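
A minimal sketch of that two‑level rate limiting (illustrative only, not Periscope's actual controller; the limit values are assumptions):

```python
# Sliding-window counter per user plus one per looking glass; a query
# is dispatched only when both windows have room.
import time
from collections import defaultdict, deque

class Limiter:
    def __init__(self, max_requests, window_seconds):
        self.max, self.window = max_requests, window_seconds
        self.hits = defaultdict(deque)      # key -> recent query timestamps

    def _prune(self, key, now):
        q = self.hits[key]
        while q and now - q[0] > self.window:
            q.popleft()                     # forget queries outside the window
        return q

    def has_room(self, key):
        return len(self._prune(key, time.monotonic())) < self.max

    def record(self, key):
        self.hits[key].append(time.monotonic())

per_user = Limiter(max_requests=10, window_seconds=60)   # assumed limits
per_lg = Limiter(max_requests=2, window_seconds=60)

def try_query(user, looking_glass):
    # On looking-glass errors one would additionally back off until the
    # errors go away (not shown).
    if per_user.has_room(user) and per_lg.has_room(looking_glass):
        per_user.record(user)
        per_lg.record(looking_glass)
        return True
    return False
```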

To provide transparency we set some headers in the HTTP requests to identify that they come from Periscope, and to provide accountability we provide the original user's IP, and we also have a static IP that we assign to each user so they can be persistently identified throughout their lifetime in Periscope. Here is the geographic footprint of the looking glasses available through the Periscope platform: they cover almost 500 countries ‑‑ sorry, 500 cities ‑‑ in 77 countries; over 60% of them are both IPv4 and IPv6 enabled, and BGP commands are supported by 75% of the looking glasses.

So the topology observable through the looking glasses is complementary to other platforms, which is very important; even though Atlas is also a very large platform, you can still observe about 20% more links and 70 more ASes. It means that if we combine all the available platforms, you can extend the visible topology, which can be translated into more vantage points and better capabilities for troubleshooting and network diagnostics. Overall, the benefits: we have more VPs to improve our capabilities for troubleshooting, and by having an overlay we have a bird's eye view of what users do, so we can do a lot of clever tricks to improve the load distribution and the utilisation of looking glasses ‑‑ some have high query loads and some lower, so we can distribute queries in an intelligent manner to balance them. And we can avoid double measurements: by making executed queries public, users can consume them instead of just issuing new queries that will just get lost into cyberspace.

So I would really appreciate any insight and contributions, especially on three things. First, the query limits, per user and global: I derived them empirically, because there is no consistent way for looking glasses to express them and not all use the same limits, so please e‑mail me or talk to me later about these limits. Second, opt‑in or opt‑out requests: if you want your looking glass to be included or excluded, let me know and we will configure it accordingly. And third, I would really, really appreciate infrastructure support: VM instances and Cloud resources.

Our goal is to provide this so everybody can use it; it will be publicly accessible after an e‑mail request to this address. We provide documentation for the API that explains how it works, and we try to provide the transparency and accountability that the operators of these resources deserve, so I hope you can try it and find it useful. Thank you very much.

(Applause)

LESLIE CARR: Thank you very much. Questions?

SPEAKER: Hello, Alexander, Open Net. I have been watching this project for some time, but on the CAIDA website it says it's limited availability and download ‑‑

VASILEIOS GIOTSAS: The reason is that we still want to get some more feedback from operators; we still want to tune these rate limits and get more insights on opt‑in and opt‑out requests. I originally gave this presentation at NANOG last week, so this is our official announcement opening it to the community. And I hope that I will get some feedback ‑‑ from NANOG I got a lot of requests for access, but I didn't get so much insight about these variables, so, I don't know, maybe post to some mailing list to help me; even now the configuration is a bit empirical, derived from my experience as a user. I would love to get insight from the operators' side before making it more widely available. But please e‑mail us and we will definitely provide an account to everybody who sends an e‑mail. The limitation is that the query rates are conservative, so it's good for targeted measurements but not for really aggressive queries.

LESLIE CARR: And we have a question from online.

SPEAKER: This is from the RIPE NCC. I have a question from a remote participant called Joe from Google, and the question is as follows: you cache results; what indications of staleness do you propagate to the users?

VASILEIOS GIOTSAS: That is a good question. There is probably more in the NANOG presentation, where I talk more about the caching of the results, so just to provide a bit of background for the audience here: to reduce the query load on a looking glass, whenever we can satisfy a query with a very recent query to the same destination, we use the cached query, and that happens very often because there are some destinations that many users query, like 8.8.8.8, and you have many users querying the same destination at the same time. So, to detect staleness, if we have 10 requests for the same address in a very short time frame, say five minutes, we satisfy three of them with fresh queries, we see if they are the same, and then, you know, we turn to caching for the others, the users whose requests we queued in the database. And then we repeat this process until we find that it changes, and we set it to non‑caching.
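
Roughly, the logic described could look like this; an illustrative sketch where the window and sample counts are the ones mentioned and everything else is assumed:

```python
# Within a short burst of identical queries, answer a few from the
# looking glass, and serve the rest from cache only while those fresh
# results agree with each other.
import time

WINDOW = 300        # seconds: "a very short time frame, say five minutes"
FRESH_SAMPLES = 3   # how many requests in the burst are executed for real

class QueryCache:
    def __init__(self):
        self.state = {}     # target -> (burst_start, list of fresh results)

    def handle(self, target, run_query):
        now = time.monotonic()
        start, results = self.state.get(target, (now, []))
        if now - start > WINDOW:
            start, results = now, []        # burst expired: start over
        if len(results) < FRESH_SAMPLES or len(set(results)) > 1:
            out = run_query(target)         # query the looking glass
            results.append(out)             # results assumed hashable text
        else:
            out = results[-1]               # stable: serve cached result
        self.state[target] = (start, results)
        return out
```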

LESLIE CARR: Thank you very much. And next ‑‑ oh, yes.

(Applause)

LESLIE CARR: And next, Roy is joining us from ICANN to tell us how rolling the root DNSSEC signing key is not going to break the Internet.

ROY ARENDS: Thank you. I work for ICANN. Next year, ICANN is going to change the root zone key signing key. This is a big thing; I am going to call out a few dates and point you to a few documents that we have online that I would like you to read. What is this about? DNSSEC in the root zone involves two operators: ICANN and Verisign, who signed the root zone in 2010. The root zone is signed by a zone signing key; the zone signing key is managed by Verisign and they roll it over every three months. The key signing key is managed by ICANN, and it hasn't really been rolled over since 2010, and this is what it's all about. What we are not going to change ‑‑ I can't see Geoff Huston ‑‑ are the parameters: it stays an RSA‑SHA‑256 key and it's going to be a 2048‑bit key. Those parameters will stay the same; it's just the key itself that is going to be rolled over.

Why do this now? The thing is, secrets don't last forever, and part of the KSK is the secret part we have in the key management facility. If you have PGP keys, they have an end date; if you have browser certificates, it doesn't matter how bad or good they are, they have an end date; and it's just good practice to do this for DNSSEC keys as well. Additionally, we want to do this while we can, which is now ‑‑ not when we have to, when, for instance, things have been compromised or there is evidence towards that. There is no evidence of that, things haven't been compromised, and we feel perfectly fine to do this now, so we will.

Also, and this is very important, we promised to do so in 2010. There is a DNSSEC Policy and Practice Statement ‑‑ the short name is DPS ‑‑ and it states that we are going to do this in five years ‑‑ sorry, after five years of operation. And this is a document basically produced with the help of the community, and part of that community is you.

So we have five documents, and I will go over them one by one ‑‑ I won't go into detail on every document. Let me start with the operational implementation plan. This states what we are going to do, how we are going to do it and, importantly, when we are going to do it. And when we do this ‑‑ and that includes making changes to the DNS ‑‑ we would like to monitor the L Root and also get traffic from the B Root servers to see what impact the change has on DNSSEC validators. If this impact is massive, if things break massively ‑‑ and we don't expect this at all ‑‑ we want to have a backup plan. This states what we are going to do: are we going to roll back to a previously known good state, or, if we know that when we roll over to the next stage things will break, do we want to remain in a certain state. That is what the backup plan tells us. However, there is a slight handicap here: we can only use the KSK during a key ceremony, and that happens four times a year, every three months. And when we introduce the new KSK into the root zone, it's not at the exact same moment as we have the key ceremony, so we need to prepare these backup plans and these sets of signatures, if you will, in advance. And if we are going to do that, we need to change the systems as well, and that is why we have a rollover systems test plan. When we get signing requests from Verisign in order to sign their ZSK, and of course to introduce our KSK, that goes through systems that we operate at ICANN, and those need to be tested. Lastly, we want the operator community ‑‑ sorry, the developers' community ‑‑ to test their systems as well, and therefore we have a rollover test plan.

Dates to watch, and this is important, this is, to me, very important:

I will start with the middle date, October 11th, 2017. This is when we are going to stop using the current DNSSEC KSK. If you run a validator, or work for a company that runs a validator, and the current key is configured in a basically static way ‑‑ if it doesn't automatically update itself via RFC 5011 and you don't have any other mechanism to automatically update it ‑‑ things will fail if you don't change it to the new key.

Another important date is September 19th, 2017, a little bit earlier. This is when the response size for a DNSKEY request from a DNSSEC‑enabled resolver is going to go over a certain amount. This is by far the largest DNSKEY response we have ever had, and I will highlight a little what might go wrong. For instance, if you deploy a validator in an environment that can't handle fragments, or that will fragment while a filter drops fragments on the floor, you are going to have a problem; or, even worse, if you do not allow DNS fallback over TCP, or do not allow TCP at all for DNS, you also have a problem.
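
A quick way to check your own resolver path for exactly this failure mode, sketched with standard dig options (assuming dig is installed; this is an illustration, not ICANN tooling):

```python
# Can your path deliver a large root DNSKEY response, and does the TCP
# fall-back work?
import subprocess

def dig(*args):
    out = subprocess.run(["dig", *args], capture_output=True,
                         text=True, timeout=10)
    return out.stdout

# Force a small UDP buffer and suppress dig's automatic TCP retry
# (+ignore), so truncation becomes visible as a "tc" flag.
udp = dig("+dnssec", "+ignore", "+bufsize=512", ".", "DNSKEY")
tcp = dig("+dnssec", "+tcp", ".", "DNSKEY")

udp_truncated = any(" tc" in line for line in udp.splitlines()
                    if line.startswith(";; flags:"))
tcp_works = ";; ANSWER SECTION:" in tcp

# Truncated UDP is fine *only* if the TCP fall-back path actually works.
print("UDP truncated at 512 bytes:", udp_truncated)
print("TCP fall-back delivers the keys:", tcp_works)
```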

Then the third date, below, is when the response size grows even larger, but by only 11 bytes. So if you solved the problem for September 19th, you won't have a problem on January 11th. Again, October 11th is the big date: we are going to stop using the old key and start using the new key.

We have some tools and testbeds, available on the site I previously mentioned ‑‑ I think it's icann.org/kskroll. There are some troubleshooting aids; what we are going to do is talk to developers to have the new key set bundled with their software. There are testbeds for code developers ‑‑ we are talking to the folks behind Unbound, BIND, PowerDNS, etc. If you want to test your code against a live system, there are two live keyroll systems, keyroll.systems and two dash serve.net. And we have a testbed that we are currently building for validating resolver operators, to see if their current configuration works, if their RFC 5011 compliance actually works, and this is planned for the end of this year.

For more information about this, join any of these media things: we have a mailing list, and you can follow us on Twitter ‑‑ if they are not under attack, if it actually works. Or visit the web page. If you have any questions, feel free to ask me here; I will be around all week to talk about this. It's the main reason I am here.

And one more thing I want to point out: this Thursday, I think it's the 27th of October, probably around 11:00 UTC ‑‑ sorry, probably around 5 o'clock in the evening UTC ‑‑ we are going to generate this new KSK in the key management facility in Virginia. So that is going to be Thursday. We will have TCR representatives there to make sure all the processes work correctly. But that is an interesting date: we are going to generate this new key. Thank you.

(Applause)

BRIAN NISBET: Thank you very much. You are going to have a year to try and get that key and crack it.

JIM REID: Computer programmer from Scotland. A couple of questions for you. The first one is on the dates you published: are these absolute dates that are cast in stone, and would you revise them if any problems or difficulties pop up later on in the process?

ROY ARENDS: Currently, we see them as absolute. We have no reason to believe anything will go wrong. However, if something will go wrong and we have to remain in a certain state because we know if we have to roll back or know we can't roll forward, yes, then that date will change or might not happen at all. But for now, the 11th of October, that is set in stone. It won't arbitrarily change.

LESLIE CARR: Only time for one more question.

WARREN: There has been a lot of DNSSEC outreach and pushing and stuff like that. Are you planning on publicising this much wider, in CNET and The Register and all the other places like that?

ROY ARENDS: The short answer is yes, we are planning to do this. We have a whole team at ICANN, a communications team, that is busy with this. We have a communications plan that I did not list in the set of plans because it's not technical. Communication is what we are doing here; the short answer is yes.

BRIAN NISBET: From a Google point of view, given it's on Google Plus, all of your engineers will be able to read it, but other people may need to know as well.

ROY ARENDS: It was Warren Kumari who is responsible for one of the testbeds here ‑‑ I think it's keyroll.systems that he is responsible for ‑‑ so thank you, Warren, for doing that.

(Applause)

BRIAN NISBET: Things like communications plans are probably something we should be talking about more at this point in time, as well as the technical aspects of all of these things.

So that is our plenary session, basically done. I would like to remind you, if you wish to nominate yourself or a friend ‑‑ preferably not unknowingly ‑‑ for the RIPE PC, please look at the web page; the information is on the rotating slides as well, around the place, but mail PC [at] ripe [dot] net. And also, please rate the talks. There are a number of evening things going on, they are all in your information: there is the meet‑the‑board session, if you haven't already met them all, at 19:30, and then the welcome drinks at 20:00. So thank you very much. Have a good evening.
(Applause)