
DNS Working Group

27 October 2016

At 5 p.m.:

CHAIR: Good afternoon, ladies and gentlemen. We can resume with session 2 of the RIPE 73 DNS Working Group. A quick reminder of the microphone etiquette: we are webcasting and the session is being recorded, so if you could state your name and affiliation, thank you. We would like to take this opportunity now, in this session of the Working Group ‑‑ we are exchanging batons between Working Group Chairs. We welcome Shane Kerr as our new Chair; Shane, if you could just come forward and wave to the boys and girls so we can all see what you look like. We are all happy to welcome Shane. There is a bit of sadness as we have to say farewell, but not goodbye, to Jim; I am sure he is going to keep on attending.
(Applause)

On behalf of the Working Group and the broader community, we just wanted to say a very warm thank you for your 15 years of service to the Working Group.

JIM REID: It's time for a change so over to the new guys.
(Applause)

JAAP AKKERHUIS: Next year it's my turn ‑‑ just to remind you how this happened: we decided collectively that we should do a rollover of the Chairs, and the first was Peter over there ‑‑ actually we flipped a coin for these two ‑‑ and I am the last one to ‑‑ but that will be next year, and then the pattern will be that every year at least one rollover will happen. So that is part of the process.

SHANE KERR: All right, I am supposed to say a little introduction about myself, so if you aren't on the mailing list, that is where the discussion about who becomes Working Group Chair took place, so I gave a brief introduction there but I think I will just say that I have been involved with DNS a long time and I love it, I love the DNS community and the RIPE community and I hope that we can have as good or even better content going forward so that's it.

CHAIR: Proceeding with our regular programming, Johan Ihrén is going to talk to us about the changing DNS market.

JOHAN IHRÉN: Thank you. So how do I flip this? So, this is a rehash of something I did three or four weeks ago in Sweden, and you will see why I mentioned that on the last slide. Once upon a time DNS was simple and considered to be sort of the solution to the problem, and the only thing you needed was a DNS server and then you were basically all set. Those were the good old days. Then over time, things became more and more complicated, people started doing stupid things, sometimes with the best of intentions, as in good guys doing everything in a much more complicated way than it was originally intended to be, and also the bad guys showing up doing what bad guys always do. And the consequence of this is that DNS is becoming more and more complicated and more costly to operate, and this is really a problem because, at the same time, we are seeing problems finding the actual funding to finance this operation. When there is a mismatch between the cost of doing something and the revenue available, it starts becoming a problem over time. So why are the costs of doing DNS going up? Well, there is a whole bunch of changes that occur; some of them are technical changes and some are business changes or market changes. If we just look at the technical stuff first, we have Anycast more or less everywhere now: there has been Anycast in the root for a long time, we have Anycast for the TLDs, also for a long time, and we are seeing massive amounts of Anycast in the enterprise sector these days. It has all sorts of advantages and it's nice to have, etc., but it's not exactly trivial; there are lots of costs in actually operating Anycast. We have seen more and more DDoS attacks, and that obviously goes hand in hand with Anycast deployments. Because of the scale of stuff now, where you have dozens of servers or hundreds, and lots of things and especially lots of zones, we are sort of moving away from the old more or less static configuration files, and configuration is becoming a dynamic thing that happens all the time; you are not exactly sitting there with Emacs any more hacking on your config.

So system complexity is really going up here. And because the system complexity is going up and you are no longer tweaking your name server configuration through named.conf and your favourite editor, you are configuring stuff in other ways, and you are doing this through databases to a large extent and through various APIs now. If you want to get access to DNS servers from some sort of provider for your 200 zones, typically the access is an API, and that is proprietary and provider specific, and that really changes what you do. In parallel with that we have another very interesting development, which is, let's call it scriptable name servers, where you actually sort of talk interactively to your name server because it's running a ‑‑ and Ondrej showed us that in one of their name servers just before the break. That is very, very interesting, but it's not exactly decreasing the amount of interesting things you can do with a name server; there is lots of interesting new complexity coming out of that type of stuff. Also, it sort of breaks the old assumption of being able to deduce the name server behaviour from some sort of static configuration. If I look at my named.conf, I know what the behaviour of the system will be. No, that is no longer correct, because we don't have a static configuration, we have a running system that has to be modified in mid‑flight. So, APIs, lots of them; there is not just one API, there are provisioning APIs for adding and removing zones, status APIs to determine the state of various zones, and statistics and management APIs and all sorts of things that are there now to keep the right stuff in the right place at the right time, and also to be able to track that the right stuff actually is in the right place at the right time. We have behaviour modifying stuff: if you look at name servers today ‑‑ once upon a time a typical name server would be something that responded to a query out of the so‑called classical three‑tuple, which is the query name, query class and query type, and the answer to the question was completely determined by these three things. That is no longer true. We have response policy zones, we have geolocation stuff, we have CDNs tweaking the response depending on exactly the time of day and the market the query came from, etc. So, lots of things are also happening here that make it more difficult to verify correctness, as in: is this the right response to that query for that zone?
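A minimal sketch (not from the talk; assumes the dnspython library and an arbitrary public resolver) of the classical three‑tuple that used to fully determine a response:

```python
# Each DNS query is identified by the classical three-tuple: qname, qclass, qtype.
# Historically the answer was a pure function of those three values; with RPZ,
# geolocation and CDN logic, the same tuple can now get different answers.
import dns.message
import dns.query
import dns.rdataclass
import dns.rdatatype

query = dns.message.make_query("www.ripe.net", "A")   # qclass defaults to IN
q = query.question[0]
print(q.name, dns.rdataclass.to_text(q.rdclass), dns.rdatatype.to_text(q.rdtype))

response = dns.query.udp(query, "8.8.8.8", timeout=2)  # any recursive resolver will do
print(response.answer)
```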

So that is about the technical stuff. What about the market changes, then? Well, the market change is primarily one very large thing, and the very large thing here is that the cost of Anycast is basically dropping towards the floor. And what happens when the cost of Anycast drops almost to the floor? Well, what happens is that smaller operators will be edged out, because the only way to make any sort of revenue here is if you are really, really large and global in scale. And what will the consequence of that be? What is really interesting to me is that when you look at how technology changes ‑‑ and there is something that is, at least in Swedish, called technical history ‑‑ you sort of look at how new technology is being taken up, and then it matures and really, really quickly goes into the market and reaches a tipping point, and after that everybody needs to do it. So it can be very slow in the beginning, but after the tipping point it just happens for everyone, and this is true about really every technology at some stage in its evolution: it happened with phones, with e‑mail, with credit cards, with cars, it happened with Internet access, et cetera, et cetera. In the beginning, a few used whatever the new thing was, and then it took off, and after taking off and reaching the tipping point, everyone did it that way. And the question now is, are we reaching the point where DNS service, in particular Anycast DNS service, is reaching this tipping point where basically everyone will do it? Well, why would everyone do it? Do I actually need global name service for my website which only contains pictures of me, my dog and my kids? Well, presumably not, but if it's cheaper to get global Anycast for my zone than to sort out by e‑mail a friendly arrangement with two colleagues running slaves for me, we have reached a point where this will be done for every single zone. And I think we are really close to that point. That has certain advantages. Among the advantages is that we see professionalisation of the market; the quality of DNS goes up, that is good; because we have more servers, the general DDoS resilience will go up; you will see fewer outages from broken name servers, etc. Excellent. We will expose previously hidden breakage more directly. Broken zones are one example. In our case, because we offer Anycast service to, well, zone owners around the world, as long as our only customers were TLDs the zones were okay, but when you get into the enterprise market you start immediately seeing zones that are not okay. So why were those zones even served before? My guess is because whatever they were using to serve those zones before they sent them to us was a really old and crappy name server that didn't check the content of the zone or the semantics of the zone. So getting that exposed by moving over to professional services is of course a good thing. Are there any disadvantages then? Well, among the disadvantages is that someone has to pay for moving all those bits around the planet for my small zone that only contains a website with pictures of me and my family. And it will be rather difficult to recover that cost. So when we have millions and millions of zones that are moving all around the planet, in spite of actually not having a lot of queries from all around the planet, there is some sort of inefficiency in the system that someone will have to pay for.
The next thing: we do have a community that understands how to configure name servers; we do have a community where, well, at least some of us sort of eat named.conf for breakfast, and people can help with such things. But over time, if everything moves into professional services and you push your zone to a provider through an API, the name server configuration part sort of disappears, and this turns into API configuration and adapting your local configuration environment to talk to a particular vendor's API of some sort. And yeah, I consider that to be a disadvantage; some would possibly consider it to be a benefit.

What else is there? Well, do I run a risk of monoculture here? There is a very strong tradition of OpenSource software in the DNS community: we have lots of OpenSource name servers and most of the public world has traditionally been run on OpenSource software. Everyone and their dog, and in some cases even the cat, is running BIND; many, many zones around the world are using NSD; Unbound is a very widely used name server; Knot DNS is making inroads, etc. But as this moves into more of a professional service where everyone is just throwing their zones over the wall to one of the DNS providers, we are switching into a closed source environment, and the OpenSource tradition is disappearing to some extent. Well, some implementations may die and others not ‑‑ perhaps not really a critical thing ‑‑ but I am also a bit concerned about when these providers try to distinguish themselves from each other: what do they do? The thing you do is you provide something in addition to standard DNS, so you provide various tweaks and things that enhance your product compared to the competition's product, and that means all the stuff we talked about before, policy based responses and geolocation based responses, etc., becoming less standard DNS and more vendor A's proprietary DNS or vendor B's, and they are different, and when you get dependent on a feature in one you will have a harder time switching to the other. Is this a problem? Well, it could be. Another thing to consider is the consolidation into fewer players. So, just a couple of weeks before the Dyn attack, if you looked at the enterprise market, the discussion was: where do I get the most service, most sites, most performance, whatever, for a certain amount of money? And after the Dyn attack, at least in Sweden, we see much more discussion about whether I should have two providers or three. And that is not necessarily a bad idea, but if we fast‑forward a bit and look some years into the future and say all the small providers have been edged out and we have a limited number of really massive global players left, and the DDoS attacks are still making the rounds around the Internet and they have grown up, so instead of hitting one provider perhaps they hit three at a time ‑‑ and I have service from three providers, because Anycast by then will be really, really cheap, so I can afford three times very cheap ‑‑ yeah, but the attacks hit multiple providers at the same time, and the amount of collateral damage across these providers, that have massive numbers of zones each, will be rather large.

So, the question here is: could it be that the trend towards a professional service with fewer providers, and the trend of larger and larger DDoS attacks and less and less revenue per zone, are sort of not really pointing in the right direction for a stable future here?

So, to just wrap this up. This is a slide I used a year ago, in a completely different presentation in a different forum, where I tried to predict what would happen in the future. That is always very dangerous, because regardless of what you choose to predict it will be wrong, and in this case it was, well, mostly wrong, so I have updated that slide. And if you look at what I wrote ‑‑ well, DNS service, more Anycast, etc. ‑‑ I think it's important to add that yes, it's more complicated, but customers won't care because it's no longer their problem; that is obviously among the advantages of Anycast from the customer point of view. And then you have the other various predictions, where I no longer think we will just see a marginalisation of the smaller players; I think we will actually see them being completely erased in five years. The costs are dropping so far that there will be no way whatsoever of earning a living from providing small scale Anycast or small scale DNS. It will not work. And then you stop doing it. And I spoke about the concerns about fewer providers with really, really large collections of zones, collections of customers, and the consequences of that.

So, I will leave it with this, and ask whether anyone has similar thoughts about the future, or whether I am completely off here in my predictions for the future.

DAVE KNIGHT: Thank you. Any questions, comments?

SPEAKER: Hi, thanks, I like this kind of prediction of things that are going on, it gives us a little bit of ‑‑

JOHAN IHRÉN: Next year the update will be in the third colour because this will be wrong.

SPEAKER: Smaller font and, I don't know, bigger screens. I think I disagree with your thought that it will reduce the number of OpenSource implementations, and that things will go closed source. Yes, players will have specific logic next to the standard DNS that they try to sell, but many zones don't really need that; they just want a name server, and the extra logic will cost more money and they don't want that, just the cheap stuff. And then players like us will keep using OpenSource for these things, because we do believe that OpenSource benefits and improves the quality of the product, the DNS product. So, yeah, I think I would like to disagree with that statement.

JOHAN IHRÉN: Well, I think you and I look at this very much in the same way, but if almost all the zones move to some sort of professional provider and the interface is an API, it no longer matters whether you are using an OpenSource implementation on the inside or not. From the point of view of the world it's closed source. But let's not drag that out too long. I am just saying, from the point of view of the world, it changes from named.conf to an API. We will see what happens in a year, yes.

SPEAKER: I have to rephrase my words, so we will do that in the hallway.

KURTIS LINDQVIST: So, this is not technically an observation on DNS, but on your second bullet point, I think that is more an effect of general outsourcing and things becoming more professional rather than DNS as such; not very many enterprises sit there and think about a DNS department and how they are going to handle it, I think they will just outsource IT.

JOHAN IHRÉN: Completely agree.

SPEAKER: CloudFlare. I like this. I think you are on the right track. Is it a good thing, is it a bad thing? I don't know. For people like you and me, who used to live in the DNS world, maybe this is a bad thing because we are losing jobs.

JOHAN IHRÉN: I still live there. Where are you?

SPEAKER: I am there, too. Maybe ‑‑ yes. But one of the things that has changed radically in relation to OpenSource over the last decade, and I totally underestimated it until recently, is that it has become so much easier to write new DNS tools because of libraries and better languages. So, we have BIND with how many hundreds of thousands of lines? That could be replicated now in a new language in 10, 15,000 lines of code, if we ‑‑ throw out all the unnecessary complexities that have gone into it over the years. So, if you want to throw up an authoritative DNS server ‑‑ I rolled one that did a very specific thing in about six hours, but it is not a general purpose one. Writing a resolver is the hardest part today in the DNS environment. That is where we have the least diversity. That is what everybody relies on. That is the place I would be worried about. Not the publication of the data. And you did not mention resolvers.
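As a rough illustration of that point (a sketch, not the speaker's six‑hour server; it assumes the dnspython library), a tiny special‑purpose authoritative responder can indeed fit in a couple of dozen lines:

```python
# Minimal special-purpose authoritative UDP responder (illustrative only):
# answers every query with a fixed A record for the queried name.
import socket

import dns.flags
import dns.message
import dns.rrset

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 5353))          # unprivileged port, for testing only

while True:
    wire, client = sock.recvfrom(4096)
    query = dns.message.from_wire(wire)
    reply = dns.message.make_response(query)
    reply.flags |= dns.flags.AA         # mark the answer as authoritative
    qname = query.question[0].name
    # fixed answer for every query; a real server would look this up in zone data
    reply.answer.append(dns.rrset.from_text(str(qname), 60, "IN", "A", "192.0.2.1"))
    sock.sendto(reply.to_wire(), client)
```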

JOHAN IHRÉN: No, that is true.

SPEAKER: Right now, what, our good friend from Australia says that 20‑some percent of the world uses one resolver, okay? Is that good? Maybe; it's not bad. What is the rest of the world using? We have all kinds of tricks being played on resolvers in various states and by commercial entities. That is where the biggest threat to the ecosystem is.

JOHAN IHRÉN: I think I agree with that.

DAVE KNIGHT: Okay. Thank you, Johan.

(Applause)

DAVE KNIGHT: Up next we have Sandoche Balakrichenan to talk about Zonemaster.

SANDOCHE BALAKRICHENAN: Hello everybody. My name is Sandoche Balakrichenan, I work at the French registry, AFNIC. Here I am going to present a project called Zonemaster, which we have been developing with the Swedish registry, IIS, as you can see in the logo. We have been working on this project for at least two years now, and the objective of this presentation will be to give a background of what we are doing, how it is useful for different people, and what the objectives are. So the agenda goes like this: I know that, even though here we have the DNS Working Group and most of you know what a DNS validation tool is, I thought it's better to start with a brief introduction. Then I will have some slides about why we decided to start working on this tool, and then the documentation involved in this project, the different functional blocks, and how it could be used by different people: domain users, like normal users, DNS administrators, DNS research people, and entities like registries, registrars or a company with a portfolio of domain names. Then one or two features that will be useful for everybody, and what plans IIS and AFNIC as a group have for Zonemaster in the future.

I like this idea of a health check, because it's easy to explain to people why we need tests for DNS configuration. For example, when we have a fever we test with a thermometer, and when the temperature is above a threshold, like 38 degrees, we say that the person has a fever; the threshold of 38 degrees Celsius is given by some scientific data, and a body temperature above that threshold is fever. Similarly, for DNS delegation testing we need to have a reference ‑‑ when we do tests we need to rely on documents which are scientific, which are proven, for example RFCs or BCP documents like we have in RIPE. The next part is that we can have specific tests ‑‑ for example, you can have tests for diabetes or blood pressure ‑‑ and you can also have a complete health check. In the same way, for DNS you can have specific tests, like for connectivity, but you can also have a comprehensive test, testing all the details of a DNS delegation configuration; that is what I call a comprehensive test. So there are tools you can use for specific tests, but for comprehensive tests we have the tools that we have put down here; there are other tools which you pay for, but here we concentrate only on OpenSource, free tools. The well‑known tools which do a comprehensive check of a DNS zone are ZoneCheck, DNSCheck and DNSViz. DNSViz is maintained by Verisign now; it is a very good tool which gives you a graphical representation once you test the domain. The other two, DNSCheck and ZoneCheck, were developed by IIS and AFNIC before. ZoneCheck and DNSCheck are currently not being maintained, so we discussed together whether we could upgrade these existing tools or develop a tool from scratch. When we had this discussion, when we concentrated on ZoneCheck we had a problem because it was built in ‑‑ we didn't have any resources to maintain this, so we took it out of our consideration. For DNSCheck, we tried to look at the following points: modularity, extensibility, optimisation, interfaces and run‑time selection. When we discussed these, the team came to the conclusion that it is better to develop from scratch rather than upgrading an existing tool. This is why we have a new project called Zonemaster, and not just because we wanted to have something new.

Keeping the same analogy with human beings: when you say health check, it varies from hospital to hospital. Most of the tests are the same, but one or two tests differ from one hospital to another. Similarly, there is no specific set of test cases of which we can say that this is how comprehensive testing of a DNS zone should be done. So we checked, and when we didn't find anything specific, what we did was take all the requirements that exist in DNSCheck and ZoneCheck, remove what was deprecated, and consolidate them. Then, IANA has certain test cases in its policy which say this test should be done for DNS delegation validation ‑‑ these tests are being done for new gTLD domains ‑‑ and we integrated them into the requirements. There were some inputs from within the team and from external people saying, okay, there is a new RFC, why don't you include it as a test case, so we updated all these tests and consolidated them.

So, here, we put this whole set of test requirements on GitHub, which is public, and we grouped them into about 80 requirements, and we categorised each of these requirements into different categories such as connectivity, address, syntax, delegation, etc., so finally we had nine categories covering the 80 requirements.

Then, we wanted to write test cases for each requirement, and for each requirement we added a test case ID. There were some cases where you can group more than one requirement into one test, and for others we had a test case ID for each requirement. So finally, from the 80 requirements that we have, we ended up with 56 test cases. And for each test case, we wrote the specification of how the test should be run. Here, as you can see, I have a test case: we have a test case identifier, then we have an objective, why this test is being done, and we have a reference ‑‑ a reference to the RFC ‑‑ then we have an input, what should be given as input for the test, then we have a specific description of how the test case should be run, then the output, where there should not be any false positive or negative errors. Then we also have special requirements ‑‑ if IPv4 or IPv6 is disabled, then the test should take that into account and should not ‑‑ and finally we have inter‑case dependencies, such as whether this test is dependent upon any other tests.

So, like this, for all 56 test cases we have a complete specification of how the test should be run.
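For illustration only ‑‑ the real specifications are documents on GitHub, not code, and every value below is hypothetical ‑‑ the fields just described could be sketched as a simple structure:

```python
# Hypothetical sketch of the fields a Zonemaster test case specification carries,
# following the list in the talk; names and values are made up for illustration.
test_case = {
    "id": "CONNECTIVITY01",                    # test case identifier
    "objective": "Check that each name server is reachable over UDP",
    "references": ["RFC 1035"],                # the scientific/proven documents
    "inputs": ["domain name", "name server list"],
    "procedure": "Query each listed name server and wait for a reply",
    "outputs": "Reachable or not, with no false positive or negative errors",
    "special_requirements": "Skip a transport if IPv4 or IPv6 is disabled",
    "dependencies": [],                        # inter-case dependencies
}
```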

And for information, all of these are public on GitHub under the Creative Commons 4.0 licence, and they can be reused, they can be used by anybody; it's completely free and OpenSource.

Now, as in the case of the health check for humans, we discussed among ourselves whether it is only IIS and AFNIC that consider these to be the comprehensive test cases for DNS, or whether the community also accepts that. So, to have consultation with the community, we first had a presentation within CENTR, the Council of European National Top Level Domain Registries, which gathers the registries in Europe, and we asked them whether it would be interesting for them to discuss and pursue a document. There was interest, and within CENTR we created a Working Group called TRTF, and the purpose of this was to set up a complete set of requirements for comprehensive DNS testing and create a set of specifications for them, to be considered an informative document. But we had a low response on the mailing list; there was interest in the beginning, and the work was mostly done by Patrik Fältström, who was working with the Zonemaster team, and finally, in the absence of any document or study, we thought we should stop this Working Group and then go to a larger scope. So, at the beginning we asked at DNSOP about their interest; there was some scepticism ‑‑ earlier attempts did not succeed ‑‑ but there was also interest. The point was that there has never been an approach, as far as we know, with such exhaustive documentation. And then we made a presentation at the DNSOP Working Group and, similarly, we had some scepticism but there was strong support for adopting this as a Working Group document, and currently the draft is a Working Group document in DNSOP, and there are already inputs on the mailing list regarding TTL values, regarding how strong the ‑‑ should be in the document, etc. This is one of the outputs of the Zonemaster project. It will no longer be driven only by the Zonemaster team, but we are sure that any modification or update to this document will be reported back to the Zonemaster project.

Now, for somebody who wants to test their domain, the first point will be to use the web interface that we provide for Zonemaster, zonemaster.net. You just input your domain name, click on the button, it runs all the test cases, and then we have an output. If everything is green, we are happy, and this is for the basic user. Afterwards we have some other options, such as exporting the results in text or SGML format, and you also have a history, so you can get a list of all the tests run on the same domain previously. All these results are stored in the database that is maintained by AFNIC and IIS.

A slightly more advanced use is that you can use the same interface for pre‑delegation testing: for example, if the domain is not yet delegated but you already have the name servers on the Internet, you can test whether the zones are okay or not using this interface. Also, you can test only over IPv4 or IPv6, and beyond that you have the possibility of different test profiles. When we say different test profiles: you can have, for example, the default profile, which includes all the test cases that we run, and the IANA profile, which includes the test cases proposed by IANA. So the idea here is that you can have different test profiles ‑‑ the list of tests to be run ‑‑ and also, for each of the tests, you can have different severity levels; for example, we have levels like info, warning, critical, etc., and those can be changed depending upon your test profile.

And here you have a pre‑delegated domain where you can ‑‑ I was just talking about it earlier. You can also test various algorithms with the ‑‑ so these are the use cases anyone can have with the Zonemaster interface, and you don't have to install anything or know anything about the project or the software. But, going further, if you want to know about the code, what is happening here in the Zonemaster project: the code that is the brain of the tool is the engine. The engine is a Perl library and has all 56 test cases, and each of these is like a plug‑in, so if you want to add a new test case you can add it as a plug‑in to the framework. In order to run these test cases, you have to give input to the framework, and it has different objects, like configuration ‑‑ that is where, as I said, it will take the profile into consideration as configuration ‑‑ and there is a logging format, and you can have translations, so currently we support English, French and Swedish. We also have our own resolver, which is built on Net::LDNS, and that is useful for ‑‑ results.

We have a command line interface and here ‑‑ I am not sure whether you will be able to read all of it ‑‑ are the different usages that you can have. In the command line interface you give as input a domain name and different flags, so you can have your own configuration, you can have your own profile, you can look at the list of tests that are being run, etc.
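As a hedged example of driving that CLI from a script (only the basic invocation with a domain name comes from the talk; check `zonemaster-cli --help` for the actual flags before adding any):

```python
# Minimal sketch: run zonemaster-cli for one domain and capture its output.
# Assumes the Zonemaster engine and CLI are installed and on the PATH.
import subprocess

result = subprocess.run(
    ["zonemaster-cli", "example.org"],   # a domain name as input, per the talk
    capture_output=True,
    text=True,
)
print(result.stdout)                     # per-test messages and severity levels
```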

So these are the advantages of the command line interface with respect to the GUI. We have an option called filter, which has been requested by RIPE. The filter option is something like this, to compare it to the health check: if you go for a complete health check but you had a slice of cake half an hour before, you know that the sugar level will be high, so you don't have to show that ‑‑ you downgrade it. Similarly, in the case of filter, you can downgrade certain results for the name servers of a domain name. That is an option that is also provided, and it can also be run using the CLI.

To use the CLI you have to install the engine and the CLI, both components, and for installing there is documentation on GitHub; it's just as simple as the instructions you can see here, you just copy and paste and it's installed. Currently we support Debian, CentOS and FreeBSD. I know there are registries which run a health check of their zone, of all the domains in the zone, and then refer back to earlier tests, so in that case you need a back end. We have the software for the back end also on GitHub, but in order to have it for yourself you need to install it; that would allow you to see the history from your own database. The back end has an interface; currently there is database support for MySQL and PostgreSQL, and instead of one domain you can run a batch of domains at a time.

And we also have the software for the GUI. Here, as you can see, you have to install the back end for the GUI, that is on the left, or you can change the GUI as per your needs, like what CIRA has done using the Zonemaster code.

We had the latest full release just 12 days ago. Normally the idea is to have three releases a year, but meanwhile we can have separate releases for individual components, and as per our plans, IIS and AFNIC plan to support Zonemaster for another two years. What we want here is for Zonemaster to be the reference for comprehensive testing of DNS configuration, so we want the DNS community to use the tool: report to us if you have issues, add new tests if you want, create your own applications based on Zonemaster ‑‑ like Pingdom; Patrik Fältström has created his own for scanning all the new gTLDs ‑‑ and if you want to add a new translation file you can add that. We also need beta testers and mentors to help us make this tool a reference tool. So these are the useful links that you have, all of them on GitHub, and if you have any developer issues you can contact us on zonemaster-devel; for user discussion you can contact us on the users list or at zonemaster.net. Thank you very much.
(Applause)

DAVE KNIGHT: Thanks very much. Are there any questions for Sandoche?

JAAP AKKERHUIS: Not a question but more a comment. I actually have a port ready to go into the FreeBSD ports and I was waiting for the latest release, and I am surprised to see that it has apparently happened, because it was nowhere announced, so that is kind of strange. The other thing is that, from the way Zonemaster is built, there is an awful lot of dependencies, and for some of them I don't understand why they are there ‑‑ I mean, some of the requirements duplicate other requirements ‑‑ and it's kind of a pain to install. I talked with Matt about that and he said he was going to try to trim it down in the next release. But seeing that there is now an official release again, I will try to update it and see how far I get. But if people want to try it, I am happy to give them my port to test.

SANDOCHE BALAKRICHENAN: At the beginning, we had the idea that we would not package this tool ourselves; we thought we would help packagers but not package the tool from our own end. But finally, there were multiple requests to package the tool to make it easier for users, so that is why we contacted maintainers like you, and for other operating systems. And we are already working on reducing the dependencies, but we had one, I think it is LDNS, that was causing the main issue; we built it as our own library because we had difficulty in getting support from, I think it's NLnet Labs which, if I am not wrong, initially developed LDNS, isn't it?

JAAP AKKERHUIS: I am unsure what you mean ‑‑ we probably should take this offline, but it was kind of ‑‑ I was hoping for a release and it has been sitting in my to‑be‑released box for more than a year now. The other thing is, the documentation is quite poor for certain parts of the system. I hope ‑‑ I hope that is improved, but ‑‑

SANDOCHE BALAKRICHENAN: Yes, I think for certain components the documentation is not up to date; we should improve that.

ANAND BUDDHDEV: From the RIPE NCC. So, I mentioned in my presentation earlier that we were switching to Zonemaster, and one of the things we needed to do was to package it, of course, for our own uses, so this is a comment on Jaap's comment, actually: yes, Zonemaster consists of lots and lots of modules and dependencies, and for everyone who works with, you know, distributions, Linux distributions, this is just hell. The way we have solved it is by using a feature of Perl called local::lib, by stashing everything Zonemaster needs into one giant library directory that lives independently of the OS, so that we don't mess around with the OS modules and things like that. Another option, which some people might scream at me for, is to stuff everything into a Docker container and just run that and not touch the base OS. So these are two approaches people could use, possibly, to avoid this dependency hell that Zonemaster gives. It's a good tool, but this is one of the downsides Zonemaster presents for system administrators.

SANDOCHE BALAKRICHENAN: Thank you very much for this input and I will take that back into consideration, we will see whether we can do something or it's too late.

JAAP AKKERHUIS: In creating those packages for FreeBSD, it took ‑‑ two days to sort out the dependencies, and the port still needs three other packages which are not supported by FreeBSD from the start, and I want to get rid of that. I will try the new release.

DAVE KNIGHT: Thanks. Up next we have ‑‑
(Applause)

Up next is Shane Kerr, going to talk to us about DNS for Egyptians.

SHANE KERR: Hopefully we will have something not as useful but hopefully still entertaining: registering a domain like an ancient Egyptian. Up on the slide there is, as far as I can tell, the Egyptian symbol for the underworld, so basically: what the hell? And what this is about is that, as you do, I was looking at the list of scripts that were supported by various registries, and I noticed that Verisign seems to have quite a few; it almost looks like they have every Unicode script added as a possible option for registering a domain name. And I noticed one of them was Egyptian hieroglyphs and I thought, who wouldn't want a hieroglyphic name? What do you do? You go to a popular registrar, and since Verisign was the registry, I figured I would go to the Verisign registrar, but it turns out that while the registry supports hieroglyphics, they don't do it as a registrar; I googled and none of them worked. At this point I went to the DNS Working Group mailing list and said, hey, I am trying to do this thing, and Joao Damas said use Punycode directly. If you don't know the details of how DNS works with non‑ASCII domain names, there is a special magic formula called Punycode which converts the Unicode into xn‑‑ followed by some other characters, which has a bunch of other properties such as being small and these kinds of things.
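As a small illustration of that conversion (a sketch, not Shane's actual name: the hieroglyph code points below are arbitrary examples from the Egyptian Hieroglyphs Unicode block), Python's built‑in codec can produce the xn‑‑ form:

```python
# Convert a Unicode label to its ASCII-compatible "xn--" form with Python's
# built-in Punycode codec (RFC 3492). The label is an arbitrary pair of
# Egyptian hieroglyph code points, not the actual registered name.
label = "\U00013080\U000131CB"
ace = "xn--" + label.encode("punycode").decode("ascii")
print(ace)   # this is the form that goes into zone files and server configuration
```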

So what you do is, rather than using the Unicode version, which is non‑ASCII, you can use the Punycode version directly. I tried that and it still failed with some registrars, and someone said stop using crappy registrars and go to Gandi.net, and it worked, so Ra, Ra, Ra ‑‑ those are the symbols at the end there. And, if you don't know much about Egyptian mythology, Ra is the sun god and the fun god. Anyway, it turned out that for some reason Egyptian hieroglyphs were not on the list of languages; Coptic, which is old but not as old as the hieroglyphic script, was on there, and is also from Egypt. I picked some script like the first one at the top, Armenian, and it seemed to work, great. It didn't seem like the DNS servers that Gandi was providing were working for this, but that didn't bother me because I am a DNS guy and I want to run my own servers anyway. So that was the next step: having got it registered, I now have a registered domain name. I chose BIND 9 and Bundy as my authoritative servers, just because I had those configured and running, and I used the Punycode version for the configuration in the text files and everything. It was all quite standard and boring and normal. One problem that I had, a small hiccup we could say, is that Verisign doesn't allow Egyptian glue for some reason. I don't know why. Which is actually a bit weird, because it's just Punycode, why not just ‑‑ you could just treat it independently. So the name server records appeared, but the glue, which is the addresses, the A and AAAA records, didn't work. There is a workaround for this, which is to use out‑of‑bailiwick name servers, so instead of hieroglyphic.com I used some other domain for the name servers themselves. That worked. I didn't sign the name: for a fun project, at this point I was realising I was spending a lot of time on it, and I decided not to sign it, but at some point in the future I'll probably have a presentation two or three times as long, going into the gory details of how to get DNSSEC working, because it makes everything harder and take longer. So I have got the domain registered and the DNS working, but DNS by itself doesn't let you do anything, so let's set up a web server. I threw up Apache, which I had running for something else as well. The configuration is quite standard and boring; I again used the Punycode version for everything. It's not encrypted. Now, I strongly encourage you to set up TLS encrypted versions of all of your web stuff, don't use plain text, bad guys are looking at everything. Unfortunately, I am very cheap, so I am a huge believer in Let's Encrypt, but it turns out that they don't currently support Punycode names. I gave this same presentation at DNS‑OARC recently, a few weeks ago, and someone came to me and said, well, that was just a mistake, because someone advised the Let's Encrypt people not to allow that, but since then they have revised that and are going to allow it soon. So in the near future my Egyptian hieroglyphic web server will be secure and encrypted. It may be possible that I could have gone to some other certificate authority, but that would have cost money and I am very poor, so ‑‑ all right. Great. We have got a web server set up, we need to be able to access this. We are almost at the final step, don't worry. What do we need? We need to be able to see these characters, so we need fonts.
Luckily, I use Linux on my desktop and it's quite easy: the single line you see there installs the ancient scripts fonts, which has the side benefit of adding cuneiform and other scripts you use on a day‑to‑day basis as you are doing these things. Unfortunately, it doesn't look very good in my browser, because browsers show you the Punycode version of the name, which is the xn‑‑ thing, which is ugly and doesn't look like the pretty hieroglyphs. This is an anti‑phishing technique ‑‑ I put a URL there. The problem is that in Unicode there are lots of characters that look the same, and in traditional phishing you may have something like someone spelling Microsoft with a zero instead of an O to try to get you to go to their site and infect you. This is much worse with Unicode, because there are characters that are really identical down to the pixel, so in order to avoid this problem with phishing and things like that, browsers have a whitelist of scripts that you are allowed to display, and they are very careful about these things, and for some reason Egyptian hieroglyphs are not on these whitelists, I don't know why. But the browsers work; I tested Firefox and Chromium. I don't have Windows on my desktop so I didn't try that, but you can try it after this presentation. I also didn't bother to test other applications like e‑mail or Jabber; I assume there will be scary corner cases. Anyway, that is it. You can download the slides and click on the link, it does work. I encourage you to use Egyptian hieroglyphs for all of your stuff in the future.

(Applause)

ONDREJ SURY: This is Ondrej Sury, CZ.NIC. Shane, you didn't do your homework: between DNS‑OARC, where I saw the presentation first, and now, Let's Encrypt enabled IDN.

SHANE KERR: We didn't plan that, folks.

DAVE KNIGHT: If there are no other questions, thank you very much Shane. One last question.

JAAP AKKERHUIS: There is another free certification agency, C ‑‑ it's not in every browser, but you might give them a try as well just for the fun of it.

SHANE KERR: Sure, sure. There is at least one other free commercial certificate authority, but I didn't investigate it. Yeah.

DAVE KNIGHT: Thanks again, Shane. One more.

JOAO DAMAS: Did you try registering your name with the Unicode character U+1F512, which is the padlock, right in front of the name?

SHANE KERR: That is something I didn't try, but that seems really awesome. That is ‑‑ that may be an actual security concern. I hope there is more ‑‑ there is more.

WARREN KUMARI: Roy Arends made an e‑mail address that starts off with a padlock, so if people see e‑mail from Roy, look for that.

SHANE KERR: It will be more secure that way. Yes, yes.

DAVE KNIGHT: Thank you. Up next is Sebastian Castro who is in search of resolvers.

SEBASTIAN CASTRO: It seems this is designed for the Dutch; I am not sure if I meet the height requirements here. We are at the end of the day, we are all tired and thinking of beer or dinner or whatever, so I will make this fast and sweet. This is a short presentation called In Search of Resolvers; it's joint work from my research team at NZRS, the .nz domain name registry, between Jing and myself. So we are working on an algorithm called domain popularity ranking, which tries to derive domain popularity by mining DNS data, particularly the authoritative DNS data. And because of the noisy nature of DNS data, we need to dig in and filter and find out which traffic is more likely to be useful, in order to get something meaningful. So, we thought: are we in a position to discover from the data which source addresses represent resolvers? That is the main question.

You know, if you get your hands dirty with DNS data, you know it's very noisy: you have a very long tail of source addresses sending just a few queries, with the same strange things we saw in the presentations about Day in the Life and root server traffic ‑‑ you see all kinds of rubbish there, and a ccTLD is no different. So, because we collect all of our DNS traffic, and we were able to cooperate with New Zealand ISPs and get their resolver addresses ‑‑ the public resolver addresses, not the addresses the customers use but the addresses they use for querying our service ‑‑ plus the documentation from Google DNS and OpenDNS about their ranges, we built up what we think is a reasonable list of resolvers. And we also picked a few non‑resolvers, including the address from ICANN querying for this ICANN TLD monitoring .TLD, and a few other addresses only sending NS queries. So, we presented this at OARC, and Robert Edmonds suggested the second one is maybe not the right one, so we can improve the process.

Anyway, we did an exploratory analysis of the traffic sent by these, and the main conclusions are: there is a primary and a secondary address for every resolver from an ISP; you can actually pinpoint from the traffic patterns which ones are doing validation; and there are a few resolvers in front of mail servers. So, in order to test this idea of detecting resolvers, we decided to use a little bit of machine learning and create a supervised classifier. We provide to an algorithm an address, a label saying whether or not it is a resolver, and a few features of the traffic from that address: the fraction of different query types, the fraction of different flags and the different response codes that address got. We used one day of data for training, with 650,000 unique source addresses on that day. So, we trained the model, in this case a linear SVC, and you can see from here that the model got 100% accuracy ‑‑ success, we are done, end of presentation, let's go for beer. But we actually tried a random forest: 100%; and k‑nearest neighbours, which is a different algorithm: 100%. So if we go and pick some different data to test the model, we get a rather high fraction of resolvers: out of the 650 ‑‑ sorry, 651,000 addresses, most of them are identified as resolvers. And we get that on different dates. So there is something really weird here. Our preliminary analysis of this approach indicates that most of the addresses are classified as resolvers because the non‑resolver behaviour we are providing is very specific, so the model is fitting that behaviour and anything else is a resolver. So, we moved ‑‑ sorry, we moved to an unsupervised classifier, so we let the algorithm dig in and try to understand the structure of the data first. We used k‑means, with K being 6, as inspired by work presented by Verisign in Dublin, here at the DNS Working Group. And then we sort the source addresses based on that. In case you are interested in why 6: this is the cost curve of our k‑means, and you can see that the number 6 is where the curve is bending ‑‑ it's usually called the elbow. So, when we classify the addresses, we have six clusters and they have this query type distribution. You can look at the slides on your own, I am not going to read them out, but roughly 35% fall in the first cluster, and then 27% in the second cluster, and so on. If we go and check a different feature from the algorithm, the RCODE profile, this is the distribution, and the flag profile, this is the distribution. So, now we test how many of those known resolvers fit in the same cluster. We actually found some structure in this data, and we tested both weekday and weekend data and found that between 98 and 99% of the known addresses that we think are resolvers fit in the same cluster, which is cluster number 0. For the non‑resolvers, there are also 74 addresses that fall in that same cluster, and the rest, 72%, fall in cluster number one, so there might be some noise, but it seems to be promising.
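A hedged sketch of that unsupervised step (not the NZRS code, which they plan to publish; the feature columns and random data below are illustrative stand‑ins) could look like this with scikit‑learn:

```python
# Illustrative sketch of the clustering described in the talk: one row per source
# address, features such as fractions of query types, flags and RCODEs, clustered
# with k-means, with k chosen from the "elbow" of the cost curve.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.random((1000, 8))        # stand-in for ~650,000 real address profiles

inertias = {}
for k in range(2, 10):                  # cost curve; the talk's bend ("elbow") was at k=6
    inertias[k] = KMeans(n_clusters=k, random_state=0).fit(features).inertia_

model = KMeans(n_clusters=6, random_state=0).fit(features)
labels = model.labels_                  # cluster assignment per source address
```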

So then we add to this client persistence: we think that a resolver will have the tendency to send new queries constantly ‑‑ it won't go away ‑‑ so we start checking how many of these addresses send queries in 10‑day rolling windows; we start from the first of the month and keep moving, checking if they are present on all 10 days. So this is the client persistence: there are some shades of blue and a baseline of 200,000 addresses that are seen every single day across a whole month, with some variation, and you see a weekend/weekday pattern. So, what if we go and apply the same idea to the known resolver addresses? In the baseline here, something like 700 addresses of roughly 900 are seen every single day. And there is some noise in there, so that requires a little bit more inspection. So, this is ongoing work; if you are interested in seeing it, we are planning to make the code available. If you are interested in why we are doing this, I can explain it to you at length, but we are very interested in getting some useful results. This kind of analysis is not specific to New Zealand, so you can apply it to any ccTLD; we are using OpenSource code, we have a cluster and we are using Python for this, and if you are using ENTRADA from the awesome guys, you can definitely run this with very little change. That is one of the reasons we want to make sure you can use it for repeating the analysis. So that is it. I originally thought of making this presentation in Spanish, but at the end of the day that would be too much. So I will take any questions; if not, well, we are almost ‑‑ almost done.

DAVE KNIGHT: No questions for Sebastian then? Okay. Well thank you very much.
(Applause)

I think we have at least one item of any other business, Gerry if you want to come up.

GERRY: I am with DNS‑OARC, and I just want to announce that we released version 2 ‑‑ no, 1.2.0 of dnscap a few hours ago; its structure ‑‑ the structure is that described in the DNS‑in‑JSON draft.

SHANE KERR: So is this at all related to the work that Terry Manderson at ICANN was talking about?

GERRY: It's not. I am guessing they are going to release drafts on a completely different structure within CBOR, and that explains why they are getting that compression rate. So ‑‑ I used the DNS‑in‑JSON draft just to get some work done, and you can easily convert it to CBOR, and it complies with the draft, and even with all the labels, with all textual labels and everything, I still see a bit of compression.
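As a rough illustration of that JSON‑versus‑CBOR size effect (the tool being announced is C code; this sketch is separate, uses the third‑party cbor2 package, and the record below is made up):

```python
# Encode the same DNS-record-like structure as JSON and as CBOR and compare sizes.
import json

import cbor2   # third-party CBOR library

record = {"name": "example.com.", "class": "IN", "type": "A", "ttl": 300, "rdata": "192.0.2.1"}
as_json = json.dumps(record).encode("utf-8")
as_cbor = cbor2.dumps(record)
print(len(as_json), len(as_cbor))   # CBOR is typically a bit smaller, as noted in the talk
```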

SHANE KERR: So the DNS‑in‑JSON draft had a lot of flexibility. Have you documented what you have chosen?

GERRY: If you go online and look at the CBOR stuff, there is an explanation of what doesn't really work yet. This code uses LDNS to parse the packets, so they need to be perfect, otherwise it is going to complain, and the draft actually gives you the freedom to do whatever you like, even invalid packets and stuff, but that is the plan.

SHANE KERR: Okay. Cool.

ROY ARENDS: ICANN. I see you mentioned the DNS‑in‑JSON draft; have you talked to Paul Hoffman about this?

GERRY: No, but I am going to. I found a lot of stuff within the draft that I want to change because it's unclear. A bit to do.

ROY ARENDS: Just to clarify, Paul Hoffman happens to be my colleague, that is why I am asking. He is also the author, or co‑author, of the CBOR standard, so I think he is doubly interested in this. And lastly, as was already asked, if there is any relation to the CBOR work from Terry Manderson, who happens to be a colleague: are you going to talk to them as well and get something out together?

GERRY: Since I am using this DNS‑in‑JSON draft to get some experimental work done, whenever they release the draft of the structure of their DNS packets within CBOR, I can implement that.

SPEAKER: Thank you for this.

DAVE KNIGHT: Thanks.
(Applause)

It looks like we are finishing about 15 minutes ahead of schedule. Go drink now, fantastic.

I would like to thank the NCC staff who did the scribing and the stenographer and everyone who spoke today, thank you very much. And that concludes the DNS Working Group for RIPE 73 and I look forward to seeing you all in Budapest.
(Applause)