
tiglabs / Containerdns

License: MIT
A fast DNS for Kubernetes clusters


Projects that are alternatives of or similar to Containerdns

Fastdns
fastDNS is an authoritative only, high performance, simple and open source name server based on DPDK and NSD server
Stars: ✭ 12 (-96.26%)
Mutual labels:  dns, dpdk
Gokv
Simple key-value store abstraction and implementations for Go (Redis, Consul, etcd, bbolt, BadgerDB, LevelDB, Memcached, DynamoDB, S3, PostgreSQL, MongoDB, CockroachDB and many more)
Stars: ✭ 314 (-2.18%)
Mutual labels:  etcd
Nps
A lightweight, high-performance, and powerful intranet penetration proxy server. It supports forwarding of almost any traffic (TCP, UDP, SOCKS5, HTTP, etc.) and can be used for accessing intranet websites, debugging local payment APIs, SSH access, remote desktop, intranet DNS resolution, intranet SOCKS5 proxying, and more, with a powerful web management console.
Stars: ✭ 19,537 (+5986.29%)
Mutual labels:  dns
Sonarsearch
A MongoDB importer and API for Project Sonar's DNS datasets
Stars: ✭ 297 (-7.48%)
Mutual labels:  dns
Nestcloud
A Node.js microservice solution written in TypeScript with the NestJS framework.
Stars: ✭ 290 (-9.66%)
Mutual labels:  etcd
Stolon
PostgreSQL cloud native High Availability and more.
Stars: ✭ 3,481 (+984.42%)
Mutual labels:  etcd
Lagopus
Yet another SDN / OpenFlow software switch
Stars: ✭ 281 (-12.46%)
Mutual labels:  dpdk
Hackertarget
🎯 HackerTarget ToolKit - Tools And Network Intelligence To Help Organizations With Attack Surface Discovery 🎯
Stars: ✭ 320 (-0.31%)
Mutual labels:  dns
Golb
🐙 Yet another load balancer
Stars: ✭ 315 (-1.87%)
Mutual labels:  etcd
Postgresql cluster
PostgreSQL High-Availability Cluster (based on "Patroni" and "DCS(etcd)"). Automating deployment with Ansible.
Stars: ✭ 294 (-8.41%)
Mutual labels:  etcd
Jupiter
Jupiter is a high-performance 4-layer network load balance service based on DPDK.
Stars: ✭ 292 (-9.03%)
Mutual labels:  dpdk
Python Etcd3
Python client for the etcd API v3
Stars: ✭ 290 (-9.66%)
Mutual labels:  etcd
Dt
DNS tool - display information about your domain
Stars: ✭ 313 (-2.49%)
Mutual labels:  dns
Toriptables2
Tor Iptables script is an anonymizer that sets up iptables and tor to route all services and traffic including DNS through the Tor network.
Stars: ✭ 287 (-10.59%)
Mutual labels:  dns
Pulsar
Network footprint scanner platform. Discover domains and run your custom checks periodically.
Stars: ✭ 314 (-2.18%)
Mutual labels:  dns
Zonemaster
The Zonemaster Project
Stars: ✭ 282 (-12.15%)
Mutual labels:  dns
Vinyldns
DNS Governance for streamlining DNS operations and enabling safe and secure DNS self-service
Stars: ✭ 293 (-8.72%)
Mutual labels:  dns
Waterdrop
💧Waterdrop is a high performance micro service framework. Waterdrop comes from (The Three Body Problem).
Stars: ✭ 305 (-4.98%)
Mutual labels:  etcd
Getaltname
Extract subdomains from SSL certificates in HTTPS sites.
Stars: ✭ 320 (-0.31%)
Mutual labels:  dns
Rancher Letsencrypt
🐮 Rancher service that obtains and manages free SSL certificates from the Let's Encrypt CA
Stars: ✭ 318 (-0.93%)
Mutual labels:  dns

ContainerDNS

Introduction

ContainerDNS works as an internal DNS server for a Kubernetes cluster.

Components

  • containerdns: the main service that answers DNS queries.
  • containerdns-kubeapi: monitors changes to Kubernetes services and records them in etcd. It provides the source data for containerdns and also exposes a RESTful API for users to maintain domain records.
  • containerdns-apicmd: a shell command for querying and updating domain records, built on top of containerdns-kubeapi.
  • etcd: stores the DNS data; the etcd v3 API is used.

It is based on the DNS library https://github.com/miekg/dns.
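
As a quick way to exercise the server, the same miekg/dns library can be used from a client to query ContainerDNS directly. The sketch below is illustrative only; the server address (127.0.0.1:53) and the test domain (cctv2.containerdns.local) are taken from the examples later in this README.

    package main

    import (
        "fmt"
        "log"

        "github.com/miekg/dns"
    )

    func main() {
        // Query the local ContainerDNS instance for the A records of a test domain.
        c := new(dns.Client)
        m := new(dns.Msg)
        m.SetQuestion(dns.Fqdn("cctv2.containerdns.local"), dns.TypeA)

        resp, _, err := c.Exchange(m, "127.0.0.1:53")
        if err != nil {
            log.Fatalf("query failed: %v", err)
        }
        for _, rr := range resp.Answer {
            if a, ok := rr.(*dns.A); ok {
                fmt.Println(a.A) // one of the IPs registered for the domain
            }
        }
    }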

Features:

  • fully cached DNS records
  • backend IPs are automatically removed when they become unavailable
  • support for multiple domain suffixes
  • better performance and less jitter
  • load balancing: when a domain has multiple IPs, ContainerDNS chooses an active one at random
  • session persistence: when a domain is queried repeatedly from the same source, the same service IP is returned (see the sketch below)
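
The following is a minimal sketch of how source-based session persistence can be implemented: hash the client address over the list of healthy backend IPs so that the same source always receives the same answer. It illustrates the idea only and is not ContainerDNS's actual selection code; pickIP and the sample addresses are hypothetical.

    package main

    import (
        "fmt"
        "hash/fnv"
    )

    // pickIP deterministically maps a client address onto one of the healthy IPs,
    // so repeated queries from the same source get the same service IP.
    func pickIP(clientAddr string, healthy []string) string {
        if len(healthy) == 0 {
            return ""
        }
        h := fnv.New32a()
        h.Write([]byte(clientAddr))
        return healthy[h.Sum32()%uint32(len(healthy))]
    }

    func main() {
        healthy := []string{"192.168.10.1", "192.168.10.2", "192.168.10.3"}
        fmt.Println(pickIP("10.0.0.7", healthy)) // same client -> same IP every time
        fmt.Println(pickIP("10.0.0.7", healthy))
    }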

Design Architecture

[Architecture diagram]

Setup / Install

Get and compile ContainerDNS:

    mkdir -p $GOPATH/src/github.com/tiglabs
    cd $GOPATH/src/github.com/tiglabs
    git clone https://github.com/tiglabs/containerdns
    cd $GOPATH/src/github.com/tiglabs/containerdns
    make

Configuration

containerdns

  • config-file: path to the configuration file; default "/etc/containerdns/containerdns.conf".

A sample config file:

    [Dns]
    dns-domain = containerdns.local.
    dns-addr   = 0.0.0.0:53
    nameservers = ""
    subDomainServers = ""
    cacheSize   = 100000
    ip-monitor-path = /containerdns/monitor/status/
    
    [Log]
    log-dir    = /export/log/containerdns
    log-level  = 2
    log-to-stdio = true
    
    [Etcd]
    etcd-servers = http://127.0.0.1:2379
    etcd-certfile = ""
    etcd-keyfile = ""
    etcd-cafile = ""
    
    [Fun]
    random-one = false
    hone-one  = false
    
    [Stats]
    
    statsServer = 127.0.0.1:9600
    statsServerAuthToken = @containerdns.com
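
For illustration, an INI-style file like the one above can be read with an off-the-shelf parser such as gopkg.in/ini.v1. This is only a sketch of consuming the settings shown above; the parser ContainerDNS actually uses may differ.

    package main

    import (
        "fmt"
        "log"

        "gopkg.in/ini.v1"
    )

    func main() {
        // Default path from the bullet above; the parser choice is an assumption.
        cfg, err := ini.Load("/etc/containerdns/containerdns.conf")
        if err != nil {
            log.Fatalf("load config: %v", err)
        }
        dnsAddr := cfg.Section("Dns").Key("dns-addr").MustString("0.0.0.0:53")
        cacheSize := cfg.Section("Dns").Key("cacheSize").MustInt(100000)
        etcdServers := cfg.Section("Etcd").Key("etcd-servers").String()
        fmt.Println(dnsAddr, cacheSize, etcdServers)
    }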

containerdns-kubeapi

  • config-file: path to the configuration file; default "/etc/containerdns/containerdns.conf".

A sample config file:

    [General]
    domain=containerdns.local
    host = 192.168.169.41
    etcd-server = http://127.0.0.1:2379
    ip-monitor-path = /containerdns/monitor/status
    log-dir    = /export/log/containerdns
    log-level  = 2
    log-to-stdio = false
    
    [Kube2DNS]
    kube-enable = NO
    
    [DNSApi]
    api-enable = YES
    api-address = 127.0.0.1:9003
    containerdns-auth  = 123456789
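
To give a feel for what containerdns-kubeapi does behind the RESTful API, the sketch below writes a domain record into etcd with the v3 client. The key layout and JSON value here are hypothetical, shown only to illustrate the idea; the real schema is defined by the project itself.

    package main

    import (
        "context"
        "encoding/json"
        "log"
        "time"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    func main() {
        // etcd endpoint taken from the config above.
        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"http://127.0.0.1:2379"},
            DialTimeout: 5 * time.Second,
        })
        if err != nil {
            log.Fatalf("connect etcd: %v", err)
        }
        defer cli.Close()

        // Hypothetical key layout (reversed domain under a containerdns prefix)
        // and value format; the real schema may differ.
        key := "/containerdns/local/containerdns/cctv2/x1"
        val, _ := json.Marshal(map[string]string{"host": "192.168.10.1"})

        ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
        defer cancel()
        if _, err := cli.Put(ctx, key, string(val)); err != nil {
            log.Fatalf("put record: %v", err)
        }
    }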
    

containerdns-scanner

  • config-file: path to the configuration file; default "/etc/containerdns/containerdns-scanner.conf".

A sample config file:

    [General]
    core = 0
    enable-check = true
    hostname = hostname1
    log-dir = /export/log/containerdns
    log-level = 100
    heartbeat-interval = 30
    [Check]
    check-timeout = 2
    check-interval = 10
    scann-ports = 22, 80, 8080
    enable-icmp = true
    ping-timeout = 1000
    ping-count = 2
    [Etcd]
    etcd-machine = http://127.0.0.1:2379
    tls-key =
    tls-pem =
    ca-cert =
    status-path = /containerdns/monitor/status
    report-path = /containerdns/monitor/report
    heart-path = /containerdns/monitor/heart
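
A simplified sketch of the kind of check containerdns-scanner performs: probe the configured scann-ports on a backend IP within check-timeout and report whether the service answers. This is illustrative only, not the scanner's actual code.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // checkPort reports whether ip:port accepts a TCP connection within timeout,
    // roughly what a scann-ports probe with check-timeout = 2 would do.
    func checkPort(ip string, port int, timeout time.Duration) bool {
        conn, err := net.DialTimeout("tcp", fmt.Sprintf("%s:%d", ip, port), timeout)
        if err != nil {
            return false
        }
        conn.Close()
        return true
    }

    func main() {
        ports := []int{22, 80, 8080} // scann-ports from the config above
        for _, p := range ports {
            fmt.Printf("192.168.10.3:%d up=%v\n", p, checkPort("192.168.10.3", p, 2*time.Second))
        }
    }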
    

containerdns-schedule

  • config-file: path to the configuration file; default "/etc/containerdns/containerdns-schedule.conf".

A sample config file:

    [General]
    schedule-interval = 60
    agent-downtime = 60
    log-dir = /export/log/containerdns
    log-level = 100
    hostname = hostname1
    force-lock-time = 1800
    
    [Etcd]
    etcd-machine = http://127.0.0.1:2379
    status-path = /containerdns/monitor/status
    report-path = /containerdns/monitor/report
    heart-path = /containerdns/monitor/heart
    lock-path = /containerdns/monitor/lock
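
The lock-path suggests the schedulers coordinate through a distributed lock in etcd. The sketch below shows one way to take such a lock with the etcd v3 concurrency package; it is an assumption-based illustration, not the project's actual scheduling logic.

    package main

    import (
        "context"
        "log"
        "time"

        clientv3 "go.etcd.io/etcd/client/v3"
        "go.etcd.io/etcd/client/v3/concurrency"
    )

    func main() {
        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"http://127.0.0.1:2379"},
            DialTimeout: 5 * time.Second,
        })
        if err != nil {
            log.Fatalf("connect etcd: %v", err)
        }
        defer cli.Close()

        // One session per scheduler instance; the mutex prefix mirrors lock-path above.
        sess, err := concurrency.NewSession(cli)
        if err != nil {
            log.Fatalf("session: %v", err)
        }
        defer sess.Close()

        mu := concurrency.NewMutex(sess, "/containerdns/monitor/lock")
        ctx := context.Background()
        if err := mu.Lock(ctx); err != nil {
            log.Fatalf("lock: %v", err)
        }
        // ... only the lock holder would rebalance scanner work here ...
        if err := mu.Unlock(ctx); err != nil {
            log.Fatalf("unlock: %v", err)
        }
    }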

Testing

containerdns-kubeapi

We use curl to exercise the user API.

typeA

    % curl -H "Content-Type:application/json;charset=UTF-8"  -X POST -d '{"type":"A","ips":["192.168.10.1","192.168.10.2","192.168.10.3"]}'  http://127.0.0.1:9001/containerdns/api/cctv2?token="123456789"      
    OK

typeCname

    % curl -H "Content-Type:application/json;charset=UTF-8"   -X POST -d '{"type":"cname","alias":"tv1"}' http://127.0.0.1:9001/containerdns/api/cctv2.containerdns.local?token="123456789"
    OK
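
The same requests can be issued programmatically; the sketch below reproduces the type A curl call above with Go's standard net/http client (URL, token, and payload are copied from that example).

    package main

    import (
        "fmt"
        "io"
        "log"
        "net/http"
        "strings"
    )

    func main() {
        // Same endpoint, token, and payload as the curl example above.
        url := "http://127.0.0.1:9001/containerdns/api/cctv2?token=123456789"
        body := `{"type":"A","ips":["192.168.10.1","192.168.10.2","192.168.10.3"]}`

        req, err := http.NewRequest(http.MethodPost, url, strings.NewReader(body))
        if err != nil {
            log.Fatal(err)
        }
        req.Header.Set("Content-Type", "application/json;charset=UTF-8")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        out, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, string(out)) // expect "OK" on success
    }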

containerdns

typeA

    % nslookup qiyf-nginx-5.default.svc.containerdns.local 127.0.0.1
    Server:         127.0.0.1
    Address:        127.0.0.1#53

    Name:   qiyf-nginx-5.default.svc.containerdns.local
    Address: 192.168.19.113

If the domain has more than one IP, containerdns returns a random one.

    % nslookup cctv2.containerdns.local 127.0.0.1
    Server:         127.0.0.1
    Address:        127.0.0.1#53

    Name:   cctv2.containerdns.local
    Address: 192.168.10.3

typeCname

    % nslookup tv1.containerdns.local 127.0.0.1
    Server:         127.0.0.1
    Address:        127.0.0.1#53

    tv1.containerdns.local    canonical name = cctv2.containerdns.local.
    Name:   cctv2.containerdns.local
    Address: 192.168.10.3

monitor

If a domain has multiple IPs, containerdns-scanner monitors the IPs behind the domain. When a service becomes unreachable, the scanner updates that IP's status in etcd; containerdns watches these statuses and, when an IP goes down, answers with a healthy one instead.

     cctv2.containerdns.local    ips[192.168.10.1,192.168.10.2,192.168.10.3]
     
    % nslookup cctv2.containerdns.local 127.0.0.1
    Server:         127.0.0.1
    Address:        127.0.0.1#53

    Name:   cctv2.containerdns.local
    Address: 192.168.10.3
    
    % etcdctl get /containerdns/monitor/status/192.168.10.3
    {"status":"DOWN"}

    % nslookup cctv2.containerdns.local 127.0.0.1
    Server:         127.0.0.1
    Address:        127.0.0.1#53

    Name:   cctv2.containerdns.local
    Address: 192.168.10.1
    
Querying cctv2.containerdns.local from containerdns first returns 192.168.10.3. After the service behind that IP is shut down and marked DOWN, the same query returns 192.168.10.1.
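
Conceptually, containerdns can learn about such status changes by watching the ip-monitor-path prefix in etcd. The sketch below shows such a watch with the etcd v3 client; it is illustrative only and not the project's actual monitoring code.

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    func main() {
        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"http://127.0.0.1:2379"},
            DialTimeout: 5 * time.Second,
        })
        if err != nil {
            log.Fatalf("connect etcd: %v", err)
        }
        defer cli.Close()

        // Watch the scanner's status keys (ip-monitor-path in the configs above).
        watch := cli.Watch(context.Background(), "/containerdns/monitor/status/", clientv3.WithPrefix())
        for resp := range watch {
            for _, ev := range resp.Events {
                fmt.Printf("%s %s -> %s\n", ev.Type, ev.Kv.Key, ev.Kv.Value)
            }
        }
    }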

Performance Test

Testing Conditions

Physical hardware

    NIC: gigabit ethernet card
    CPUs: 32
    RAM: 32G
    OS: CentOS-7.2

Testing Software

    queryperf

Test result

[Performance test results]

DPDK-based Optimization

Leveraging DPDK, ContainerDNS throughput can reach nearly 10 million QPS. The DPDK-based implementation is available at https://github.com/tiglabs/containerdns/kdns, and the code is also production-ready.

Reference

If you use ContainerDNS in a paper or technical report, please cite: Haifeng Liu, Shugang Chen, Yongcheng Bao, Wanli Yang, Yuan Chen, Wei Ding, and Huasong Shan. "A High Performance, Scalable DNS Service for Very Large Scale Container Cloud Platforms." In Proceedings of the 19th International Middleware Conference (Industry Track), Rennes, France, December 10-14, 2018.
