Choosing Digital Ocean over AWS

Sat, 08 Apr 2017 07:10:00 GMT

 

In this episode, I explain the decision to choose DigitalOcean over Amazon AWS.

 

Spotify | iTunes | Stitcher | Google Play | Player.fm | MyTuner Radio

 

Digital Ocean Free Credit

 

Transcript:

Hello and welcome to the podcast. I’m your host Dave Albert. In this show, I talk about technology, building a company as a CTO and co-founder, and have guests to discuss their roles in technology and entrepreneurship.

Hello, friends, and welcome to episode 2 of my podcast. I guess I’m going to have to come up with a name that’s interesting or catchy, or something better than ‘my podcast.’ Anyway, in this episode, I’m going to talk to you about our decision to stick with DigitalOcean versus switching to Amazon AWS.

There are two major reasons and, I guess, one minor reason. The first would be lock-in: avoiding being locked into AWS's systems, tools, and proprietary products that you can't necessarily get somewhere else easily, and being at the mercy of a company like that. The second would be price. Their pricing model is complicated, if you ask me. I know they have tools you can use to try to estimate your pricing, but it just seems like too big a question mark to me, especially for a small startup. We need to control our costs at this point. That's just the way it is: if we run out of money, we no longer have a company, and I can't trust Amazon's pricing model. With DigitalOcean, I know exactly how much I'm going to be paying each month unless there's some sort of humongous skyrocket in traffic, which at this point is not a large concern of ours.

Now, why was this decision so hard? Well, obviously, Amazon has a quality service. The tools they have are nice, and you get quite a bit without having to build it yourself; the VPC was the biggest one. That was one of the reasons I had to wrestle with this question for weeks. The VPC is the Virtual Private Cloud: it gives you a bastion into the network, and load balancers or application balancers, or whatever you'd like to call them, that proxy into your private network, as opposed to having each host just publicly available on the internet. That's typically how your first setup in DigitalOcean would be: you spin up a host and it's on the internet. If it's a database host, it's on the internet. If it's a web host, an application host, a monitoring host, it's on the internet. There's not really a built-in DigitalOcean solution for private networking. Now, there are private networking adapters on each of the hosts if you select that option when spinning up your 'droplet,' as DigitalOcean calls it, but that network is private to the data center, so everyone in the New York 1 data center or in the London 1 data center is on that same private networking space. All someone would have to know is where you're hosted, and they could scan that network. Obviously, they'd have to know your IP; otherwise they'd be scanning all of the data center's nodes, and that's bound to bring up some security alerts. I'm not intimately aware of how DigitalOcean has their monitoring set up, but I can't imagine someone with as good a product as theirs wouldn't be monitoring for something like that. But for some sort of targeted attack, where you may accidentally expose your IPs through config files on GitHub, or through DNS if you're trying to use some sort of private DNS scheme, the chances of that happening are greater than zero.

The way I was able to overcome the lack of a VPC or a VPN network was by using a tool called Tinc, the Tinc VPN. It's a mesh VPN, and basically what that means is any node can connect to any other node; the VPN software creates a mesh network. So if you've got four hosts, A, B, C, and D, then A talks to B, C, and D, B talks to A, C, and D, and so on - all of the hosts communicate with all of the other hosts - and if one of the hosts goes down, the three remaining hosts don't lose their VPN connection. That's as opposed to having host A as the VPN server with B, C, and D connecting to A: if A goes down, you lose connectivity to all of the hosts. With a mesh network, it's more robust and you have redundant links. Of course, if there's a critical service and you didn't make a redundant solution for it, then you've obviously lost the ability to access that service. Anyway, I use the Tinc VPN over the private network, so all of the hosts in the data center that I'm using for one portion of the services communicate over that private VPN - sorry, over that private network. I've used iptables to block all traffic on the public addresses except on my load balancer, and then I VPN into the network so that I can access my hosts from home or the office, or wherever it is that I want to manage my systems from. I connect to the VPN, and then my machine is part of that private network, that virtual private network.
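
To make that mesh argument concrete, here is a toy sketch - not Tinc itself, just a model of the topology - showing that a full mesh of A, B, C, and D stays connected when one host drops, while a hub-and-spoke layout loses everything when the hub goes down:

    # Toy model of why a full-mesh VPN (like Tinc) is more robust than a
    # hub-and-spoke VPN. This is not Tinc itself, just a connectivity sketch.
    from itertools import combinations

    def reachable(links, start):
        """Return the set of hosts reachable from `start` over the given links."""
        seen, stack = {start}, [start]
        while stack:
            node = stack.pop()
            for a, b in links:
                nxt = b if a == node else a if b == node else None
                if nxt is not None and nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    hosts = ["A", "B", "C", "D"]

    # Full mesh: every host peers with every other host.
    mesh = set(combinations(hosts, 2))
    # Hub-and-spoke: B, C, and D all connect only through A.
    hub = {("A", h) for h in hosts[1:]}

    # Take host B down: the mesh keeps A, C, and D talking to each other.
    mesh_without_b = {link for link in mesh if "B" not in link}
    print(reachable(mesh_without_b, "A"))   # {'A', 'C', 'D'}

    # Take the hub A down instead: the spokes are completely cut off.
    hub_without_a = {link for link in hub if "A" not in link}
    print(reachable(hub_without_a, "B"))    # {'B'}
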
Being part of the Tinc mesh VPN allows me to have only one or two hosts actually available to the internet. That cut down the number of scans I receive from the greater internet - I was getting several scans per day from China on each of the hosts I had publicly available - and now it's only one host I have to be concerned with, so I can spend more time and effort ensuring it's as secure as possible, and since it runs fewer services, there are fewer potential vulnerabilities to be exploited. It also allows me to be a bit less worried about how I secure, say, the connection to the database. Whereas in the past I may have used an SSH tunnel, I can now be more confident that, since it's going through the private Tinc VPN, it isn't going to be bombarded with attacks, so I can be a little less strict about how many security practices I need to follow. Obviously, I try to follow as many as possible without going overboard, but it's really nice not to have dozens of SSH tunnels to worry about coming back up after a reboot. My monitoring doesn't necessarily have a view into each of the SSH tunnels, but it does have a view into whether hosts are reachable and whether the expected services are available to the monitoring system. So it solves the needs I had and gives me that VPC security blanket, if you will.

I also know how much it's going to cost, and from my calculations, AWS would have been approximately twice as much. For a small application host, I know for a fact that on DigitalOcean I can run one, depending on what it's going to do, for $5 to begin with - $6 if I want a weekly backup automatically created. On Amazon, that's going to be, I believe, at least twice as expensive, and that's just for the smallest host. You can run a WordPress instance on one of the five-dollar hosts; if you put Cloudflare in front of it and it doesn't receive massive amounts of traffic, that's good enough to start with. It's also easy to scale that vertically - you may need a reboot, but you can scale up your hosts. You can also scale up just memory and CPU, which speeds up the host without growing the disk, and that allows you to roll it back: if it was a peak time and you needed to double the size of the server, you could do that and then roll it back so you're not incurring the double expense forever.
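
As a rough sketch of that scale-up-then-roll-back idea, assuming DigitalOcean's v2 droplet-actions endpoint and 2017-era size slugs (the token, droplet ID, and slugs below are placeholders), a resize request looks something like this:

    # Rough sketch: resize a droplet up for a traffic peak, then back down,
    # via DigitalOcean's v2 API. The token, droplet ID, and size slugs are
    # placeholders; check the current API docs for valid slugs.
    import os
    import requests

    API = "https://api.digitalocean.com/v2"
    HEADERS = {
        "Authorization": f"Bearer {os.environ['DO_TOKEN']}",
        "Content-Type": "application/json",
    }
    DROPLET_ID = 12345678  # hypothetical droplet ID

    def resize(droplet_id, size_slug):
        """Request a CPU/RAM-only resize (disk untouched, so it can be reverted)."""
        # Note: the droplet generally has to be powered off before a resize is accepted.
        resp = requests.post(
            f"{API}/droplets/{droplet_id}/actions",
            headers=HEADERS,
            json={"type": "resize", "size": size_slug, "disk": False},
        )
        resp.raise_for_status()
        return resp.json()["action"]["status"]

    print(resize(DROPLET_ID, "2gb"))  # scale up before the peak (2017-era slug)
    print(resize(DROPLET_ID, "1gb"))  # roll back afterwards

Keeping "disk": False is what makes the move reversible: once the disk has been grown, you can no longer shrink back to the smaller plan.
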

Another thing that I really like about DigitalOcean - it's a preference thing and not necessarily something that affects everyone - is that the price quoted, the $5 or $10 or $20, includes the root partition. For example - I'm going to go look so I can give you the exact numbers - when you go to create a droplet, a $5 per month host has 512 MB of RAM, one CPU, a 20 GB SSD, and it looks like a terabyte of transfer. The 20 GB disk is the root partition, so if you don't have much in the way of storage needs, you don't need an additional drive. I'm not as familiar with AWS terminology, but I know that the drive quoted for an EC2 instance is an ephemeral drive, which means you have no persistent storage without adding an additional storage volume.

Also, DigitalOcean are in the process of rolling out block storage, so for that $5, even if it's a low-intensity tool that just needs more storage, you could add block storage for a great deal less than it would cost to double the size of the machine. Currently, block storage is only available in their Frankfurt 1, New York 1, San Francisco 2, and Singapore data centers, so it's not available in London, Bangalore, Toronto, Amsterdam, San Francisco 1, or New York 2 or 3. That can affect you if you'd like block storage and most of your users are in the UK and Ireland, so you have your hosts running in London where there's no block storage yet. But they seem to be rolling it out at a reasonable pace, so I'm hopeful it should be everywhere within the next few months. It's been, I want to say, eight or nine months since they announced it, but of course that means major upgrades to each of their data centers, so you have to give them time. They are trying; they're giving the consumers what they're asking for.

They also have an API, so you can spin up boxes, do backups, add additional storage, and upgrade hosts, all from the API. There's a command-line tool as well, similar to AWS's. It just seems that DigitalOcean is simpler. Of course, that means they have fewer options, so like the block storage, when you want something, you may not get it as soon as you want and you may need to come up with alternatives. Like with the Tinc VPN: I'm sure there are other customers besides myself who are quite concerned about the security of having all of their hosts online. It'll come - I'm sure there are a lot of people who wish it was out already, me being one of them - but now that I have the Tinc VPN, I'm happy enough. If they roll it out, I'll obviously look into it and see if their solution makes things easier. Mine's relatively simple: I have the majority of it set up with Ansible playbooks, so the hardest part is keeping my naming conventions straight, because Tinc doesn't allow for dotted naming conventions, so I need to remember when it should be a dot and when it should be an underscore. I haven't automated the generation of keys, so I just have to manually create those, name them properly, stick them in the correct directory, and then run the playbook against the new host, and it's part of the VPN.

With regards to the hosting service, I've never had an issue. They do have regular maintenance, but you get plenty of notice, and even when they say there may be disruption for up to an hour or two, I've never noticed it.
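
To make that API point concrete, spinning up a box is a single authenticated request. Here is a minimal sketch against the v2 API; the name, region, size, and image slugs are illustrative 2017-era values, so check the current API docs for what's valid today:

    # Minimal sketch of creating a droplet through the DigitalOcean v2 API.
    # The name, region, size, and image slugs below are illustrative placeholders.
    import os
    import requests

    API = "https://api.digitalocean.com/v2"
    HEADERS = {
        "Authorization": f"Bearer {os.environ['DO_TOKEN']}",
        "Content-Type": "application/json",
    }

    payload = {
        "name": "web-01",             # hypothetical host name
        "region": "lon1",             # London 1
        "size": "512mb",              # the $5/month droplet discussed above
        "image": "ubuntu-16-04-x64",  # a 2017-era base image slug
        "backups": True,              # the weekly automatic backup option
        "private_networking": True,   # shared-datacenter private interface
        "ssh_keys": [],               # fill in with your key IDs or fingerprints
    }

    resp = requests.post(f"{API}/droplets", headers=HEADERS, json=payload)
    resp.raise_for_status()
    droplet = resp.json()["droplet"]
    print(droplet["id"], droplet["status"])  # e.g. 12345678 'new'
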
DigitalOcean maintenance has never triggered my Nagios alerts, either - obviously, those checks run about five minutes apart, as opposed to a full time-series system like Prometheus or the other time-series-based monitoring solutions - so I don't know for sure whether users ever saw a disruption, but at this point the volume of users we have on our sites and applications isn't so high that it would be a serious issue. As we grow, hopefully that becomes something I really have to be concerned about, but by then I will have built in redundancy across data centers so that the core critical application isn't in one data center only; it's in multiple data centers. That's just good practice.

The hosts are very responsive, and I love that with a single click in the configuration of your droplet, it backs up once a week. I wish you could tune that - I wish it was daily, or at least that you could make it daily. Obviously, because of the API, I could script that. It just hasn't become an issue I care that much about yet, because the more snapshots you have - those would be snapshots and not backups - the more expense you have, of course, so I'd need some way to manage that. Again, it's just part of the scripting; I could handle it, but it hasn't become a core priority while we're building out our websites and the applications we're trying to get to MVP state, servicing clients, and all of the other duties you have as the lead technical person: making sure all of our users have access to email, Facebook Workplace, Trello, and any of the other tools they need, as well as technical support and finding time to think about the right way to build things.

That's sometimes one of the hardest bits in technology: "We just have to build it, we have to build it now. We don't have time to think about it." That's terrible, and that's why I wanted to leave the corporate world where I worked for someone else and get to a place where I could work for myself and Julie, my co-founder and CEO, where we could do the right things and do them right, and not just do what someone else decided, either in the office around the corner or hundreds of miles away. Of course, that means we are those people now, but we only hire people we respect and only work with people we respect, so we respect people's opinions and try our best not to do things dictatorially but with input from the people around us. We're not perfect; we're people. The more input you have, when it's useful, the better. Obviously, if you just listen to everyone all the time, you can never make a decision, but that's not what I mean. I mean it's not just secret decisions made in private and handed out as orders: "Here's what we're doing because I said so." That's not what we want, and that's why we started our company, along with other reasons. I'm not even sure how I got on that rant; sorry about that. This was about AWS and DigitalOcean.
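
Coming back to the snapshot-scripting point above, a rough sketch of what that might look like against the v2 API is below: take a snapshot on your own schedule (from cron, say) and prune all but the newest few so the snapshot bill stays predictable. The droplet ID and retention count are placeholders.

    # Rough sketch of scripted snapshots via the DigitalOcean v2 API:
    # snapshot a droplet on your own schedule and keep only the newest few.
    import os
    import requests

    API = "https://api.digitalocean.com/v2"
    HEADERS = {"Authorization": f"Bearer {os.environ['DO_TOKEN']}"}
    DROPLET_ID = 12345678  # hypothetical droplet ID
    KEEP = 3               # arbitrary retention count

    # 1. Ask for a new snapshot of the droplet.
    requests.post(
        f"{API}/droplets/{DROPLET_ID}/actions",
        headers=HEADERS,
        json={"type": "snapshot", "name": "scheduled-snapshot"},
    ).raise_for_status()

    # 2. List the droplet's existing snapshots, oldest first.
    resp = requests.get(f"{API}/droplets/{DROPLET_ID}/snapshots", headers=HEADERS)
    resp.raise_for_status()
    snapshots = sorted(resp.json()["snapshots"], key=lambda s: s["created_at"])

    # 3. Delete everything except the most recent KEEP snapshots.
    for snap in snapshots[:-KEEP]:
        requests.delete(f"{API}/snapshots/{snap['id']}", headers=HEADERS).raise_for_status()
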

Yeah, so that's it. I really like DigitalOcean; I'm very happy with them. I'll put a link in the description, or you can find it on my blog, dave-albert.com. It will be an affiliate link, so I get a little extra credit if you sign up, and you'll get free credit as well, so it helps us both out. I hope this was useful, and if you have any questions, hit me up on Twitter at dave_albert or by email. I'll put those links in the description as well. I think I'll set up podcast@dave-albert.com - I don't have it set up yet, but by the time this episode is posted, I will.

So let me know if there is anything you’d like to hear more about. If you think I’m just full of s***, let me know, otherwise, be excellent to each other. Later.

Until next time, remember, any sufficiently advanced technology is indistinguishable from magic.