FTP Considered Painful

After many years of separation I was recently reunited with the venerable old FTP protocol. The years haven’t been kind to it.


Happy New Year!

Right, that’s the jollity out of the way.

I recently had cause to have some dealings with the File Transfer Protocol, which is something I honestly never thought I’d be using again. During the process I was reminded what an antiquated protocol it is. It’s old, it has a lot of wrinkles and, frankly, it’s really starting to smell.

But perhaps I’m getting ahead of myself: what is FTP anyway?

It’s been around for decades, but in these days of cloud storage and everything being web-based, it’s not something that people typically come across very often. This article is really an effort to convince you that this is something for which we should all be thankful.

Wikipedia says of FTP:

The File Transfer Protocol (FTP) is the standard network protocol used for the transfer of computer files between a client and server on a computer network.

Twenty years ago, perhaps this was even true, but this Wikipedia page was last edited a month ago! Surely the world has moved on these days? At least I certainly hope so, and this post is my attempt to explain some of the reasons why.

A Little Piece of History

“What’s so bad about FTP?” I hear you cry! Oh wait, my mistake, it wasn’t you but the painfully transparent literary device sitting two rows back. Well, regardless, I’m glad you asked. First, however, a tiny bit of history and a brief explanation of how FTP works.

FTP is a protocol with a long history, and even predates TCP/IP. It first put in an appearance in 1971 in RFC 114, a standard so old it wasn’t even put into machine-readable form until 2001. At this point it was built on the Network Control Program (NCP), which was a unidirectional precursor of TCP/IP. The simplex nature of NCP may well explain some of FTP’s quirks, but I’m getting ahead of myself.

In 1980 the first version of what we’d recognise as FTP today was defined in RFC 765. In this version the client opens a TCP connection (thereafter known as the command connection) to port 21 on a server. It then sends requests to transfer files across this connection, but the file data itself is transferred across a separate TCP connection, the data connection. This is the main aspect of FTP which doesn’t play well with modern network topologies as we’ll find out later.

Given that TCP connections are full-duplex, why didn’t they take the opportunity to remove the need for a second connection when they moved off NCP? Well, the clues are in RFC 327, from a time when people were still happy to burn RFC numbers for the minutes of random meetings. I won’t rehash it here, but suffice to say it was a different time and the designers of the protocol had very different considerations.

Whatever the reasons, once the command connection is open and a transfer is requested, the server connects back to the client machine on a TCP port specified by the FTP PORT command. This is known as active mode. Once this connection is established, the sending end can throw data down this connection.

Even back in 1980 they anticipated that this strategy might not always be ideal, however, so they also added a PASV command to use passive mode instead. In this mode, the server passively listens on a port and sends its IP address and port to the client. The client then makes a second outgoing connection to this point and thus the data connection is formed. This works a lot better than active mode when you’re behind a firewall, or especially a NAT gateway. As the IPv4 address space became increasingly crowded and NAT gateways became more popular, this form of FTP transfer became more or less entirely dominant.
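
To make the two modes concrete, here’s a minimal sketch using Python’s standard ftplib module; the hostname and anonymous credentials are purely illustrative:

from ftplib import FTP

ftp = FTP("ftp.example.com")            # command connection to port 21
ftp.login("anonymous", "guest@example.com")

ftp.set_pasv(True)                      # passive mode (the default): the client opens the data connection
ftp.retrlines("LIST")                   # the listing arrives over a fresh data connection

ftp.set_pasv(False)                     # active mode: the server connects back to a port the client advertises via PORT
ftp.retrlines("LIST")

ftp.quit()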

There were a few later revisions of the RFC to tighten up some of the definitions and provide more clarity. There was a final change that is relevant to this article, however, which was made in 1998 when adding IPv6 support to the protocol, as part of RFC 2428. One change this made was to add the EPSV command to enter extended passive mode. The intended use of this was to work around the fact that the original protocol was tied to using 4-byte addresses, and they couldn’t change this without breaking existing clients. The EPSV command simply omits the IP address that the server would send in response to PASV; instead the client reuses the same address as it used to create the command connection1.
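
To illustrate the difference, here’s roughly what the two replies look like on the wire (the address and port are invented for the example):

PASV
227 Entering Passive Mode (198,51,100,7,48,57)      <- server address 198.51.100.7, port 48*256+57 = 12345
EPSV
229 Entering Extended Passive Mode (|||12345|)      <- port only; the client reuses the address it already knows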

Not only is extended passive mode great for IPv6, it also works in the increasingly common case where the server is behind a NAT gateway. This causes problems with standard passive mode because the FTP server doesn’t necessarily know its own external IP address, and hence typically sends a response to the client asking it to connect to an address in a private range which, unsurprisingly, doesn’t work2.

It’s important to note that EPSV mode isn’t the only solution to the NATed server problem—some FTP servers allow the external address they send to be configured instead of the server simply using the local address. There are still some pitfalls to this approach, which I’ll mention later.

Simple Enough?

Given all that, what, then, are the problems with FTP?

Well, some of them we’ve covered already, in that it’s quite awkward to run FTP through any kind of firewall or NAT gateway. Active mode requires the client to be able to accept incoming connections to an arbitrary port, which is typically painful as most gateways are built on the assumption of outbound connections only and require fiddly configuration to support inbound ones.

Passive mode makes life easier for the client, but for security-conscious administrators it can be frustrating to have to enable a large range of ports on which to allow outbound connections. It’s also more painful for the server due to the dynamic ports involved, as we’ve already touched on. The server can’t use only a single port for its data connections since that would only allow it to support a single client concurrently. This is because the port number is the only thing linking the command and data connections—if two clients opened data connections at the same time, the server would have no other way to tell them apart.

Extended passive mode makes life easier all round, as long as you can live with opening the largish range of ports required. But even given all this there’s still one major issue which I haven’t yet mentioned, which crops up with the way that modern networks tend to be architected.

FTP = Forget Talking to Pools

Anyone who’s familiar with architecting resilient systems will know that servers are often organised into clusters. This makes it simple to tolerate failures of a particular system, and is also the only practical way to handle more load than a single server can tolerate.

When you have a cluster of servers, it’s important to find a way to direct incoming connections to the right machine in the cluster. One way to do this is with a hardware load balancer, but a simpler approach is simply to use DNS. In this approach you have a domain name which resolves to multiple IP addresses, sorted into a random order each time, and each address represents one member of the pool. As clients connect they’ll typically use the first address and hence incoming connections will tend to be balanced across available servers.

This works really well for protocols like HTTP which are stateless, because every time the client connects back in it doesn’t matter which of the servers it gets connected to; any of them is equally capable of handling any request. If a server gets overloaded or gets taken down for maintenance, the DNS record is updated and no new connections go to it. Simple.

This approach works fine for making the FTP command connection. However, when it comes to something that requires a data connection (e.g. transferring a file), things are not necessarily so rosy. In some cases it might work fine, but it’s a lot more dependent on how the network is configured.

Let’s illustrate a potential problem with an example. Let’s say there’s a public FTP site that’s served with a cluster of three machines, and those have public IP addresses 100.1.1.1, 100.2.2.2 and 100.3.3.3. These are hidden behind the hostname ftp.example.com which will resolve to all three addresses. This can either be in the form of returning multiple A records in one response, or returning different addresses each time. We can see examples of both of these if we look at the DNS records for Facebook:

$ host -t A facebook.com
facebook.com has address 185.60.216.35
$ host -t A facebook.com
facebook.com has address 157.240.1.35

… and for Twitter:

$ host -t A twitter.com
twitter.com has address 104.244.42.1
twitter.com has address 104.244.42.129

When the FTP client initiates a connection to ftp.example.com it first performs a DNS lookup—let’s say that it gets address 100.1.1.1. It then connects to 100.1.1.1:21 to form the command connection. Let’s say the FTP client and server are both well behaved and then negotiate the recommended EPSV mode, and the server returns port 12345 for the client to connect on.

At this point the client must make a new connection to the specified port. Since it needs to reuse the original address it connected to, let’s say that it repeats the DNS lookup and this time gets IP address 100.2.2.2 and so makes its outgoing data connection to that address. However, since that’s a physically separate server it won’t be listening on port 12345 and the data connection will fail.

OK, so you can argue that’s a broken FTP client—instead of repeating the DNS lookup it could just reconnect to the same address it got last time. However, in the case where you’re connecting through a proxy then this is much less clear cut—the proxy server is going to have no way to know that the two connections that the FTP client is making through it should go to the same IP address, and so it’s more than likely to repeat the DNS resolution and risk resolving to a different IP address as a result. This is particularly likely for sites using DNS for load-balancing since they’re very likely to have set a very short TTL to prevent DNS caches from spoiling the effect.

We could use regular passive mode to work around the inconsistent DNS problem, because the FTP server returns its IP address explicitly. However, this could still cause an issue with the proxy if it’s whitelisting outgoing connections—we would likely have just included the domain name in the whitelist, so the IP address would be blocked. Leaving that issue aside, there’s still another potential pitfall if an administrator has configured the public IP address that the FTP server returns. If the administrator has configured this via a domain name, the FTP server itself could attempt to resolve the name and get the wrong IP address, and so instruct the client to connect back incorrectly. Each server could be configured with its external IP address directly, but this is going to make centralised configuration management quite painful.

Insecurity

As well as all the potential connectivity issues, FTP also suffers from a pretty poor security model. This is fairly well known and there’s even an RFC discussing many of the issues.

One of the most fundamental weaknesses is that it involves sending the username and password in plaintext across the channel. One easy way to solve this is to tunnel the FTP connection over something more secure, such as an SSL connection. This setup, usually known as FTPS, works fairly well, but still suffers from the same issues around the separate data and command connections. Another alternative is to tunnel FTP connections over SSH.
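
For example, Python’s ftplib supports explicit FTPS out of the box. Here’s a minimal sketch, with a made-up server and credentials; note that the separate data connection is still there, so all the firewall and NAT caveats above still apply:

from ftplib import FTP_TLS

ftps = FTP_TLS("ftp.example.com")
ftps.login("andy", "s3cret")   # AUTH TLS is negotiated first, so the credentials travel encrypted
ftps.prot_p()                  # encrypt the data connections too, not just the command channel
ftps.retrlines("LIST")
ftps.quit()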

None of these options should be confused with SFTP which, despite the similarity in name, is a completely different protocol developed by the IETF3. It’s also different from SCP, just for extra confusion4. This protocol assumes only a previously authenticated and secure channel, so it’s applicable over SSH but more generally anywhere a secure connection has already been created.

Overall, then, I strongly recommend sticking to SFTP wherever you can: as we’ve seen, the world of FTP is by and large a world of pain if you care about more or less any aspect of security at all, or indeed the ability to work in any but the most trivial network architectures.
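
For comparison, fetching a file over SFTP with the third-party paramiko library looks something like this (the host, username and paths are invented); everything, authentication and data alike, travels over the single SSH connection:

import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()                          # trust the hosts already in ~/.ssh/known_hosts
client.connect("files.example.com", username="andy")    # key-based auth via the usual SSH mechanisms
sftp = client.open_sftp()
sftp.get("reports/summary.txt", "summary.txt")          # download over the same SSH channel
sftp.close()
client.close()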

In conclusion, then, I think that far from FTP being the “standard” network protocol used for the transfer of computer files, we should instead be hammering the last few nails in its coffin and putting it out of our misery.


  1. I guess by 1998 they’d given up on those crazy ideas from the 80’s of transferring between remote systems without taking a local copy—you know, the thing that absolutely nobody ever used ever. I wonder why they dropped it? 

  2. Even with extended passive mode NAT can still cause problems, as you also need to redirect the full port range that you plan to use for data connections to the right server. It solves part of the problem, however. 

  3. Interestingly there doesn’t appear to be any kind of RFC for SFTP, but instead just a draft. I find this rather odd considering how widely used it is! 

  4. Just for extra bonus confusion there’s a really old protocol called the Simple File Transfer Protocol defined in RFC 913 which could also reasonably be called “SFTP”. But it never really caught on so probably this isn’t likely to cause confusion unless some pedantic sod reminds everyone about it in a blog post or similar. 

7 Jan 2018 at 9:50AM by Andy Pearce in Software  | Photo by Rob Potter on Unsplash  | Tags: ftp  |  See comments

☑ Website Maintenance on the Move

I write most of my blog articles and make other changes to my site whilst on my daily commute. The limitations of poor network reception and different hardware have forced me to come up with a streamlined process for it, and I thought I’d share it in case it’s helpful for anyone else.


I like writing. Since software is what I know, I tend to write about that. QED.

Like many people, however, my time is somewhat pressured these days — between a wife and energetic four-year-old daughter at home and my responsibilities at work, there isn’t a great deal of time left for me to pursue my own interests. When your time is squeezed the moments that remain become a precious commodity that must be protected and maximised.

Most of my free time these days is spent on the train between Cambridge and London. While it doesn’t quite make it into my all time top ten favourite places to be, it’s not actually too bad — I almost invariably get a seat, usually with a table, and there’s patchy mobile reception along the route. Plenty of opportunities for productivity, therefore, if you’re prepared to take them.

Since time is precious, the last thing I want to do when maintaining my blog, therefore, is spend ages churning out tedious boiler-plate HTML, or waiting for an SSH connection to catch up with the last twenty keypresses as I hit a reception blackspot. Fortunately it’s quite possible to set things up to avoid these issues and this post is a rather rambling discussion of things I’ve set up to mitigate them.

Authoring

The first time-saving tool I use is Pelican. This is a static site generator which processes Markdown source files and generates static HTML from them according to a series of Jinja templates.

When first resurrecting my blog from a cringeworthy earlier effort1, the first thing I had to decide was whether to use some existing blogging platform (Wordpress, Tumblr, Medium, etc.), either self-hosted or otherwise. The alternative I’d always chosen previously was to roll my own web app — the last one being in Python using CherryPy — but I quickly ruled out that option. If the point was to save time, writing my own CMS from scratch probably wasn’t quite the optimal way to go about it.

Also, the thought of chucking large amounts of text into some clunky old relational database always fills me with a mild sense of revulsion. It’s one of those solutions that only exists because if all you’ve got is a hosted MySQL instance, everything looks like a BLOB.

In the end I also rejected the hosted solutions. I’m sure they work very well, with all sorts of mobile apps and all such mod cons, but part of the point of all this for me has always been the opportunity to keep my web design skills, meagre as they might be, in some sort of barely functional state. I’m also enough of a control freak to want to keep ownership of my content and make my own arrangements for backing it up and such — who knows when these providers will disappear into the aether.

What I was really tempted to do for a while was build something that was like a wiki engine but which rendered with appropriate styling like a standard website — it was the lightweight markup that really appealed to me. At that point I discovered Pelican and suddenly I realised that with this simple tool I could throw all my Markdown sources into a Git repository and run them through Pelican2 to generate the site. Perhaps I’m crazy but it felt like a tool for storing versioned text files might be a far more appropriate choice than a relational database for, you know, storing versioned text files. Just like a wiki, but without the online editing3.

All there was to do then was build my own Pelican template, set up nginx to serve the whole lot and I was good to go. Simple enough.
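
If you’re curious what “simple enough” amounts to, it’s little more than a short settings file and a one-line build. The following is a minimal sketch rather than my actual configuration, and the paths and theme name are made up:

# pelicanconf.py -- minimal illustrative settings, not my real config
AUTHOR = "Andy Pearce"
SITENAME = "Andy Pearce's blog"
SITEURL = ""                  # leave empty for local or staging builds
PATH = "content"              # where the Markdown sources live
THEME = "themes/my-theme"     # a custom set of Jinja templates
TIMEZONE = "Europe/London"
DEFAULT_LANG = "en"

The site is then generated with something like pelican content -o output -s pelicanconf.py, leaving static HTML for nginx to serve.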

Updating the site

Except, of course, that getting the site generated was only half the battle. I could SSH into my little VPS, write some article in Markdown using Vim and then run Pelican to generate it. That’s great when I’m sitting at home on a nice, fast wifi connection — but when I’m sitting at home I’m generally either spending time with my family or wading through that massive list of things that are way lower on the fun scale than blogging, but significantly higher on the “your house will be an uninhabitable pit of utter filth and despair” scale.4

When I’m sitting on a train where the mobile reception varies between non-existent and approximately equivalent to a damp piece of string, however, remote editing is a recipe for extreme frustration and a string of incoherently muttered expletives every few minutes. Since I don’t like to be a source of annoyance to other passengers, it was my civic duty to do better.

Fortunately this was quite easy to arrange. Since I was already using a Git repository to store my blog, I could just set up a cron job which updated the repo, checked for any new commits and invoked Pelican to update the site. The script is quite simple to write, and so is the cron job that invokes it:

# Pull the blog sources and rebuild the site if anything changed (a crontab entry must fit on a single line)
*/5 * * * *     git -C /home/andy/www/blog-src pull; /home/andy/www/blog-src/tools/check-updates.py

If you look at check-updates.py you’ll find it just uses git log -1 --pretty=oneline to grab the ID of the current commit and compares it to the last time it ran — if there’s any difference, it triggers a run of Pelican. It has a few other complicating details like allowing generation in a staging area and doing atomic updates of the destination directory using a symlink to avoid a brief outage during the update, but essentially it’s doing a very simple job.
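
To illustrate, the core of the idea is something like the following sketch. This is not the actual script (it skips the staging area and the atomic symlink swap described above), and the paths are assumptions:

#!/usr/bin/env python3
# Minimal sketch of the idea behind check-updates.py -- not the real script.
import subprocess
from pathlib import Path

REPO = Path("/home/andy/www/blog-src")       # assumed location of the cloned blog sources
STATE = REPO / ".last-built-commit"          # where we remember the last commit we built

def current_commit():
    # Grab the ID of the commit currently checked out.
    line = subprocess.check_output(
        ["git", "log", "-1", "--pretty=oneline"], cwd=REPO, text=True)
    return line.split()[0]

def main():
    commit = current_commit()
    last = STATE.read_text().strip() if STATE.exists() else ""
    if commit != last:
        # Something new was pulled, so regenerate the site with Pelican.
        subprocess.run(["pelican", "content", "-o", "output",
                        "-s", "pelicanconf.py"], cwd=REPO, check=True)
        STATE.write_text(commit)

if __name__ == "__main__":
    main()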

This was now great — I could clone my blog’s repo on to my laptop, perform local edits to the files, run a staging build with a local browser to confirm them and then push the changes back to the repo during brief periods of connectivity. Every five minutes my VPS would check for updates to the repo and regenerate the site as required. Perfect.

There’s an app for that

Well, not quite perfect as it turns out. While travelling with a laptop it was easy to find a Git client, SSH client and text editor, but sometimes I travel with just my iPad and a small keyboard, and there things were a little trickier.

However, I’ve finally discovered a handful of apps that have streamlined this process:

Working Copy
Since I put Git at the heart of my workflow it was always disappointing that it took so long for a decent Git client to arrive on iOS. Fortunately we now have Working Copy and it was worth the wait. Whilst unsurprisingly lacking some of the more advanced functionality of command-line Git, it’s effective and quite polished and does the job rather nicely. It has a basic text editor built in, but one of its main benefits is that it exposes the working directory to other applications which allows me to choose something a little more full-featured.
Textastic
This is the editor I currently use on both iOS and Mac. It’s packed with features and can open files from Working Copy as well as supporting direct SFTP access and other mechanisms. I won’t go through its myriad features; suffice to say it’s very capable. I should give an honourable mention to Coda for iOS, Panic Inc.’s extremely polished, beautifully crafted text editor, which I used to use. Coda has a built-in SSH client and is really heavily optimised for remote editing, so it’s a great alternative if you want to explore. The original reason I switched was that, with my unreliable uplink, Textastic’s more explicit download/edit/upload model worked a little better for me than Coda’s more implicit remote editing with caching. Now the fact that Textastic supports local editing within the Working Copy repo is also a factor. I’ll also be totally honest and point out that I haven’t played with Coda since they released a (free) major update a while back. I’ve nothing but praise for its presentation and overall quality, however.
Prompt 2
If Coda for iOS didn’t quite tempt me as much as Textastic, another of Panic’s offerings, Prompt 2, is absolutely exactly what I need. This is by far the most accomplished SSH client I’ve used on iOS. It supports all the functionality you need with credentials, plus you can layer Touch ID on top if you want it to remember your passphrases. Its terminal emulation is pretty much perfect - I’ve never had any issues with curses or anything else. It runs multiple connections effortlessly and keeps them open in the background without issue. It can even pop up a notification reminding you to swap back to it if it’s been idle too long, to keep your connections alive. As with any remote access on a less than perfect link I’d very strongly suggest using tmux, but Prompt 2 does about all it can to maintain the stability of your connections.

Summary

That’s about the long and the short of it, then. I’ve been very happy with my Git-driven workflow and found it flexible enough to cope with changes in my demands and platforms. Any minor deficiencies I can work around with scripting on the server side.

The nice thing about Git, of course, is that its branching support means that if I ever wanted to set up, say, a staging area then I can do that with no changes at all. I just set up another checkout and cron job on the server which track the staging branch instead of master, and I’m good to go — no code changes required, except perhaps some trivial configuration file updates.
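
In practice that second cron entry would look much like the first, along these lines (the path is an assumption, and the staging checkout would have the staging branch checked out):

*/5 * * * *     git -C /home/andy/www/blog-staging pull; /home/andy/www/blog-staging/tools/check-updates.py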

Hopefully that’s provided a few useful pointers to someone interested in optimising their workflow for sporadic remote access. I was in two minds about whether to even write this article since so much of it is fairly obvious stuff, but sometimes it’s just useful to have the validation that someone else has made something work before you embark on it — I’ve done so and can confirm it works very well.


  1. Not to be confused with a hilariously precocious first version that I created shortly after I graduated. 

  2. You may be more familiar with Jekyll, a tool written by Github co-founder Tom Preston-Werner which does the same job. The only reason I chose Pelican was the fact it was written in Python and hence I could easily extend it myself without needing to learn Ruby (not that I wouldn’t like to learn Ruby, given the spare time). 

  3. Of course, one could quite reasonably make the point that the online editing is more or less the defining characteristic of a wiki, so perhaps instead of “just like a wiki” I should be saying “almost wholly unlike a wiki but sharing a few minor traits that I happened to find useful, such as generating readable output from a simple markup that’s easier to maintain”, but I prefer to keep my asides small enough to fit in a tweet. Except when they’re talking about asides too large to fit in a tweet — then it gets challenging. 

  4. The SI unit of measurement is “chores”. 

19 Sep 2016 at 1:45PM by Andy Pearce in Software  | Photo by rawpixel.com on Unsplash  | Tags: web  |  See comments

☑ Brexit, or Brexit Not — There Is No Try

I voted against Brexit as I feel the UK is significantly better off within the EU. However, the looming uncertainty over whether the UK will follow through is much worse than either option.


On Thursday 23 June the United Kingdom held a referendum to decide whether to remain within the European Union, of which it has been a member since 1973. The vote was to leave by a majority of 52% with a fairly substantial turnout of almost 72%. Not the largest majority, but a difference of over a million people can’t be called ambiguous.

So that was it then — we were out. Time to start comparing people’s plans for making it happen to decide which was the best.

Except, of course, it turned out nobody really had any plans. The result seemed to have been a bit of a shock to everyone, including all the politicians who were campaigning for it. Nobody really seemed to know what to do next. Disappointing, but hardly surprising — we’re a rather impulsive nation, always jumping into things without really figuring out what our end game should be. Just look at the shambles that followed the Iraq war.

Fortunately for the Brexiteers there was a bit of a distraction in the form of David Cameron’s resignation — having campaigned to remain within the EU he felt that remaining as leader was untenable. Well, let’s face it, that’s probably disingenuous — what he most likely really felt was he didn’t want to go down in history as the Prime Minister who took the country out of the EU, just in case (as many people think quite likely) it’s a bit of a disaster, quite possibly resented by generations to come.

This triggered an immediate leadership contest within the Tory party which drew all eyes for a time, until former Home Secretary Theresa May was left as the only candidate and assumed leadership of the party. At this point everyone’s attention seems to be meandering its way back to thoughts of Brexit and all the questions it raises.

And a lot of questions there certainly are. There are immigration questions, NHS questions, questions for the Bank of England, questions for EU migrants, questions for Northern Ireland, profound questions for Scotland1, questions for David Davis, and even a whopping great 94 questions on climate and energy policy, which frankly I think is rather hypocritical — they know full well that nobody has any use for so many questions and most of them will end up on landfill.

To my mind, however, there’s still one question that supersedes all these when talking about Brexit — namely, will Br actually exit?

You’d think this was a done deal — I mean, we had a referendum on it and everything. Usually clears these things right up. But in this case, even well over a month after the vote, there’s still talk about whether we’re going to go through with it.

Apparently the legal situation seems quite muddy but there are possible grounds for a second referendum — although Theresa May is on record as rejecting that possibility. I must say I can see her point — to reject the clearly stated opinion of the British public would need some pretty robust justification and “the leave campaigners lied through their teeth” probably doesn’t really cut it. It’s not like people aren’t used to dealing with politicians being economical with the truth in general elections.

Then we hear that the House of Lords might try to block the progress of Brexit — or at least delay it. Once again, it’s not yet at all clear to what extent this will happen; and if it happens, how effective it will be; and if it’s effective, how easily the government can bypass it. For example, the government could try to force it through with the Parliament Act.

What this all adds up to is very little clarity right now. We have a flurry of mixed messages coming out of government where they tell us that the one thing they are 100% certain of is that they’re definitely going to leave the EU, but not only can’t they give us a plan, they can’t even give us a rough approximation of when they’ll have a plan; we have a motley crew of different groups clamouring for increasingly desperate ways to delay, defer or cancel the whole thing, but very little certainty on whether they even have the theoretical grounds to do so let alone the public support to push it through; and we have an increasingly grumpy EU who are telling us that if we’re really going to leave then we should jolly well get on with it and don’t let Article 50 hit us in the bum on the way out.

Meanwhile the rest of the world doesn’t seem to know what to make of it, so it’s not clear that we’ve seen much of the possible impact, even assuming we do go ahead. But to think there hasn’t been any impact is misleading — even when things are uncertain we’ve already seen negative impacts on academics, education and morale in the public sector. Let’s be clear here, it hasn’t happened yet and it isn’t even a certainty that it will, and we’re already seeing a torrent of negative sentiment.

To be fair, though, we haven’t yet really had the chance to see any possible positive aspects of the decision filtering through. In fact, we probably won’t see any of those until the decision is finalised — or at least until Article 50 is triggered and there’s a deadline to chivvy everyone along.

That’s a big problem.

I think that the longer this “will they/won’t they” period of uncertainty carries on, the more we’ll start to see these negative impacts. Nobody wants to bank on the unlikely event that the UK will change course and remain in the EU, but neither can anyone count on the fact that we won’t. We’re stuck in an increasingly acrimonious relationship that we can’t quite bring ourselves to end yet. If they could find an actor with a sufficient lack of charisma to play Nigel Farage, they could turn it into a low budget BBC Three sitcom.

Don’t get me wrong, I voted firmly to remain in the EU. But whatever we do, I feel like we, as a nation — and by that I mean they as a government that we, as a nation, were daft enough to elect2 — need to make a decision and act on it. This wasteland of uncertainty is worse than either option, and doesn’t benefit anyone except the lawyers and the journalists — frankly they can both find more worthwhile ways to earn their keep.

So come on Theresa, stop messing about. Stick on a Spotify3 playlist called something like “100 Best Break Up Songs”, mutter some consoling nonsense to yourself about how there are plenty more nation states in the sea and pick up the phone. Then we can get on with making the best of wherever we find ourselves.


  1. Although they’re asked by Michael Gove so I don’t know if they count — given his behaviour during the Tory leadership election I’m not sure he’s been allowed off the naughty step yet. 

  2. In the interests of balance I should point out that, in my opinion, more or less every party this country elected since 1950 has been a daft decision. Probably before that, too, but my history gets a little too rusty to be certain. The main problem is that the people elected have an unpleasant tendency to be politicians, and if there’s one group of people to whom the business of politics should never be entrusted, it’s politicians. 

  3. Assuming Spotify, being Swedish, are still allowed? 

4 Aug 2016 at 7:45PM by Andy Pearce in Politics  | Photo by James Giddins on Unsplash  | Tags: uk  brexit politics  |  See comments
