Occasionally I overhear (or get sucked into) an argument that goes like this:

"Go is the best language for writing servers because X."

"No, you neanderthal, Haskell is, because Y."

"Well I've never needed Y! And Haskell is too out of touch with real problems."

"But how can you tolerate such a limited and poorly-designed tool?"

"Well, I like to use it and I get stuff done, so what do you care?"

Rarely do these types of discussions change anyone's mind. :)

People will argue all day long in a vacuum about which language is best, but languages are tools, and a proper evaluation of a tool must be tied to a concrete context. Thus, the subject at hand: there are two types of servers, and specialized single-purpose servers differ from general business application servers.

Single-Purpose Infrastructure

Single-purpose infrastructure, like databases, request proxies, and messaging systems, emphasizes latency, throughput, reliability, and efficient use of hardware. Some examples are Redis, MySQL query proxies, and nginx. The choice of programming language matters to the extent that it facilitates the performance and reliability goals of the infrastructure. For these types of servers, Go is actually a great language. With Go, you get performance comparable to the JVM, a bunch of high-quality libraries, and the ability to plop your static binary onto any machine or container.

I'll share an example from IMVU. IMVU's primary content delivery system for all user-generated content, including 3D assets and user photos, was served by a pool of Perlbal proxies. This pool of machines served some of the highest traffic for the whole business, but the performance was not good and the code inside Perlbal was hard to maintain. Normally we wouldn't have cared about the code, except that we had discovered some race conditions in Perlbal. One of my coworkers finally got fed up and, within a couple of weeks, replaced this entire part of our system with a new implementation written in Go. Two machines (hooked up to 10G switches, of course) running the Go implementation could serve all of the traffic previously allocated to the entire Perlbal pool. (We ended up running more instances for redundancy, however.) Not only was the new code faster, it was dramatically more maintainable. This was a big success story for Go.

Single-purpose servers usually have only a few skilled authors, clear goals, and eventually reach a point where they're "done" and further maintenance is increasingly rare. (Though they might eventually be replaced with new infrastructure.)

Application Servers

On the other hand, every company has that server codebase with all the logic that makes the business go. You know the one. Where the code dates back to the founding of the company, and the programming language is whatever the founders liked at the time. :) It's had hundreds of contributors from many backgrounds. Perhaps the original authors are no longer with the company. The needs of the code have evolved over time with the business, and after about four or five years it's a person's or team's full-time job to manage large-scale refactoring.

I call these application servers, but I've heard the term "frontend server" used before too. Performance is a concern, but straight-line code execution is often less of an issue than efficient IO to the backend services.

Application servers are where the features live, and therefore they evolve with the needs of the business. The ability to define safe abstractions and refactor the code is more important than with single-purpose servers. Programming safety is also critical. You wouldn't want somebody's hack week feature to turn into a buffer overflow, runtime exception, corrupted data, or security vulnerability. Thus, type systems (or at least runtime invariant enforcement in languages like Python or PHP) are critical in application servers.

Go does not make a good language for application servers. The lack of immutable data makes it really awkward to enforce invariants. The lack of generics makes it awkward to express IO unless you generate code for each entity type in your database. Complicated arrangements of goroutines and channels are necessary to parallelize backend IO. Data structure traversals must be manually spelled out every time. Timeouts must be implemented manually at every call site, and while something like Haskell's mapConcurrently can be written by hand in Go, it's tricky to keep it correct in the presence of unhandled errors. (In a high-reliability system, you probably want to route a runtime error like division by zero or a nil pointer dereference to your request handler's error handler so it can be logged appropriately, rather than shutting down your whole server.)

Haskell, on the other hand, is brilliant for application servers (assuming your team is comfortable with the learning curve). Libraries like Haxl allow business rules to be specified in a natural imperative style while automatically extracting IO parallelism from the structure of the code. There is a single, always-correct timeout function. And with the type system, you can enforce invariants like unit tests being 100% reliable or transactions never being nested.

Even pre-generics Java was better than Go at defining invariants: with final, you could feel confident that a value would never change after initialization. This entire post by Dave Cheney could have been replaced with one sentence: "Too bad Go doesn't support constant values in general."

I'm not saying Go can't be made to work for application servers, nor am I saying Haskell is necessarily the answer. Haskell has relatively poor usability characteristics, and I may write more about that later. But when you are trying to pick a language for a server, make sure you understand how it's likely to evolve over time. A small group of people solving a focused problem is a completely different beast from hundreds of people hacking on business logic across years, so choose your language appropriately. :)