Designing a Delightful Functional Programming Language

ML and Haskell have amazingly powerful type systems. Their type systems, and to some degree syntax, have widely influenced the modern crop of languages. Both can achieve machine performance. Haskell has fantastic concurrency. Both have concise, boilerplate-free syntax — some would say too concise. Both have the ability to build abstractions without impacting performance.

So why aren’t there more Haskell or ML programmers? Like, it’s fantastic that Rust and Swift borrowed some of their ideas, but why did it take two decades? Why do things like CoffeeScript and Go spread like wildfire instead? Go is especially bizarre because it’s not really very different from Java in many ways, and a lot worse in others.

I’m sure there are many reasons. Tooling is important. Marketing is important. Haskell is (or perhaps just was) largely a research language. The first-time user experience is important. I don’t claim to know every reason why some languages spread and others don’t, but I know at least one thing: syntax matters a whole lot more than we’d like to admit.

Consider all of the bracing and spacing and indentation arguments people have about C.

Remember that brilliant people have whined about the fact that Python uses whitespace to indicate nesting, as if that materially affects how much work they can get done.

Hell, CoffeeScript gained significant popularity by adding nothing but syntax to JavaScript. There were many opportunities to add actual features to CoffeeScript, like lightweight coroutine syntax, but CoffeeScript’s motto was “It’s just JavaScript.” Its popularity is based on the strength of the syntax alone.

There are two ways to interpret this phenomenon. One is “stupid humans and their quirks and biases, if only the uneducated masses would see the light and focus on what really matters”. Another is the realization that programming is a deeply human activity. A programming language lets humans take their human thoughts and map them into something a computer can execute. The aesthetics of that activity are important.

If a language is going to get adopted, at least without significant corporate backing, it needs to appeal to people’s tastes.

Back to Haskell and ML. They have their proponents, but only now are they starting to dance with the mainstream. It’s a shame, because software would be in a much better place if all the material benefits of ML and Haskell (bidirectional type inference, cheap and safe concurrency, sum types) had spread faster.

I’ll share some relevant experiences.

My personal OCaml experience occurred in university. One day I decided to play with this OCaml thing. My rationale is perhaps embarrassing, but I quit for two reasons. The first was that the addition operator for integers is + while the one for floats is +. (note the trailing dot). The other was the pervasive use of ;;, at least in the REPL. Honestly, it felt tasteless, so I dropped it and moved on, never getting to see the real benefits (polymorphism, type inference, modules, sum types).

Now, Haskell. I was part of the effort to get Haskell adopted at IMVU. The benefits of using Haskell were simply too great to ignore. But when we tried to get everyone to learn it, a nontrivial fraction of engineers developed a visceral aversion. “I understand all the benefits but… I just hate it.” “Why do you hate it?” “The whitespace, $, the two let forms, the weird function names… I dunno, it’s just ugly and I hate it.”

It made me deeply sad that these seemingly superficial complaints prevented what should have been a slam dunk, at least when it came to safety and performance and maintainability and development speed.

When I joined Dropbox, I had some conversations with teammates about the benefits of something like Haskell (most people either don’t know about it or think it’s some academic obscurity), and I got a serious negative reaction to the idea of static types. “They’re heavyweight.” “They’re hard to learn.” “They hinder refactoring.” “They hurt development speed.”

None of these claims is necessarily true. After I poked and prodded, I figured out that the resistance to the idea of static typing comes from prior exposure to things like C++ or Java (or Go), which have especially verbose syntaxes and type systems. This is similar to what happens with functional programming, where a common reaction to the idea is “Ugh, I hated my lisp class in college.”

These realizations led Andy and me to take on a project: design a programming language with all the expressive power and semantic correctness of an ML, yet with a syntax and aesthetic that would be instantly recognizable to your typical Go or JavaScript programmer. Surely it’s achievable. This is largely a human factors project – while the PL theory is important, we do not intend to significantly innovate there. Instead, the main innovation is applying familiarity and usability to a domain with the opposite reputation.

In subsequent posts, I will describe some of the design decisions behind DFPL, the Delightful Functional Programming Language.

PL Usability: Name Fewer Things

If you follow me on Twitter or know me in person, you know I’ve been somewhat obsessed with programming languages lately. Why? In my career I’ve written high-performance desktop software, embedded controllers, 3D engines, scalable and low-latency web stacks, frontend JavaScript stacks… and at this stage in my life I’m tired of dealing with stupid human mistakes like null pointer exceptions, uninitialized data, unhandled cases, accidentally passing a customer ID where you meant an experiment ID, accidentally running some nontransactional code inside of a transaction, specifying timeouts with milliseconds when the function expects seconds…

All of these problems are solvable with type systems. Having learned and written a pile of Haskell at IMVU, I’m now perpetually disappointed with other languages. Haskell has a beautiful mix of a powerful type system, bidirectional type inference, and concise syntax, but it also has a reputation for being notoriously hard to teach.

In fact, getting Haskell adopted in the broader IMVU engineering organization was a significant challenge, with respect to both evangelization (why is this important) and education (how do we use it).

Why is that? If Haskell is so great, what’s the problem? Well, there are a few reasons, but I personally interviewed nearly a dozen IMVU engineers to get their thoughts about Haskell and a common theme arose again and again:

There are too many NAMES in Haskell. String, ByteString, Text. Functor, Applicative, Monad. map, fmap. foldl, foldl’, foldr. Pure let … in syntax vs. monadic let syntax. Words words words.

Some people are totally okay with this. They have no problem reading reference documentation, memorizing and assigning precise meaning to every name. This is just a hypothesis, but I wonder whether mathematically-oriented people tend to fall into this category. Either way, it’s definitely true that some people tolerate a proliferation of names less than others.

As someone who can’t remember anything, I naturally stumbled into a guiding principle for API design: express concepts with a minimal set of names. This necessitates that the concepts be composable.

And now I’m going to pick on a few specific examples.


My first example is Haskell’s maybe function.

First of all, what kind of name is that. I bet that, if you didn’t know what it did, you wouldn’t be able to guess. (A classic usability metric is how accurately things can be guessed.)

Second of all, why? Haskell already has pattern matches. You don’t need a tiny special function to pattern match Maybes. Compare, for some m :: Maybe Int:

maybe (return ()) (putStrLn . show) m


case m of
  Just i -> putStrLn (show i)
  Nothing -> return ()

Sure, the latter is a bit more verbose, but I’d posit the explicit pattern match is almost always more readable in context. Why? Because maybe is yet another word to memorize. You need to know the order of its arguments and its meaning. The standard library would be no weaker without this function. Multiple experienced Haskellers have told me to prefer maybe over explicit pattern matches, but that advice doesn’t optimize for readability over the long term.
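
For what it’s worth, maybe itself is nothing more than a pattern match behind a name; if the standard library dropped it, anyone who wanted it could define it with a type signature and two equations:

maybe :: b -> (a -> b) -> Maybe a -> b
maybe def _ Nothing  = def
maybe _   f (Just x) = f x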

Pattern matches are great! They always work the same way (predictability!), and they’re quite explicit about what’s going on.


For my second example, the last thing I want to do is pick on this contributor, because the use case was valid and the pull request meaningful. (In the end, we found another way to provide the same functionality, so the contributor did add value to the project.)

But what I want to show is why I rejected this pull request. pathwalk was intended to be a small, light, learnable API that anyone could drop into a project. It exposes one function, pathWalk, with three variants, one for early exit, one for accumulation of a result, and one that uses lazy IO for convenience. All of these are intended to be, if not memorizable, quite predictable. The pull request, on the other hand, added a bunch of variations with nonobvious names, as well as six different callback types. For an idea as simple as directory traversal, introducing a pile of exported symbols would have a negative impact on usability. Thus, to keep the library’s interface small and memorizable, I rejected the PR.


Now I’ll discuss the situation that led me to writing this post. My team at Dropbox recently adopted TypeScript to prevent a category of common mistakes. For our own sanity, we were already annotating parameters and return types — now we’re simply letting the computer verify correctness for us.

However, as we started to fill in type annotations for the program, a coworker of mine expressed some concern. "We shouldn’t write so many type definitions in this frontend code. JavaScript is a scripting language, not heavyweight like Java." That argument didn’t make sense to me. Surely it doesn’t matter whether we’re writing high-performance backend code or frontend code — type systems satisfy the same need in either situation. And all I was doing was annotating the parsed return type of a service request to prevent mistakes. The specific code looked something like this:

// at the top of the file
interface ParsedResponse {
  users: UserModel[];
  payloads: PayloadModel[];
  // ...
}

// way down in the file
function makeServiceRequest(request: Request): Promise<ParsedResponse> {
  return new Promise((resolve, reject) => {
    // ... parse the raw response, then:
    resolve({
      users: parsedUserModels,
      payloads: parsedPayloadModels,
      // ...
    });
  });
}

At first I could not understand my coworker’s reaction. But I realized my coworker’s concerns weren’t about the types — after all, he would have documented the fields of the response in comments anyway, and, all else equal, who doesn’t want compiler verification that their code accesses valid properties?

What made him react so negatively was that we needed to come up with a name for this parsed response thing. Moreover, the definition of the response type was located far away from the function that used it. This adds some mental overhead. It’s another thing to look at and remember.

Fortunately, TypeScript uses structural typing (as opposed to nominal typing): two types are compatible whenever their fields line up, so you can avoid naming intermediate types entirely. Thus, if we change the code to:

function makeServiceRequest(request: Request): Promise<{
  users: UserModel[],
  payloads: PayloadModel[],
}> {
  return new Promise((resolve, reject) => {
    // ... parse the raw response, then:
    resolve({
      users: parsedUserModels,
      payloads: parsedPayloadModels,
    });
  });
}

Now we can have type safety as well as the concision of a dynamic language. (And if we had bidirectional type inference, it would be even better. Alas, bidirectional type inference and methods are at odds with each other.)

Other Examples

git has the concept of an "index" where changes can be placed in preparation for an upcoming commit. However, the terminology around the index is inconsistent. Sometimes it’s referred to as the "cache", sometimes the "index", sometimes the "stage". I get it, "stage" is supposed to be the verb and "index" the noun, but how does that help me remember which option I need to pass to git diff? It doesn’t help that "stage" and "index" are each used as both noun and verb.

In the early 2000s I wrote a sound library, Audiere, which was praised for how easy it was to get up and running relative to other libraries at the time. I suspect some of this was due to needing to know only a couple of concepts to play sounds or music. FMOD, on the other hand, required that you learn about devices and mixing channels and buffers. Now, all of these are useful concepts, and every serious sound programmer will have to deeply understand them in the end. But from the perspective of someone coming in fresh, they just want to play a sound. They don’t want to spend time studying a bunch of foundational concepts. Audiere had the deeper concepts too, but it was useful to provide a simpler API for the simple 80% use case.

Python largely gets this right. If you want to know the size of some x, for any x, you simply write len(x). You don’t need a different function for each data type.

To summarize, whenever you provide an API or language of some kind, think hard about the names of the ideas, and whether you can eliminate any. It will improve the usability of your system.

Thinking About Performance

Update: Bill Gropp covers this topic in depth in a presentation at University of Illinois: Engineering for Performance in High-Performance Computing.

Two things happened recently that made me think my approach to performance may not be widely shared.

First, when I announced BufferBuilder on reddit, one early commenter asked "Did you run your Haskell through a profiler to help identify the slow parts of your code?" A fairly reasonable question, but no, I didn’t. And the reason is that, if stuff shows up in a Haskell profiler, it’s already going to be too slow. I knew, up front, that an efficient way to build up buffers is a bounds check followed by a store (in the case of a single byte) or copy (in the case of a bytestring). Any instructions beyond that are heat, not light. Thus, I started BufferBuilder by reading the generated assembly and building a mental model of how Haskell translates to instructions, not by running benchmarks and profilers. (Benchmarks came later.) Any "typical" CPU-bound Haskell program is going to spend most of its time in stg_upd_frame_info and stg_ap_*_info anyway. ;)

Next, on my local programming IRC channel, I shared a little trick I’d seen on Twitter for replacing a div with a shift: (i+1)%3 == (1<<i)&3 for i in [0, 2]. One person strenuously objected to the idea of using this trick. Paraphrasing, their argument went something like "the meaning of the code is not clear, so tricks like that should be left to the compiler, but it doesn’t work for all values of i, so a compiler would never actually substitute the left for the right. Just don’t bother." We went back and forth, after which it became clear that what the person REALLY meant was "don’t use this trick unless you’re sure it matters". And then we got to "you have to run a profiler to know it matters." I contend it’s possible to use your eyes and brain to see that a div is on the critical path in an inner loop without fancy tools. You just have to have a rough sense of operation cost and how out-of-order CPUs work.
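
As an aside, the identity itself is easy to check exhaustively – a few lines of Haskell (my own illustration, not part of the original discussion) enumerate all three cases:

import Data.Bits (shiftL, (.&.))

-- For each i in [0, 2], print i, (i+1) mod 3, and (1 << i) & 3; the last two agree.
main :: IO ()
main = mapM_ check [0, 1, 2 :: Int]
  where
    check i = print (i, (i + 1) `mod` 3, ((1 :: Int) `shiftL` i) .&. 3)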

Over the years I have built a mental framework for thinking about performance beyond the oft-recommended "get the program working, then profile and optimize the hot spots". My mental framework comes from three sources:

  1. Rico Mariani has written a lot about designing for performance so you aren’t caught, late in the project, unable to hit your perf targets. That is, understand the machine and problem constraints, and sketch out on paper the solution so you can make sure it fits. Use prototypes as necessary.
  2. I’ve always been interested in how our programs run on physical silicon. The Pentium Chronicles: The People, Passion, and Politics Behind Intel’s Landmark Chips is an excellent history of the design of Intel’s out-of-order P6 line.
  3. I’ve been involved with too many projects run with the philosophy of "get it working, profile and optimize later". It’s very easy for this strategy to fail, resulting in a program that’s uniformly sluggish and uneconomical to fix.

Performance is Engineering

To hit your performance goals, you first need to define your goals. Consider what you’re trying to accomplish. Some example performance goals:

  • Interactive VR on a mobile phone
  • Time between mouse click in menus and action taken is under 100 ms
  • Load a full 3D chat room in five seconds
  • First page of search results from a mountain in Hawaii in half a second
  • Latest experimental analyses in half a day

Also, understand why you want to hit those goals. Todd Hoff’s excellent Latency is Everywhere and it Costs You Sales has several links to research showing how performance can affect your business metrics and customer satisfaction.

For BufferBuilder, our goal was to match the performance, in Haskell, of a naive C++ buffer-building library.

Now that you have a goal in mind, and presumably you understand the problem you’re trying to solve, there’s one final piece to understand. You could call it the "fundamental constants of human-computer interaction", which split into a human half and a computer half.

Very roughly:

  • ~16 ms – frame budget for interactive animation
  • 100 ms – user action feels instantaneous
  • 200 ms – typical human reaction time
  • 500+ ms – perceptible delay
  • 3+ seconds – perceived sluggishness
  • 10+ seconds – attention span is broken

And on the computer:

  • 1 cycle – latency of simple bitwise operations
  • a few – maximum number of independent instructions that can be retired per cycle
  • 3-4 cycles – latency of L1 cache hit
  • ~dozen cycles – latency of L2 cache hit
  • ~couple dozen cycles – integer division on modern x86
  • couple hundred cycles – round-trip to main memory
  • 50-250 us – round-trip to SSD
  • 250-1000 us – in-network ethernet/tcp round-trip
  • 10 ms – spinny disk seek
  • 150 ms – IP latency between Australia and USA

Throughput numbers:

  • 8.5 GB/s – memory bandwidth on iPhone 5
  • ~100 MB/s – saturated gigabit ethernet connection
  • 50-100 MB/s – spinny disk transfer speed
  • 4.5 Mb/s – global average connection speed
  • 3.2 Mb/s – average mobile bandwidth in USA

These numbers can be composed into higher-level numbers. For example:

  • 1 ms – parsing 100 KB of JSON on a modern, native parser such as RapidJSON or sajson.

It’s unlikely JSON parsing will become appreciably faster in the future – parsing is a frustrating, latency-bound problem for computers. "Read one byte. Branch. Read next byte. Branch."

While throughput numbers increase over time, latency has only inched downwards. Thus, on most typical programs, you’re likely to find yourself latency-bound before being throughput-bound.

Designing for Performance

Once you’ve defined your performance goal, you need to make the numbers fit. If your goal is interactive animation, you can barely afford a single blocking round-trip to a spinny disk per frame. (Don’t do that.) If your goal is an interactive AJAX application that feels instant from all around the world, you must pay VERY close attention to the number of IP round-trips required. Bandwidth, modulo TCP windowing, is usually available in spades – but accumulated, serial round-trips will rapidly blow your experience budget. If you want a WebGL scene to load in five seconds on a typical global internet connection, the data had better fit in (5 s × 4.5 Mb/s ÷ 8 bits per byte) ≈ 2.8 MB, minus initial round-trip time.

For BufferBuilder, since we knew our goal was to match (or at least come reasonably close to) C++ performance in Haskell, we didn’t need a profiler. We knew that appending a single byte to a buffer could be minimally expressed with a load of the current output pointer, a (predicted) bounds check, and a write of the new output pointer. Appending a buffer to another is a (predicted) bounds check, a memcpy, and updating the output pointer.
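
To make that concrete, here is a rough Haskell sketch of the byte-append hot path – illustrative types and names only, not BufferBuilder’s actual implementation:

import Data.IORef (IORef, readIORef, writeIORef)
import Data.Word (Word8)
import Foreign.Ptr (Ptr, plusPtr)
import Foreign.Storable (poke)

-- Illustrative buffer: a mutable write pointer plus the end of the allocation.
data Buffer = Buffer
  { bufCur :: IORef (Ptr Word8)
  , bufEnd :: Ptr Word8
  }

-- The hot path: load the write pointer, one (predicted) bounds check,
-- one store, one pointer bump. Anything beyond that is overhead.
appendByte :: Buffer -> Word8 -> IO ()
appendByte buf byte = do
  cur <- readIORef (bufCur buf)
  if cur < bufEnd buf
    then do
      poke cur byte
      writeIORef (bufCur buf) (cur `plusPtr` 1)
    else
      growAndRetry buf byte  -- rare slow path

growAndRetry :: Buffer -> Word8 -> IO ()
growAndRetry _ _ = error "reallocation elided in this sketch"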

A profiler is not needed to achieve the desired performance characteristics. An understanding of the problem, an understanding of the constraints, and careful attention to the generated code is all you need.

We can apply this approach at almost any scale. Want a rich website that uses less than 100 MB of RAM yet has high-resolution images, gradients, transitions, and effects? Build prototypes to measure the costs of each component, build up a quantitative understanding, and make the numbers fit. (Of course, don’t forget to actually measure real memory consumption on the resulting page, lest you be surprised by something you missed.)

Want to design a mobile application that goes from tap to displayed network data in 200 ms? Well, good luck, because Android devices have touchscreen latency of 100+ ms. (Edit: mdwrigh2 on Hacker News points out that this data is out of date for modern Android handsets.) And it can take a second or more to spin up the radio!

To summarize, given a rough understanding of the latencies of various layers of a system, and knowledge of the critical path through the code, simply sum the numbers together and you should have a close approximation to the total latency. Periodically double-check your understanding against real measurements, of course. :)


Are you saying profilers are useless?!

Absolutely not. Profilers are awesome. I have spent a great deal of time with the Python profiler. I particularly love AMD’s old CodeAnalyst tool. (Not the new CodeXL thing.) Sampling profilers in general are great. I’ve also built a bunch of intrusive profilers for various purposes.

But always keep in mind that profilers are for exploring and learning something new. By the time you’ve built your application, you may not actually be able to "profile your way to success".

Should I unilaterally apply this advice, irrespective of context?

Of course not. A large number of programs, on a modern CPU, trivially fit in all the performance budgets you care about. In these situations, by all means, write O(n) loops, linked lists, call malloc() inside every function, use Python, whatever. At that point your bottleneck is human development speed, so don’t worry about it.

But continue to develop an understanding of the costs of various things. I remember a particular instance when someone replaced hundreds of <img> tags on a page with <canvas> for some kind of visual effect. Bam, the page became horrifically slow and consumed an enormous amount of memory. Browsers are very smart about minimizing working set in the presence of lots of images (flushing decoded JPEGs from memory until they’re in view is one possible technique), but <canvas> is freeform and consumes at least width*height*4 bytes of pagefile-backed memory.

What about algorithmic complexity?

Improving algorithmic complexity can be a significant win. Especially when you’re Accidentally Quadratic. Reach for those wins first. But once you get down to O(n) vs. O(lg n), you’re almost certainly limited by constant factors.

In any context, you should aim for the biggest wins first. Let’s say you’re writing a web service that talks to a database, fetching info for 100 customers. By far the biggest optimization there is to run one query to fetch 100 customers (as a single round-trip can be 1-10 ms) instead of 100 queries of one customer each. Whenever someone says "My new web service is taking 1.5 seconds!" I can almost guarantee this is why. Both approaches are technically O(n), but the query issue overhead is a HUGE constant factor.
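
To put rough numbers on it: at 5 ms per round-trip (comfortably within the 1-10 ms range above), 100 serial queries cost roughly 500 ms of pure latency, while one batched query costs a single round-trip. That 1.5-second web service is almost certainly waiting on the network, not computing.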

In interviews I sometimes ask candidates about the performance difference between two algorithms that have the same algorithmic complexity. There is no precisely correct answer to my question, but "the execution time is the same" is wrong. Constant factors can be very large, and optimizing them can result in multiple-orders-of-magnitude improvements.

But Knuth said…!

Read what he actually said. Also, since then, we’ve developed a much deeper understanding of when mature optimizations are appropriate. Passing const std::string& instead of std::string is not a premature optimization – it’s just basic hygiene you should get right.

Code Reviews: Follow the Data

This is a mirror of the corresponding post at the IMVU Engineering Blog.

After years of reviewing other people’s code, I’d like to share a couple practices that improve the effectiveness of code reviews.

Why Review Code?

First, why review code at all? There are a few reasons:

  • Catching bugs
  • Enforcing stylistic consistency with the rest of the code
  • Finding opportunities to share code across systems
  • Helping new engineers spin up with the team and project
  • Helping API authors see actual problems that API consumers face
  • Maintaining the health of the system overall

It seems most people reach for one of two standard techniques when reviewing code; they either review diffs as they arrive or they review every change in one large chunk. We’ve used both techniques, but I find there’s a more effective way to spend code review time.

Reviewing Diffs

It seems the default code review technique for most people is to sign up for commit emails or proposed patches and read every diff as it goes by. This has some benefits – it’s nice to see when someone is working in an area or that a library had a deficiency needing correction. However, on a large application, diffs become blinding. When all you see is streams of diffs, you can lose sight of how the application’s architecture is evolving.

I’ve ended up in situations where every diff looked perfectly reasonable but, when examining the application at a higher level, key system invariants had been broken.

In addition, diffs tend to emphasize small, less important details over more important integration and design risks. I’ve noticed that, when people only review diffs, they tend to point out things like whitespace style, how iteration over arrays is expressed, and the names of local variables. In some codebases these are important, but higher-level structure is often much more important over the life of the code.

Reviewing diffs can also result in wasted work. Perhaps someone is iterating towards a solution. The code reviewer may waste time reviewing code that its author is intending to rework anyway.

Reviewing Everything

Less often, I’ve seen another code review approach similar to reviewing diffs, but on entire bodies of work at a time. This approach can work, but it’s often mind-numbing. See, there are two types of code: core, foundational code and leaf code. Foundational code sits beneath and between everything else in the application. It’s important that it be correct, extensible, and maintainable. Leaf code is the specific functionality a feature needs. It is likely to be used in a single place and may never be touched again. Leaf code is not that interesting, so most of your code review energy should go towards the core pieces. Reviewing all the code in a project or user story mixes the leaf code in with the foundational code and makes it harder to see exactly what’s going on.

I think there’s a better way to run code reviews. It’s not as boring, tends to catch important changes to core systems, and is fairly efficient in terms of time spent.

Follow the Data

My preferred technique for reviewing code is to trace data as it flows through the system. This should be done after a meaningful, but not TOO large, body of work. You want about as much code as you can review in an hour: perhaps more than a user story, but less than an entire feature. Start with a single piece of data, say, some text entered on a website form. Then, trace that data all the way through the system to the output. This includes any network protocols, transformation functions, text encoding, decoding, storage in databases, caching, and escaping.

Following data through the code makes its high-level structure apparent. After all, code only exists to transform data. You may even notice scenarios where two transformations can be folded into one. Or perhaps eliminated entirely — sometimes abstraction adds no value at all.

This style of code review frequently covers code that wasn’t written by the person or team that initiated the code review. But that’s okay! It helps people get a bigger picture, and if the goal is to maintain overall system health, new code and existing code shouldn’t be treated differently.

It’s also perfectly fine for the code review not to cover every new function. You’ll likely hit most of them while tracing the data (otherwise the functions would be dead code) but it’s better to emphasize the main flows. Once the code’s high-level structure is apparent, it’s usually clear which functions are more important than others.

After experimenting with various code review techniques, I’ve found this approach to be the most effective and reliable over time. Make sure code reviews are somewhat frequent, however. After completion of every “project” or “story” or “module” or whatever, sit down for an hour with the code’s authors and appropriate tech leads and review the code. If the code review takes longer than an hour, people become too fatigued to add value.

Handling Code Review Follow-Up Tasks

At IMVU in particular, as we’re reviewing the code, someone will rapidly take notes into a shared document. The purpose of the review meeting is to review the code, not discuss the appropriate follow-up actions. It’s important not to interrupt the flow of the review meeting with a drawn-out discussion about what to do about one particular issue.

After the meeting, the team leads should categorize follow-up tasks into one of three categories:

  1. Do it right now. Usually tiny tweaks, for example: rename a function, call a different API, delete some commented-out code, move a function to a different file.
  2. Do it at the top of the next iteration. This is for small or medium-sized tasks that are worth doing. Examples: fix a bug, rework an API a bit, change an important but not-yet-ubiquitous file format.
  3. Would be nice someday. Delete these tasks. Don’t put them in a backlog. Maybe mention them to a research or infrastructure team. Example: “It would be great if our job scheduling system could specify dependencies declaratively.”

Nothing should float around on an amorphous backlog. If a task is important, it’ll come up again. Plus, it’s very tempting to say “We’ll get to it” but you never will, and even if you find the time, nobody will have context. So either get it done right away or be honest with yourself and consciously drop it.

Now go and review some code! :)

The Completionist’s Guide to Sims 3 iPhone

I play simulation games in two phases: first, I tackle the often-repetitive mechanics, unlocking every option and building up money; then I creatively decorate my house or farm or whatever.

Sims 3 for iPhone was no different. Having accomplished nearly everything in the game, I will share a strategy for completing all 73 goals and acquiring the best job: $600 per day at the Pawn Shop.

The Objective

To open the pawn shop, you must complete the following 73 goals:

Goals for all Sims (55)

  • Try fishing
  • Try cooking
  • Buy fishing kit
  • Buy watering can
  • Buy repair kit
  • Buy a stove
  • Buy a bath
  • Gain a skill point at cooking
  • Gain a skill point at fishing
  • Gain a skill point at repairing
  • Meet a new Sim
  • Befriend a Sim
  • Begin a romantic relationship
  • Make an enemy
  • Make a Sim laugh
  • Annoy a Sim
  • Insult a Sim
  • Creep-out another Sim
  • Slap a Sim
  • Get a job
  • Buy something
  • Catch a fish
  • Catch a trout
  • Catch a salmon
  • Catch a catfish
  • Repair something
  • Discover a new recipe
  • Cook something
  • Cook grilled cheese
  • Cook steak & veggies
  • Cook minestrone
  • Grow something
  • Grow carrots
  • Grow corn
  • Grow tomato
  • Watch TV
  • Kick over a Trash Can
  • Sleep in another Sim’s bed
  • Use another Sim’s shower
  • Use another Sim’s toilet
  • Get a better couch
  • Get a better TV
  • Accumulate $1000
  • Catch 15 fish
  • Harvest 30 crops
  • Stay entertained for three days
  • Stay fed for three days
  • Stay rested for three days
  • Stay clean for three days
  • Meet all the Sims in town
  • Make a Sim jealous of you
  • Sleep in three other Sims’ beds
  • WooHoo with someone
  • Get a promotion
  • Reach the top of the career ladder

Maniac Personal Goals (4)

  • Use everyone’s toilet at least once
  • Use everyone’s shower at least once
  • Creep out five people
  • Watch three people sleeping

Sleaze Personal Goals (2)

  • Be romantically involved with three Sims
  • WooHoo eight times in one day

Power Seeker Personal Goals (3)

  • Accumulate $5000
  • Own the best house
  • Own the best TV, stereo and stove

Nice Guy Personal Goals (2)

  • Get married
  • Be liked by all the Sims in town

Jerk Personal Goals (4)

  • Be disliked by all the Sims in town
  • Slap four people
  • Insult five people
  • Kick over all the trash cans in town

Jack of All Trades Personal Goals (3)

  • Achieve the highest fishing skill level
  • Achieve the highest repairing skill level
  • Achieve the highest cooking skill level

There are six Sim classes:

  • Jack of All Trades
  • Nice Guy
  • Jerk
  • Sleaze
  • Power Seeker
  • Maniac

As previously mentioned, you will need to complete every Goal to unlock the Pawn Shop. Of the 73 total Goals, 55 randomly appear as Wishes, no matter which Sim class you choose, and they’re generally easy to complete.

On the other hand, each Sim’s Personal Goals are relatively difficult. You’ll need to create at least six Sims, one of each type. For the optimal path through the game, you should complete their personal goals from hardest to easiest. I will give an efficient play sequence below.

Play Order

Jack of All Trades

Create six Sims, and designate one as your “main” Sim. Your main Sim is the one with which you most identify.

Your main Sim is a Jack of All Trades (you’ll later need its Repair skill at the Pawn Shop). Play him or her first, building up skill levels as rapidly as possible. Learn to fish, because fishing is the best way to make money. As you accumulate money, buy as many of the cheapest stereos as you can afford, filling your house. Each time you enter the house to eat or rest, turn them all on. Eventually, they will start breaking down, allowing you to practice repairing.

Practicing cooking is easy – simply buy bread, cheese, and the Grilled Cheese recipe and make it over and over again.

Once fishing, repairing, and cooking are level 5, save, quit and move to the next Sim.

Power Seeker

Your second Sim is your Power Seeker. Fish until you have $5000 and tool around town until you have the option to upgrade your house twice and buy the most expensive TV, stereo, and stove. Furniture and home upgrades simply take time. This is a good opportunity to accumulate general Goals.

At this point, you should have most of the 55 general Goals.

The Other Sim Types

Sleaze, Nice Guy, Jerk, and Maniac have fairly easy Personal Goals. The order in which you complete them does not matter.

At this point, you should have almost all of the Personal Goals. If not, keep playing — they’ll eventually show up.

Goal Tips

If you need the “Buy XXX” goal but you already have XXX, then try selling it. Eventually the Goal should appear as a Wish. Lock it in and repurchase the item.

If you need the “Gain a skill point at XXX” goal and you’re maxed out, switch to a Sim that is not maxed out.

If you’re having trouble Meeting all Sims, double-check all of the houses at different times of the day. I’ve noticed Bernie can be hard to find. Most Sims are sleeping in the middle of the night – try breaking into their houses, waking them up, and meeting them.

If you need to befriend a Sim but you’re already friends with all of the Sims, start a new Sim, or insult a Sim until he or she becomes an enemy, then befriend him or her.

The best way to annoy, insult, and creep-out other Sims is to find a house with two Sims (e.g. Marcell and Theresa), barge in, and start flirting with one of them. Both will be creeped out and annoyed, and the flirting will insult the other and make him or her jealous. If that doesn’t work, start using their toilet, shower, and fridge.

Tips and Tricks

Fishing is the best way to make money. Each tuna sells at QuickMart for $100, and salmon for $65.

Besides the Pawn Shop, working at the Town Hall is the best. It’s easy to get promotions (both Ruth and the Town Hall are nearby) and you just need to be friends with everyone in town. As Vice President, you can make $300 per day.

Creating romantic relationships is easy. It sounds rude, but keep pestering the Sim with Flirt, Tender Embrace, Hot Smooch, and WooHoo. You should be able to marry a Sim within two conversations.

Too many WooHoos and your Sim will die! Don’t overdo it in the bedroom!

If you get married, give your spouse a phone call and invite him or her over. Once your spouse arrives, you can invite him or her to move in.

Since the Town Hall is right by your house, it’s the best job until the Pawn Shop is open.

The Sims

  • Nina
  • Ruth
  • Johnny
  • Kia
  • Jake
  • Maggie
  • Jack
  • Jill
  • Walter
  • Luke
  • Bernie
  • Marcell
  • Theresa

The Jobs

Town Hall (boss: Ruth)

  • Campaign Intern ($100/day) M-F 8:30-18:00
  • City Council Member ($150/day) M-F 8:30-18:00
  • Local Representative ($200/day) M-F 9:00-18:00
  • Mayor ($250/day) M-F 10:00-18:00
  • Vice President ($300/day) M-F 10:00-17:00

Corsican Bistro (boss: Marcell)

  • Kitchen Scullion ($100/day) M-F 10:30-18:30
  • Ingredient Taster ($150/day) M-F 10:30-18:30
  • Line Chef ($200/day) Tue-Sat 11:00-17:00
  • Sous-Chef ($250/day) Tue-Sat 11:00-17:00
  • Chef de Cuisine ($300/day) Tue-Fri 11:00-17:00

Laboratory (boss: Kia)

  • Test Subject ($100/day) M-F 9:00-16:00
  • Lab Tech ($150/day) M-F 09:00-16:00
  • Fertilizer Analyst ($200/day) M-F 09:00-16:00
  • Carnivorous Plant Tender ($250/day) M-F 09:00-16:00
  • Genetic Resequencer ($300/day) M-F 10:00-16:00

Quickmart (boss: Bernie)

  • Filing Clerk ($100/day) M-F 08:30-18:30
  • It’s not worth getting promotions at Quickmart. Bernie is a dick and you have to work there FOREVER before he’ll promote you.

Pawn Shop (boss: Nina)

  • Con Artist ($300/day) M-F 16:30-23:55
  • Safecracker ($400/day) M-F 16:30-23:55
  • Cat Burglar ($500/day) M-F 17:00-23:55
  • Master Thief ($600/day) M-F 17:00-23:00

In the End

With the information given above, you should have no problem achieving 73 goals and as much money as you need to decorate your dream home.

If you get stuck at any point, feel free to leave a comment on this page and I’ll update the document.

Avoid Manual Initialization

Too many software libraries require that you manually initialize them before use. As illustration:

library.initialize()                    # busywork the user must remember to do first
window = library.createLibraryObject()  # fails if initialize() was forgotten

Why require your users to remember initialization busywork? createLibraryObject should initialize the library, refcounting the initialization if necessary.

The primary benefit is cognitive. If a library requires initialization, its objects and functions have an additional, externally-visible behavior: “If not initialized, return error.”

If the library’s internal state is managed implicitly, it does not leak out of the API. createLibraryObject should lazily initialize the library on object construction. Optionally, on object destruction, it can deinitialize the library.
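
For concreteness, here is a minimal Haskell sketch of this shape – a refcount behind a lock, with invented names that don’t correspond to any real library:

import Control.Concurrent.MVar (MVar, newMVar, modifyMVar_)
import Control.Monad (when)
import System.IO.Unsafe (unsafePerformIO)

-- Global count of live library objects; zero means "library not initialized".
{-# NOINLINE liveObjects #-}
liveObjects :: MVar Int
liveObjects = unsafePerformIO (newMVar 0)

data LibraryObject = LibraryObject

-- Constructing the first object initializes the library as a side effect;
-- the caller never writes an explicit initialize() call.
createLibraryObject :: IO LibraryObject
createLibraryObject = do
    modifyMVar_ liveObjects $ \n -> do
        when (n == 0) initializeLibrary
        return (n + 1)
    return LibraryObject

-- Destroying the last object optionally tears the library back down.
destroyLibraryObject :: LibraryObject -> IO ()
destroyLibraryObject _ =
    modifyMVar_ liveObjects $ \n -> do
        when (n == 1) deinitializeLibrary
        return (n - 1)

-- Stand-ins for the library's real setup and teardown.
initializeLibrary, deinitializeLibrary :: IO ()
initializeLibrary   = putStrLn "library initialized"
deinitializeLibrary = putStrLn "library deinitialized"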

A secondary benefit of lazy initialization is performance: interactive desktop applications rarely need immediate access to all of their subsystems. By deferring as much initialization as possible, application startup time is reduced.
