IT KEEPS HAPPENING

art by Andrew Hussie

(CW: child sexual abuse)

This article is inspired by a thing I hate which Keeps Fucking Happening. To be less ambiguous for a few specific people who wonder why I’m typing, it’s about HypnotistSappho and SPLURT. Unlike most posts on my blog, it’s targeted to all age groups.


Let’s say you’re sixteen. You meet a ten-year-old who says “I’m mature for my age.”

As a sixteen-year-old, you know a few things about this ten-year-old based on that statement. They’re probably in elementary school. They’re probably a little undersocialized: good at grammar, not good at fitting into their peer group. It’s really unlikely they will come off as “mature” by your sixteen-year-old standards.

When you ask an adult “OK, but what makes me immature?” they give a stock answer. Grinningly they tell you that the things that matter to teenagers don’t matter to adults.

That’s not true at all — most adults remember high school as a war zone because it was, and they’re sensitive to the exact things they were picked on the most from elementary school onward. Frankly, the things that matter the most to a teenager are real and still painful at any age.

Even if it were true that adults cared about different things, it wouldn’t make immaturity an objective thing. There’s not a clear reason why an adult’s perspective on what’s important is more valid than a teenager’s perspective. I’m not saying this patronizingly — I really think the idea that adults’ opinions matter more is hard to defend.

Adults’ views are born out of experience, but I don’t think it’s necessarily the kind of experience that leads to realistic views. It’s a kind of experience that creates silent shame. A lot of adults spend a lot of time trying not to think about the things that hurt the most as teenagers. They also try not to think about old embarrassments. When adults have their yearly identity crisis, they try to present it to themselves as completely different from the yearly identity crisis they had as teenagers.

All teenagers are, at least, potential punks. You’re a teenager. You could, say, announce that you’re a communist and really mean it. You depend on other people in a few ways, but the things you depend on your parents for — food, housing — are your natural rights. In other words, you owe them nothing. An adult can’t ignore their mutual dependence on their landlord and their employer.

Part of this is because society tells adults that nobody owes them anything. From that vantage point, the teenager’s position looks like a charmed one. Adults are even a little resentful of it.

A lot of an adult’s mutual dependence on society comes from something else. Adults are coached, as they grow up, to want much more than they wanted as teenagers — meaning their interdependence with society cuts much deeper. People who can’t bite the hand that feeds them call their changed attitude “maturation.”

True punks exist in every age group, but far fewer adults manage to be that way. You’ll run into adults who incur very little of that dependence — they work twenty hours a week, have four roommates, sleep in a tiny space, and likely live in a state where the cost of living is still low.


So, I’m saying all this stuff to get at something.

There’s a lot of adults who want to fuck kids. There’s a lot of teenagers who want to fuck, period.

“I’ll have sex with you if you’re not going to make it weird or bad,” say some teenagers — which is economically rational, if what you want to do is have sex.

“Oh, of course not.”

So the teenagers make some stipulations. “You have to really love me,” they might say.

“Oh, yeah, obviously, of course I do,” the adult says. “And I really understand you, too.”

They have sex — and then the bargaining happens.

“You can’t tell anyone we did that,” the adult says.

The teenagers (who understand age of consent laws) say “Well, yeah, obviously –”

“I mean it,” the adult says. “I could get investigated by the FBI. I could go to jail.”

This is all true, but it sounds more like a threat.


I don’t want to essentialize anything.

In particular, I don’t want to act like “weird power dynamic” is the inevitable result of a relationship with a huge age gap.

However, I think a lot of adults are going to say “well of course I can relate to you,” and they’re going to say it to get into your pants.

Adults who want to have sex with a kid are cognizant of the many ways it might hurt you, even if you’re not. They’re aware that you’re likely to feel used, like a toy, and they’re aware that they may later have to lie and discredit you if you tell anyone. It’s very common for adults who have sex with kids to threaten them by saying “you’ll be in trouble if anyone finds out.”

Sometimes they say “we’ll wait.” (In other words, “I’ve said yes for you and made plans that entail you don’t leave me.”) Other times they’ll say “I won’t force you” or “I won’t rape you.” (In other words, “I could.”)

Someone you don’t even fucking know is not your mom or your dad, even if they draw the comparison themselves. Someone who you have only consumed on YouTube or Twitter is a face in the void. If they want to have sex with you, it’s not because they love you or understand you.

From what I can tell, a lot of people who are in a situation that isn’t normal or OK will say “well, of course this is normal. Of course this is OK.” This even happens when what you’re hearing starts to become threatening or controlling.

I’m also sad because, if you’re a twelve-year-old, there’s not really a responsible way for you to find a partner, and if you’re really pent-up — which is apparently possible for kids, probably half because porn is everywhere and half because puberty’s a bitch — you’re inevitably going to end up coming onto an adult.

Obviously I’d prefer you didn’t do it at all, because at this point I see more harm than anything else — but if you must pursue a relationship with an adult online, ask yourself some questions. Is it safe to say “no”? Did the relationship move faster than you wanted it to? Did you share your dark secrets, and do you know theirs? And — this is the big one — will something bad happen if you leave?

If you’re soliciting an adult, tell someone else your age what is going on. In any highly dangerous activity, whether that’s scuba diving or extreme sports, you have to use the buddy system. Soliciting MAPs is a highly dangerous activity.

If you’re not sold, read some stories!

Certain things are not the Vampire’s Castle

art by kuroi

This post won’t make very much sense unless you read Exiting the Vampire Castle by Mark Fisher.

Preemptively: I like Mark Fisher’s post. However, I think its legacy has been kinda weird.

Starting with the places where I think I agree with Fisher: There are basically two tendencies of identity politics.

There’s a basically legitimate tendency that says “race and gender are ‘real’ in that society sees them and they are necessary for completely describing what society is doing.”

To be specific about what this means — well, there’s two things! For one thing, race might be a pretty good proxy for the actual factors needed to describe what society is up to. There’s a racial wealth gap and a gender pay gap, and those don’t appear to go away completely if you control for other factors.

That doesn’t mean the gaps are essentially caused by race or gender, but it means that with current social science it is not possible to eliminate race and gender from the explanation.

There’s a second thing that it means — because the agents we’re talking about are humans, it’s likely the actual concepts of race and gender have something to do with society’s decisions. Someone might have a problem with black people — not “people who are likely to have lower wealth for historical reasons.”

In other words, there are cases where you can’t reduce “race” out of an explanation — because the events you’re describing involved real people who were using “race” as a concept.

For the purpose of this article, I’m taking the stance that “race” and “gender” are concepts that can be relevant in an explanation of social events and calling it “identity politics, type one.”

Note that having this view doesn’t automatically opt you into conspiracy theories like “Donald Trump is a member of the KKK.” It also doesn’t automatically imply that leftist groups should be segregated by race or gender, or that they should contain a pecking order based on level of privilege. The entire question of “representation” is irrelevant to “identity politics, type one.”

Your main line of critique can still be class-based under identity politics, type one — this is because the question of whether “race” can be part of a valid explanation is completely different from whether it is part of a valid explanation.

There’s a second tendency, which Fisher is concerned about, which I’m calling “identity politics, type two.” It doesn’t correspond to a specific viewpoint — it’s kind of a nest of related viewpoints. The gist is that all of those things that aren’t implicitly part of identity politics, type one? Those are more or less part of identity politics, type two.

Based on that, its mission statement might be “You can’t personally object to using concepts like race and gender, even if you try, because you’re doomed to reproduce systems that oppress people.”

I’ve rarely seen anyone say this out loud, which is a pity because I think this thesis has some historical evidence behind it — but even if it does, it needs some defending. Aside from that, if it’s your main party line in practice, you’re saying “this tendency isn’t perfect, so don’t even try.” One of Fisher’s strong contentions is that when people start talking like that and thinking like that, they end up opposing productive leftist activity.

There’s something to that, I think. If you only want to support minority-led leftist groups then you can’t work with most leftists, because white people are advantaged in a lot of idpol-type-one ways that pertain to starting groups, organizing them, getting money. Is that unjust? Clearly. But unfortunately, while “attention” is a finite resource and some deserving causes get less attention than they deserve, any allocation of attention (and money) is better than no attention and no money.

This means that idpol-type-two is not a practical stance if you want to make friends politically.

The fascinating thing to me is that Fisher’s critique — which mostly targets idpol-type-two behavior — has been heavily coopted by centrists on social media, especially Democrats. There also appears to be a tacit agreement between Democrats and idpol-advocates-type-two.

For instance, in 2020, NBC invented a call for greater female/nonwhite representation in the White House, marketed Kamala and Warren as an answer for that, then had them both concede and pitch their support to an old white ghoul. Democratic socialists were receptive to this (some supported Warren) but abandoned her when she got rid of class-based policy and, seeing no realistic course of action to support democratic socialism in the presidency, grudgingly coalesced behind Biden.

At the same time, Biden abandoned all his class-oriented promises and made idpol-flavored appeals instead. (“If you have a problem figuring out whether you’re for me or Trump, then you ain’t black.”) He did this despite his record: mass incarceration through the Violent Crime Control and Law Enforcement Act, and increased minimum sentences for crack cocaine relative to powder cocaine through the Anti-Drug Abuse Act of 1986. In other words, he sponsored laws more likely to affect lower-class people and more likely to be visibly enforced against black people.

On the other hand — idpol-type-one believers who point out the class-based decision-making behind the scenes are accused (by centrists) of idpol-type-two behavior because they invoke the concepts of race and gender, but also accused (by idpol-advocates-type-two) of centrism because they invoke the concept of class. For instance, the Warren campaign accused Sanders of gender chauvinism while NBC tarred Sanders as a racist. Meanwhile center Democrats have deftly avoided the “critical race theory” challenge by claiming not to believe in it, but gesturing towards the class-interested faction of the party and acting as if those guys do.

It seems to me like a lot of people read Fisher’s article — or the mealymouthed, centrist-oriented paraphrases later made in venues like The Atlantic and The New York Times — and added it to their list of ways to silence leftists whose critique is uncomfortable to them. Of course the Democrats are going to embrace idpol-type-two — it’s easy and satisfying, it makes you lose to Republicans, and you can do it completely without engaging with class. All you need to do is find a rich white oligarch and have that guy find a black dude who’s willing to say “yeah I agree with that guy” for money or power.

And don’t get me started on queer-washing. Don’t get me wrong; I love a good pride flag. But I hate capitalism more.

I run into a lot of leftists who want to ditch identity politics altogether because of the terrible state of idpol-type-two critique. Frankly, I don’t blame them for it, as long as they don’t use it as a license to be racist.

Assessing Haskell

exasperated bat
art by KrAzOn89

Most college-educated programmers know a few different languages before they enter the workforce. At my school we learned Java and C, and we were expected to pick up some Python on the side because it made things easier. Meanwhile, I see a lot of self-taught people learning Lua, thanks to Roblox.

I think people who only know a few programming languages tend to look at a new language like the key to a lock. They start to learn it because they need to know it in order to work on whatever project they have to do.

On the other hand, people who use a lot of programming languages usually start to look at programming languages as software products — which is what they are, especially from the point of view of their developers. They’re able to compare them on their merits, rather than defaulting to whichever language solves the particular problem they’re stuck on now, and they can evaluate those merits without relying on the claims of an outside authority.


Those of you who have hung out in novice communities will notice that a lot of beginners are preoccupied with the idea of “graduating” to C++. It’s rare for them to know C++ especially well — their reasons for liking it seem to be based on claims they’re not qualified to evaluate (often about its performance compared to other languages, or its level of use in industry) or implicit emotional appeals made by others, usually also novices.

From what I can tell, the C++ community doesn’t see this as a big problem. (At least, I’ve never seen a C++ programmer complain about this tendency, and I’ve seen a lot of C++ programmers spread fear and uncertainty about close competitors, especially Java.)

I think this kind of community exists for several other technologies — Java programmers do this and so do PHP programmers.


A lot of people pitch Haskell to beginners by saying “it eliminates whole classes of errors at compile-time.”

Well, what’s that mean? Some people bristle because it sounds like an obvious lie — or at least a case of salesmanship. Other people believe it. It turns out the statement is partially true.

Haskell can panic just the same as Java does, but it doesn’t panic on a NullPointerException — only on nonexhaustive pattern matches, which in practice are a generalization of NullPointerExceptions.

Based on my pretty complete Haskell knowledge, I think it’s safe to say that Haskell’s feature is less likely to generate errors in actual use. I also think the claim is not literally true — Haskell has pattern match errors at runtime — and that even if it were literally true, beginners are completely unqualified to evaluate it.
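
To make the runtime pattern match error concrete, here’s a sketch (firstOrBust is a hypothetical name of mine, not a standard function):

```haskell
-- A head-like function with a nonexhaustive pattern match. GHC compiles
-- this (with a warning); the missing [] case only fails at runtime, with
-- an error like "Non-exhaustive patterns in function firstOrBust".
firstOrBust :: [a] -> a
firstOrBust (x : _) = x

main :: IO ()
main = print (firstOrBust [1, 2, 3 :: Int])  -- 1; firstOrBust [] would crash
```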

To clarify, I’m not saying that beginners are stupid. My opinion is that when your main impetus to learn a language is that it allows you to use a technology you want — the “key” model that applies to a lot of novices — then you’re likely to know one language for each category of development you do, and you’re not likely to understand it that deeply because you don’t have a lot of things to compare it to.

So, I think it’s implausible that the claim is being read literally. I also think the claim that Haskell eliminates whole classes of error contains an implicit emotional appeal. You’re meant to hear “whole classes of error” and think “that sounds cool!” even if you don’t know what it means.

When you talk to a novice who’s heard of Haskell, they’re likely to say that it’s more difficult than other languages, it’s weirder, it’s more complicated. They might be saying it negatively or they might be saying it positively.

Generally, these are all conclusions that you’d normally draw by looking at some underlying facts about the language. Some are statements of opinion — meaning that you can’t legitimately arrive at them without underlying facts to make a decision about. Others are conclusions it would be better to arrive at by assessing multiple Haskell features at once, meaning that, again, you’d need those underlying facts.

There’s usually little evidence that those novices understand those facts.


There’s a common feature between all the technologies where I’ve accused their communities of making emotional appeals to novices.

If I’m charitable, then the common feature is that all of them — Java, C++, PHP — encourage a pretty narrow usage model. Most make some strong assumptions out of the gate and tell you that if you stick to them, it’s going to make your software a lot better in some dimension.

That’s certainly true of Haskell, too!

Those technologies I listed are usually half-right about their usage model having advantages — PHP’s statelessness is a good model for software that needs to scale up; C++’s memory management discipline can be used to write especially fast software; Java-style OOP is a pretty good model for a lot of tasks.

They’re half-wrong, because in each case there are significant disadvantages, often located outside the things the pitch is about, which the language doesn’t adequately mitigate — and that’s where the headache is.

If I were less charitable about those languages, I’d probably say something else, which is that all three of them are legacy. Most of them are not a particularly successful take on the model they claim to be an example of — most of them have been superseded by later technologies that took the good parts and accentuated them.

If I were even less charitable, I’d say this: I think all three technologies are pretty bad. Most people I see praising Java, C++, or PHP only do so with reservation — or they do so because their chosen technology fits like a key into a lock for a technology they do want to work with.

I could be wrong about this, but I think I’m right. It seems to me like a little bit of manipulation might be one good way to keep a legacy technology alive — and if you’ve seen someone evangelizing C++ in a space for new game devs, then you know what I’m talking about.

Unfortunately, I think that it’s possible the Haskell community is engaging in similar behavior. I frankly think not enough people are asking the question of why, for instance, monads as a pattern for controlling access to resources are a good idea, and I think a lot of beginners are expected to assume there’s wisdom behind the decision when that’s not evident.

I’m going to make an effort to assess Haskell fairly but without presupposing that any of its decisions are correct in isolation. Some of the things I criticize might be valid reactions to existing problems and misfeatures — but I’m assessing them as they present themselves now, with indifference to their history.


I considered trying to assess Haskell without mentioning my own opinions at all. I don’t think that’s fair or possible, nor do I think that would lead people to an accurate assessment of Haskell. I basically think that novices who aren’t used to working in the product framing can still work inside it if they’re given pretty comprehensive and relevant information.

I think it’s hard to be objective about this issue in the sense that what you say is something no one can object to. I don’t think you can show, to everyone’s satisfaction, that a product is good or bad.

I do think you can demonstrate the sort of facts that lead people to conclude a language is good or bad, then explain your conclusions based on those facts, then let other people draw their conclusions now that you’ve outlined the things people might object to.

I also think most people who evangelize Haskell are already committed to the idea that programming language fans have a roughly shared set of standards. When you sell your language mostly based on comparisons, you’re making appeals to people who, you assume, have some standards in common with you.

You’re doing this even if the comparisons aren’t meant to be literally understood by your audience — like the “whole classes of errors” claim — because if there wasn’t at least an imaginary audience that your comparisons made sense for, you wouldn’t be making them.

Based on what I think is the set of assumptions I share with beginners and Haskell evangelists, I’m going to try to assess how Haskell measures up in a variety of domains that most people see as relevant when they assess a programming language. I’ve picked these based on what people seem to assess when they try a new language, and also based on what Haskell sees as a selling point.

My hope is that if you read this article, you’ll come to the conclusion that several of Haskell’s decisions seem to have major drawbacks without obvious advantages.

Importantly, there are still a lot of things about Haskell that I like and I think you’d be entitled to promote it even if you agreed with all the claims I’m about to make. But I’m hoping you’ll come out of it with a strong inclination to treat encumbrances as encumbrances, if not outright flaws!

One other note: when I refer to a Haskell feature, I’m going to try to explain what it is before I criticize it. Beginners might still benefit from reading a few chapters of Learn You A Haskell so they can follow my examples. (It’s not the most popular Haskell guide any more, but I’m recommending it because it’s short.)


Safety

Haskell users tend to praise the language for allowing them to write reliable code, and for allowing them to precisely specify what input they expect and what output they produce.

Some Haskell programmers call the language “safe” — by this they mean that it is hard to write code that has unexpected consequences.

Garbage collection

Haskell is garbage-collected — you don’t need to manually free memory, or track which object owns a value.

(I personally think garbage collection is a basic usability feature that most languages should seek to include unless they’re trying to target a hard-realtime use case.)

Error handling

Haskell has mandatory error handling in some cases.

It accomplishes this using its type system. Suppose a is the type of a value. In that case, Maybe a is the type of a value that could instead be Nothing (Haskell’s version of null), and Either String a is the type of a value that could instead be an error message (written Left "error" and Right result respectively).

Haskell enforces that you handle errors by making you convert Maybe a values back to plain as before doing a-specific operations on them. It also supports chaining: applying an operation that produces a Maybe b to a Maybe a naively yields a Maybe (Maybe b), which can be flattened back down to a Maybe b.
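
A sketch of this machinery in use (parseAge, checkAge, and describe are hypothetical names of mine):

```haskell
import Text.Read (readMaybe)

-- Parsing can fail, so the result type is Maybe Int, not Int.
parseAge :: String -> Maybe Int
parseAge = readMaybe

-- Either carries an error message on failure.
checkAge :: Int -> Either String Int
checkAge n
  | n < 0     = Left "age cannot be negative"
  | otherwise = Right n

-- The compiler won't let us use the Int without handling Nothing and Left.
describe :: String -> String
describe s = case parseAge s of
  Nothing -> "not a number"
  Just n  -> case checkAge n of
    Left err -> err
    Right ok -> "age is " ++ show ok

main :: IO ()
main = mapM_ (putStrLn . describe) ["42", "-1", "bat"]
```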

Testing

Haskell code is usually easy to unit test because pure Haskell functions have no access to their environment except through their arguments.

However, Haskell code that hasn’t been written to support testing can (for instance) refuse to be run without a network connection, meaning that once access to its environment is granted, it needs its whole environment to be configured in a similar way to the conditions it needs under production.

This is similar to the problem that other languages solve with mock subclasses — Haskell usually solves it with type parameters, where the type parameter for a service client is replaced with the type of a mock for that service client. This pattern can mean adding an extra type parameter for every mockable component, which will often bubble up to calling code.
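
A sketch of that type-parameter pattern (UserStore, MockStore, and greet are illustrative names, not a real library):

```haskell
-- A service client abstracted behind a typeclass, so tests can swap in
-- a mock without a network connection.
class UserStore c where
  lookupUser :: c -> Int -> Maybe String

-- In production this would wrap a real client; in tests, a pure map.
newtype MockStore = MockStore [(Int, String)]

instance UserStore MockStore where
  lookupUser (MockStore kvs) uid = lookup uid kvs

-- The business logic is written against the type parameter c, which is
-- the extra parameter that tends to bubble up into calling code.
greet :: UserStore c => c -> Int -> String
greet store uid = case lookupUser store uid of
  Nothing   -> "unknown user"
  Just name -> "hello, " ++ name

main :: IO ()
main = putStrLn (greet (MockStore [(1, "Nyeogmi")]) 1)
```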

Haskell programmers are big fans of property-based testing through the library QuickCheck, which I think is just a great idea.

Danger zones

Haskell’s compile-time safety features are (in my opinion) fairly well-designed, but I think it has some eye-popping behavior at runtime.

Note that any code in Haskell can crash the program by throwing an exception. This kind of fault is not checked for at compile time. You can also loop forever, which will not crash the program but will certainly break it.

(Technically, throwing an exception just unwinds the stack, but if you don’t handle it, the thread or the program will crash.)

Note that bracket (Haskell’s equivalent of try/finally) does not necessarily run your finally block for non-main threads on program exit. Haskell unceremoniously terminates all threads except the main thread without raising an exception.

Haskell contains several functions that do IO operations at an unspecified time — for instance, this program is not guaranteed to read the file before writing it:

main = do
  writeFile "test.txt" "Hello, bats!"
  bats <- readFile "test.txt"
  writeFile "test.txt" "Hello, Nyeogmi!"
  putStrLn bats -- might say "Hello, Nyeogmi!"

(There is an alternative library of IO functions that do not exhibit this behavior, but it’s bad that these are the default ones.)
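
For instance, the file-reading functions in Data.Text.IO (from the text package) read the whole file strictly, so the equivalent program behaves predictably — a sketch:

```haskell
{-# LANGUAGE OverloadedStrings #-}
import qualified Data.Text.IO as TIO  -- strict file IO, from the text package

main :: IO ()
main = do
  TIO.writeFile "test.txt" "Hello, bats!"
  bats <- TIO.readFile "test.txt"  -- the whole file is read here, now
  TIO.writeFile "test.txt" "Hello, Nyeogmi!"
  TIO.putStrLn bats                -- always "Hello, bats!"
```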


Terseness

Haskell has a reputation for being terse.

It has a few features (mostly syntax-level) that enable this:

  • You can define functions in the middle of a line: saying f x = 1 + x is the same as saying f = \x -> 1 + x.
  • Functions are values. For instance, you can say f x = 1 + x, then say g = f, then use g as if it were f. You can put functions into a list: [\x -> x + 1, \x -> x + 2] and then manipulate the list.
  • It has type inference: if you write x = "abc", you typically don’t have to specify x :: Text.
  • Shorthand syntax exists for certain ways of turning functions into values. Instead of writing \x -> x * 2 (“x times 2”), you can write (*2). These are called “operator sections.”
  • It has builtin functions show and read that turn values into strings and back, which allows you to save temporary results.
  • It has implicit “currying” — if you write f x y = x + y, then f 1 is equivalent to \y -> 1 + y. This has similar advantages to operator sections.
  • It has an operator for composing functions. For instance, putStrLn . show is equivalent to \x -> putStrLn (show x)
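
A small demonstration of the features above (the names are mine):

```haskell
add :: Int -> Int -> Int
add x y = x + y              -- defined without writing a lambda

addOne :: Int -> Int
addOne = add 1               -- currying: add partially applied

double :: Int -> Int
double = (* 2)               -- an operator section

shout :: Int -> String
shout = show . double        -- composition: double first, then show

main :: IO ()
main = do
  print (addOne 41)                 -- 42
  print (map ($ 3) [(+ 1), (+ 2)])  -- functions in a list: [4,5]
  putStrLn (shout 21)               -- 42
```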

It makes some choices that are cumbersome. For one thing, updating a field of a record is verbose: updateName newName x = x { name = newName }

I also think its notation for code that does input and output is fairly verbose, but that’s a topic I plan to introduce later.

Naming and the standard library

A lot of the functions built into Haskell have short, non-descriptive names, such as ap, pure, nub, and return. This seems to have an advantage for productivity, but may have disadvantages for readability.

Earlier I said Haskell has an operator for composing functions. Actually, Haskell has multiple operators for composing functions, and they are not interchangeable — for instance, putStrLn . show is valid, but putStrLn . getLine is not (it must be written putStrLn =<< getLine) — likewise, return . return and return <=< return have very different meanings.
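
A sketch of the distinction (halve and quarter are illustrative names):

```haskell
import Control.Monad ((<=<))

-- halve succeeds only on even numbers.
halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

-- (.) composes plain functions; (<=<) composes Maybe-returning ones.
-- Writing halve . halve here would be a type error.
quarter :: Int -> Maybe Int
quarter = halve <=< halve

main :: IO ()
main = do
  print (quarter 12)  -- Just 3
  print (quarter 6)   -- Nothing: the second halve fails on 3
```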

Note on user-written code

What I’ve conveyed above is that the standard library of Haskell provides a lot of functions that operate on functions. What I’d like to add to that is that most user-written Haskell code actually defines a lot of functions that operate on other functions.

This style can be very concise, because it allows taking a pattern that appears in existing code, wherever it is, and replacing it with a function that implements this pattern.


Performance

Haskell tends to achieve C-adjacent performance in Benchmarks Game-style activities.

However, its optimizer achieves a lot of performance gains through application of rewrite rules. Across module boundaries, Haskell will sometimes struggle to rewrite temporary data structures out of existence. In general, whole-program optimization is a weak point.

Haskell’s garbage collector suffers from the same periodic pauses that are typical of most garbage-collected languages. These pauses may cause frame drops in realtime game development but are unlikely to be a big deal for other program categories.

On balance, it appears to perform much better than the average scripting language, but I would not expect it to perform at close to the same speed as Java or C++ without profiling.

Data structures

Haskell’s primary data structure is the linked list.

By default, Haskell strings are represented as linked lists of characters, but this causes performance problems, so most users switch to an array-based representation (such as Text) instead. Code that is compatible with one representation is not automatically compatible with the others.

Haskell has some built-in data structures to do state management with — most are not very efficient. WriterT, which exists for logging, can have terrible performance in some cases, and so can StateT.

Most Haskell data structures are “persistent,” meaning that modifying the data structure results in a new data structure with no changes to the old one. There are some fast implementations of ephemeral data structures (the kind you are probably used to) but they usually require IO to use — that is, the type signatures of their methods are unusually complicated.
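
For instance, with Data.Map (from the containers package, which ships with GHC), inserting into a map produces a new map and leaves the original untouched:

```haskell
import qualified Data.Map as Map

main :: IO ()
main = do
  let m1 = Map.fromList [(1, "one"), (2, "two")]
      m2 = Map.insert 3 "three" m1  -- a new map sharing structure with m1
  print (Map.size m1)  -- 2: the original is unchanged
  print (Map.size m2)  -- 3
```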

Laziness

Haskell is “lazy” — that means that it typically only evaluates your code when it needs your return value.

This means that Haskell code may appear to produce a result more quickly than the result is actually calculated, and that your client code can cause a function you call to produce results much more slowly than it’s supposed to.

For instance, if a function produces a binary tree and you explore one path to the bottom, you might see logarithmic time — if you explore the tree from left to right, you might see linear time.

When Haskell does not evaluate a value, it produces an unevaluated placeholder called a thunk (a pointer to the code that would generate the result).

Laziness is a good way to skip work, but if the value’s going to be created anyway, it can have a performance cost. Most Haskell programs, to be fast, will opt out of laziness in at least some cases.
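
A quick sketch of both sides of the tradeoff — an infinite list works fine because only the demanded elements are evaluated, and seq is one of the basic ways to force a thunk when you want to opt out:

```haskell
naturals :: [Int]
naturals = [0 ..]  -- infinite, but fine: nothing forces all of it

main :: IO ()
main = do
  print (take 5 naturals)  -- only five elements are ever evaluated
  let x = 1 + 2 :: Int     -- x is a thunk until something demands it
  x `seq` print x          -- seq forces x to be evaluated here
```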

Many builtin Haskell data types come in strict and lazy varieties — usually with the same names. These varieties are usually incompatible from the point of view of library code that has to operate on them, so most libraries silently pick the data structure that they think will guarantee the best performance to outside users.


Libraries

Like every other language in the world, Haskell comes with a standard library which covers input and output — it also comes with a large user-created package repository called Hackage, and two popular distributions of compatible packages called Stackage and Haskell Platform.

The reason that multiple package distributions are common for Haskell is that Haskell packages have a reputation for getting into complicated versioning situations.

(I personally don’t understand the reasons for this reputation and think the problem is overstated, but it’s such a consistent complaint I feel the need to acknowledge it. I’ve experienced this problem with dependencies that provide development tools, but nothing else.)

Idiom

The Haskell community likes math a lot.

Many types and libraries in Haskell are named after mathematical objects, even if the connection is somewhat tenuous. (For instance, Haskell’s Functor typeclass corresponds to something more specific than the mathematical notion of a functor — a covariant functor.)

Beginners are usually expected to learn to use the names of mathematical objects when explaining their code.

Haskell programmers tend to write very golfy code with a lot of symbolic names. Many people use point-free style, where functions’ arguments are not named or written out, and instead code is written as a composition of shorter functions.
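A small illustration of point-free style, with the same function written both ways:

```haskell
-- Pointful: the argument 's' is named explicitly.
countWordsPointful :: String -> Int
countWordsPointful s = length (words s)

-- Point-free: the argument disappears; the function is written as a
-- composition of 'length' and 'words'.
countWordsPointFree :: String -> Int
countWordsPointFree = length . words

main :: IO ()
main = print (countWordsPointFree "free your points") -- 3
```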

There are a variety of utility functions designed for use with Haskell’s IO and error-handling types which are technically allowed to operate on functions, with surprising results:

import Control.Monad (join)

f :: Int -> Int
f = join (+)
-- equivalent to:
-- f x = x + x

Mastering these tricks is a mark of pride in the community.

Module system

In Haskell, record fields are in the module namespace, which means two types in the same module usually can’t have the same field name. This means that field names in Haskell are usually two words:

data Person = Person { personName :: Text }
data Dog = Dog { dogName :: Text }

Your code is written in files called modules. If module A uses module B, module B usually cannot use module A.

Functions in Haskell cannot be associated with a type. (they live in the module namespace) In the standard library, some functions are prefixed or suffixed with an extra character (e.g. mapM and map) specifically to avoid clashes caused by needing to provide the same function for two types.
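For example, map applies a pure function to each element, while mapM_ (its monadic sibling) runs an action for each element instead:

```haskell
-- 'map' is pure; 'mapM_' performs one IO action per element.
doubled :: [Int]
doubled = map (* 2) [1, 2, 3]

main :: IO ()
main = do
  print doubled       -- [2,4,6]
  mapM_ print doubled -- prints 2, 4, 6 on separate lines
```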

It’s pretty common for users to have to come up with unusual names for functions belonging to their own types, since the obvious names tend to exist in library code.


Tooling

Building

Haskell has two build systems: Cabal and Stack.

Stack depends on Cabal.

Stack has an enormous number of GitHub issues, most of which do not appear to be responded to. Over four attempts in the past three years, I have never been able to get Stack to work on Windows.

From what I can tell, Stack is the more popular build system, and I don’t know why.

Cabal produces boring, statically-linked binaries which will run anywhere. (that is, anywhere compatible with boring, statically-linked binaries for the platform in question)

Debugging

Tracing is awkward in Haskell. Its built-in logging tool, WriterT, can have a significant negative performance impact because instead of logging right away, it dumps the logs to a data structure which, ideally, would be optimized out but which, in practice, will not be.

There’s a second tracing tool in Haskell called Debug.Trace, but it traces in the order that work is done — which is to say, basically random order, because Haskell does not make guarantees about evaluation order except the guarantee that it will evaluate your code before you need its result.

The built-in Haskell debugger works step by step in basically-arbitrary order, same as the built-in tracer.

A minor note — Haskell doesn’t have a New Relic client, which would have been useful to me at work, although it does have a third party Datadog client.


Customizable control flow (Monad)

Haskell has a syntax feature called do notation, along with a corresponding library-level feature called Monad, which together allow Haskell to provide customizable control flow and imperative-looking programs.

Here’s a sample program in do notation:

main :: IO ()
main = do
  beerBottles 10

beerBottles :: Int -> IO ()
beerBottles 0 =
  putStrLn "I don't know a lot about beer bottles or songs actually"

beerBottles x = do
  putStrLn "I don't know a lot about beer."
  print x
  putStrLn "I'm really sad."
  putStrLn "Sorry, Beavis."
  beerBottles (x - 1)

There are ten million Monad explanations on the internet, so I’m not writing one here.

There’s a little bit of snark in this section, which might bother some readers — I think that this is a case where Haskell’s design is pretty difficult to present without judgment, because it has so many immediately apparent problems.

Usage

Monad allows people to write programs that, line-by-line, have similar structure to programs in ordinary scripting languages. Unlike in normal Haskell (where evaluation order doesn’t matter), Monad can force evaluation to take place in a specific order.

These programs do not have access to C-style for or while loops. They have access to foreach-style loops through the forM family of functions, but those loops don’t allow you to set variables for future iterations, nor is it possible to break early or continue from the next iteration.
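A sketch of a foreach-style loop via forM_ (from Control.Monad); note there is no break, no continue, and no variable carried from one iteration to the next:

```haskell
import Control.Monad (forM_)

-- One action per element, in order; nothing more.
labels :: [String]
labels = map (\i -> "iteration " ++ show i) [1 .. 3 :: Int]

main :: IO ()
main = forM_ labels putStrLn
```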

Even in do notation, Haskell does not provide mutable variables. However, its built-in IO type provides IORefs, which act like objects that gate access to a single mutable field. Haskell also provides STRefs, which do exactly the same thing but are completely incompatible in every way.
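A minimal IORef sketch, using Data.IORef from base:

```haskell
import Data.IORef (modifyIORef', newIORef, readIORef)

-- An IORef gates access to a single mutable cell; every read and
-- write goes through an IO action.
main :: IO ()
main = do
  counter <- newIORef (0 :: Int)
  modifyIORef' counter (+ 1)
  modifyIORef' counter (+ 1)
  final <- readIORef counter
  print final -- 2
```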

Monad allows users to handle errors using the previously-mentioned Maybe and Either types. To do this, users need to use the <- statement (in do notation) or the equivalent >>= function. (provided by the Monad typeclass) This is also needed whenever users do IO. If users want to do IO and handle errors at the same time, they need to use the ExceptT String IO type instead, for some reason.
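As a sketch of Maybe-based error handling in do notation (safeDiv is a hypothetical helper, not a standard function), the first Nothing short-circuits the whole computation:

```haskell
-- A division that reports failure instead of crashing.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

compute :: Maybe Int
compute = do
  a <- safeDiv 10 2 -- Just 5
  b <- safeDiv a 0  -- Nothing: evaluation stops here
  pure (a + b)

main :: IO ()
main = print compute -- Nothing
```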

That type is provided in a separate library that is not installed by default. Because IO operations are defined for the IO type and not for the ExceptT String IO type, using ExceptT requires users to prefix every IO operation with the function liftIO.
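A sketch of that liftIO boilerplate, assuming the transformers package (Control.Monad.Trans.Except); checkAge is a hypothetical name:

```haskell
import Control.Monad.IO.Class (liftIO)
import Control.Monad.Trans.Except (ExceptT, runExceptT, throwE)

-- Mixing IO with error handling: every plain IO action has to be
-- wrapped in liftIO to fit the ExceptT String IO type.
checkAge :: Int -> ExceptT String IO ()
checkAge age = do
  liftIO (putStrLn "checking...") -- an ordinary IO action, lifted
  if age < 0
    then throwE "negative age"
    else liftIO (putStrLn "ok")

main :: IO ()
main = do
  result <- runExceptT (checkAge (-1))
  print result -- Left "negative age"
```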

There are a variety of libraries designed to avoid this inconvenience, called “effect systems.” For instance, you can avoid this by installing polysemy or mtl, which will figure out how to convert the type you were given into an IO value that you can operate on directly. (Note that polysemy requires ten language extensions and two compiler flags, and it is not recommended to use it without its GHC plugin and compile-time code generation.)

Monad also allows Haskell users to express LL(*) parsers tersely. There are several libraries that do that.

Danger zones

Because Haskell implements Monad for (->) a, you will get a cryptic error message if you mistype the number of arguments for a function. (in plain English: an IO value will support the Monad interface, but so will a function that would produce an IO value if called)

Haskell’s MonadFail typeclass has an extra fail method that will swallow some failures normally caused by pattern matching, which may be surprising to some people who expect pattern matching failures to result in a runtime error. It’s even implemented for Maybe, where the behavior is to suppress the error and return Nothing. (aka null)
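A sketch of that Maybe behavior (headOfList is a hypothetical name): the refutable pattern in the do block routes pattern-match failure through fail instead of crashing.

```haskell
-- A failed pattern match inside Maybe's do notation returns Nothing
-- rather than raising a runtime error.
headOfList :: Maybe [Int] -> Maybe Int
headOfList m = do
  (x : _) <- m -- on Just [], this calls 'fail', yielding Nothing
  pure x

main :: IO ()
main = do
  print (headOfList (Just []))     -- Nothing, not a crash
  print (headOfList (Just [1, 2])) -- Just 1
```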

You can use do notation with types that are not instances of Monad, without getting an error message at the do itself. (If an error happens at all, it will be on an invalid invocation of >> or >>=, two of the functions from the Monad interface.)

You’ll receive a compile-time type error if you use the wrong function to operate on a monadic value — if you have an IO Text, printing it will require you to call >>= to get the Text out, or use the (<-) statement, and failing to do so will result in a fairly clear error message.


Conclusions

I think Haskell suffers from a variety of major problems.

Ignoring its purpose as a research tool and describing it only as a programming language, its apparent niche is this: it’s a pretty fast garbage-collected language with a lot of safety features and very terse syntax. In my opinion, it lacks serious competition in this category.

Some languages that target a similar niche to Haskell are C#, Ruby, Rust, and Scala, but each of those languages has misfeatures that disqualify it as a clear competitor:

  • C#: Microsoft; lacks sum types; until recently, limited null safety
  • Ruby: has no static type system; is very slow
  • Rust: has a relatively complicated type system; no garbage collector; verbose; pretty crappy support for iterators
  • Scala: has a relatively complicated type system; has a preponderance of fascists; sbt sucks

(I’m not including Kotlin, because while it competes with Scala, it lacks true pattern matching and doesn’t have traits, implicits, or typeclasses.)

I would argue that the reason Haskell lacks serious competition in this category is that its two goals are contradictory. Goal one is to make it as easy as possible to write code in shorthand; goal two is to increase maintainability. Even Haskell’s standard library leans hard into short names and point-free style. It can genuinely be very hard to read other people’s Haskell code.

I racked my brain for a while for serious contenders to Haskell’s category and — not repeating Scala, which was on the previous list and might otherwise be your best bet — this is what I thought of:

  • the OCaml family (OCaml, F#, and ReasonML)
  • Purescript (which I had a much easier time getting to work on Windows, relative to Haskell)

I’m open to additional recommendations, especially because I can’t recommend Haskell in good conscience based on its other problems. Please note — I’m only looking for things that are suitable for production use, so I don’t intend to add hobby projects or Elm to the list.

Even though Haskell seems to be king of its niche, there are a few things I want to single out for additional scorn. I put this at the bottom so people who don’t like rants can skip them — I think my points are substantiated OK, but I’m a lot angrier while I’m making them.

The tool situation

After a while, you just get tired of seeing WONTFIX.

I don’t really understand how it happened, or particularly care — but every time Stack or GHC fell over for me and died, it was caused by a known bug.

From what I can tell, this is a consequence of Stack (and occasionally Haskell Platform) developers being aggressive about pinning to a particular version, while also not testing on Windows. However, there are a lot of bug reports on Linux that are very similar to what I experienced. I don’t know how to explain this other than apathy.

There’s never really been a time when the experience I had with IDE-like tools in Haskell was comparable to what I’ve had in Rust, even though Rust until recently had a smaller market share than Haskell. I don’t know what it is that prevented the basic “invoke compiler, get error message” flow from working well.

I do know that the vast majority of Haskell development tools have, historically, been pinned to highly specific versions of other development tools. I frankly don’t know if it has ever been possible to install them all at once. I’m sure somebody must have tried it.

Monad

In my opinion, Monad is perceived as mysterious to learners because at basically every turn, Haskell exposes it in a way that is useless to them:

  • There are several major misfeatures that hurt readability and maintenance without a clear selling point. (The worst is Monad ((->) a), which seems redundant given that Reader exists and is equivalent. Allowing do with non-Monad types also seems problematic, as well as rewriting into >>= and >> without first checking that each line has a matching type for the overall structure.)
  • There are no effects systems that aren’t significantly more bureaucratic than whatever scripting languages were already doing.
  • Users are required to use extra utility functions to operate on values in Monad wrappers, compared to ordinary values.

In addition, Monad seems to have little advantage over other tools for regulating access to resources. There are basically two subcategories of monads:

  • Monads that affect the number of exit points code takes. (ContT, ExceptT)
  • Monads that do not. (IO, Reader)

In case one, there are a few major patterns outside of FP-world which the bulk of imperative languages can now support: coroutines, exceptions, and backtracking capture the main cases, and the vast majority of languages can express them. (even if they’re a little limp in the case of backtracking)

In case two, there’s no apparent value to using monads to provide the sandboxing over, say, an object to manage access to each resource. In fact, Haskell’s design seems worse, since IO provides access to all resources at once rather than one resource at a time.

This is the design choice made in the vast majority of OOP languages — not only is it simpler, but it seems to me that it’s much safer however you slice the decision that Haskell made.

The one remaining benefit I can see for monads as a control flow tool — similar to Java with its checked exceptions, Haskell allows you to change the types of parts of your code based on their failure modes. However, converting between monad transformer stacks is typically prohibitively annoying, so most Haskell programmers build a standardized Monad transformer stack for their whole program, losing them even that advantage.

There might still be a case for monads as a tool to provide data structure operations — Haskell uses them for its list comprehensions — but I personally think Haskell’s list comprehensions are arguably underfeatured compared to, say, C#’s, which provide “order by” functionality — and I think that’s a consequence of supporting them through Monad.

I’ll add something else: you really can’t write most loops in Haskell that you can write in other languages. I think having to resort to forM and mapM is really bad, actually, and the lack of an ergonomic interface to mutable variables really sucks.

pytype probably should not be your last line of defense

taming a python

Python is a dynamically-typed programming language with cursory support for static typing.

What that means specifically is that, at compile-time:

  • You can annotate any variable name with a type.
  • The types can be parameterized: List[str] — “list of strings” — and List[int] — “list of integers” — are different types.

At runtime:

  • Python ignores all type annotations.
  • Types cannot be parameterized: type(["a", "b", "c"]) == type([1, 2, 3]) == list

Python includes a convenient function isinstance(x, y) which returns True if x is a member of type y.
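A quick sketch of both halves of that picture (shout is a hypothetical function used only for illustration):

```python
from typing import List


def shout(words: List[str]) -> str:
    """Annotated function; the annotation is advisory only."""
    return " ".join(words).upper()


# At runtime, the annotation is ignored: nothing stops a bad call
# until the body actually misuses the value.
print(shout(["hello", "world"]))  # HELLO WORLD

# Parameterization is erased at runtime, too:
print(type(["a", "b"]) == type([1, 2]) == list)  # True

# isinstance checks the runtime (unparameterized) type:
print(isinstance([1, 2, 3], list))  # True
```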


pytype is a tool from Google that analyzes your program based on its type annotations and determines if those annotations are broken by your program. For instance, pytype will complain if you pass [1, 2, 3] where a List[str] was expected.

There are a few practical problems with pytype which I found when trying to use it at work:

  • pytype seems to have trouble with variance — it complained earlier when I wrote code that expected a List[Dict[str, Union[int, str]]] and received a List[Dict[str, int]] instead. Frankly, I don’t know how to deal with this kind of thing in Python, but an error doesn’t seem right.
  • pytype doesn’t seem to understand the magic tricks used to implement SQLAlchemy, the database library we use

Unfortunately, several competing tools I used had worse problems. In addition to that, Python appears to have completely changed the typechecking-relevant APIs in every major version between 3.5 and 3.9, so many typechecking libraries — including dynamic ones, such as typeguard — didn’t work at all on the relatively modern version of Python we used at work.


At work, we have historically added checks at the public interface boundary which use isinstance(x, y) to fail if a value has the wrong type. This has the advantage that, if code is used in a way we didn’t expect, we always get an error.

In general, I think code should resist misuse when possible. Using code the wrong way should basically always cause a crash. pytype doesn’t create that guarantee, so while I recommend using pytype, I also recommend using type assertions of the kind we use when you’re at runtime.

Some of these checks have a big performance cost. When we expect a list of integers, checking isinstance(x, int) for every integer in a big list is pretty expensive. In our experience, though, most of the assertions we’ve written like this have eventually been triggered.

We have also benefited from checking that values are in their expected ranges (for instance, an age shouldn’t be greater than 100, even though Python integers are unbounded) and that input objects aren’t implausibly large. (a list of 1,000 search keywords would be far too many) Most typechecking libraries don’t help with this, and for this sort of thing, you should really be validating your input anyway.
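A sketch of the kind of boundary check described above; validate_ages and its specific limits are illustrative, not our actual production code:

```python
def validate_ages(ages):
    """Boundary check: fail loudly on misuse instead of corrupting data.

    (Hypothetical example; the name and the limits are illustrative.)
    """
    # Type of the container itself.
    if not isinstance(ages, list):
        raise TypeError(f"expected list, got {type(ages).__name__}")
    # Implausibly large inputs are rejected outright.
    if len(ages) > 1000:
        raise ValueError("implausibly large input")
    for age in ages:
        # Per-element type check; this is the expensive part.
        if not isinstance(age, int):
            raise TypeError("ages must be integers")
        # Range check: Python ints are unbounded, ages are not.
        if not 0 <= age <= 100:
            raise ValueError(f"age out of range: {age}")
    return ages
```

For example, validate_ages([30, 45]) returns the list unchanged, while validate_ages([30, 200]) raises ValueError at the boundary instead of letting the bad value travel deeper into the program.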

Most Python implementations are very slow, so chances are, if you’re using Python, you’ve already accepted a high implicit performance cost. You probably have the infrastructure to scale to a greater number of servers if needed.

That being said — if you’re fearless, disregard this advice! I could be wrong.

A double standard for posts about controversial subjects

art by dexidoodles

Today I wanted to write a post about grift in the functional programming community. I still want to do that, but I got caught up in another topic which I’m thinking about first.

Some of the claims I want to make here are a little controversial. Saying controversial things can really suck, because people will be generally unpleasant in response.

You can usually deal with this by playing to the standards of the community you’re worried about. For instance, while veteran Wikipedia editors tend to come into disagreements with a direct and hostile tone, they tend to change their tone a little bit if you cite Wikipedia’s Assume Good Faith policy.

I generally think that programmers, when they read controversial text, are looking for a sign that the person they’re reading is cosmopolitan and willing to change their mind. They tend to have other, higher standards than that, too: your post should make sense, and your conclusions should be supported by your argument. But looking openminded is one of the baseline requirements you have to meet to be tolerated.

One of the best ways to seem openminded is to seem rational.


Unfortunately, I’m pretty used to finding a programmer post on Reddit or Hacker News, finding out it was written by a woman or a marginalized person, then seeing replies that seem outright antirational. Usually upon reading these, I get the strong feeling that ingroup bias happened and that’s the reason the poster was discounted.

Kind of as a result of this behavior, I think a lot of programmers try to adapt to the response they’re expecting.

What do I mean by that? Well, often, when someone has an unusual thesis, I think they try to make a greater display of being rational. This usually means saying “this is just my opinion, not the facts. I can be convinced” right up at the beginning of the post.

It often means getting to the ultimate conclusion step-by-step, hoping that if people don’t object to any of the steps along the way, they won’t object when the controversial statement is finally made. That strategy often means making a pretty verbose case for each step, and it means that someone who stops reading early will not have seen the final conclusion.

My problem with this is that, generally speaking, when people write more words, it increases the attack surface against their argument, even if they’re making a better case for what they were trying to say. I think this makes it easier to dismiss people who are already trying harder not to be dismissed.


Generally — constructing a good argument often means giving multiple lines of reasoning. If you only give one line of reasoning, then your argument is likely only going to be convincing to people who believe a specific set of premises.

Many people who want to say something controversial will recognize this, and they’ll respond by giving more than one example of whatever they’re trying to argue. They assume some people might balk at their first example and need a second one to be convinced.

The problem is that, as far as I can tell, people usually stop reading the first time they get to a statement they disagree with.

That’s a very big problem for long posts!


As far as I can tell, when you think someone is playing fast and loose with the facts, you should stop treating them as a credible source.

However, I think most programming arguments aren’t about subjects where you can really play fast and loose with the facts. Usually they’re about situations where almost all the information is public.

In general, if I’m giving you a line of reasoning (rather than claims of fact), then you don’t have to trust my credibility at all to hear out whether I’m telling the truth. If I’m telling you that 528 times 256 is equal to 135168, I’m not asking you to take anything on faith. You can independently verify that.

I also think that it’s pretty rare, when someone’s expecting a hostile reception, for that person to be straightforwardly wrong. Usually their claims are objectionable — not wrong, objectionable.

Frankly, you can argue both sides of a lot of issues in programming, because a lot of arguments come down to the question of whether things are satisfactory (a value judgment) rather than objective questions about how they are.

Usually when I see a programming discussion disappear up its own ass, it’s about that sort of thing — not the facts.


Unfortunately, I think programmers aren’t going to change in the short run. I guess my feeling is that this has led to a pretty sticky situation.

It seems like people who feel relatively accepted can claim wild and dubious things — look at the assertions people have been making for years about Haskell performance and compare them to the performance-in-practice of StateT, WriterT, and other abstractions that don’t benefit that much from stream fusion.

On the other hand, there are people who feel marginalized who try to defend their points better to match their audience’s skepticism. Those people get held to task on minor details, even if their overall argument is somewhat well-supported.

I have two recommendations. One is to disclaim upfront that you’re offering multiple reasons to believe the thing you’re arguing. You can reply to critics by attempting to show that they didn’t respond to other parts of your argument. I personally think this is foolish; the last few times I’ve seen people try this, they got subtly ridiculed. In general, I don’t think you can win back your credibility in the eyes of random people online.

My other recommendation is to make short, impressionistic arguments and appeal to the idea that there are multiple ways to demonstrate a claim that you’re making, rather than listing them. When you say “well, there are multiple reasons to believe this,” without listing them, you’re making a single claim that can’t be readily disputed and you’re appealing to the idea that there is a well-known source of authority — collective wisdom — located elsewhere.

You can also list some of those reasons by name without explaining them in detail so that someone who objects to one is forced to notice that there are other ones.

(Of course, you should always have a good explanation of what you mean in mind, just in case someone asks.)

I don’t know how to deal with the general problem that some people are marginalized. There’s a lot of assholes and idiots out there who should probably be evicted, but when people do that, it creates a vacuum that is eventually filled by worse assholes and idiots. It’s a sad state of affairs, although you can avoid it partially by using Rust.

Greetings!

image of a bat waving hello
art by skdaffle

Hey, I’m Nyeogmi!

Some of you have encountered me on Twitter before, where I post about kink and furry topics. However, I’m also interested in code, politics, philosophy, and social adaptation.

In real life, I’m the lead backend developer of a dating site, where I have a particular interest in infrastructure and performance. Sometimes you’ll catch me opining about areas adjacent to my job, like game design and entrepreneurship.


If you’ve never kept up with a blog before, and you’d like to follow mine, then you have some options:

  • bookmarking my page with your browser
  • following me on WordPress (this will cause you to be emailed)
  • following me on Twitter (this will expose you to furry content)
  • using a feed reader such as Feedly (free) or BazQux Reader (paid, but worth it)

In general, blogs are hard to find. Unlike Tweets, they don’t move on their own, and there’s a lot more friction involved in subscribing. Because of that, if I write anything that fascinates you, you should probably repost it to Twitter, Reddit, and other social media.