  • Shit! Sorry, got my wires crossed, I actually meant locality of behavior. Basically, if you’re passing a monad around a bunch without sugar you can’t easily tell what’s in it after a while. Or at least I assume so, I’ve never written anything big in Haskell, just tons of little things.

    I’m not sure if I entirely follow, but in general you actually have much better locality of behavior in Haskell (and FP languages in general) than imperative/OOP languages, because so much more is explicitly passed around and immutable. Monads aren’t an exception to this. Most monadic functions are returning values rather than mutating some distant state somewhere. Statefulness (or perhaps more precisely, mutable aliasing) is the antithesis of locality of behavior, and Haskell gives you many tools to avoid it (even though you can still do it if you truly need it).
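
    To make that concrete with a small, hypothetical example (names invented for illustration), a “stateful-looking” update in Haskell is usually just a function from the old value to a new one, so the behavior stays local to the call site:

    -- No distant state is mutated: the caller passes a Cart in and gets a
    -- new Cart back, so what it holds is always exactly what it can see.
    data Cart = Cart { items :: [String] } deriving Show

    addItem :: String -> Cart -> Cart
    addItem item cart = cart { items = item : items cart }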

    I’m not really sure what you mean by “don’t really know what’s in it after a while”. It might be helpful to remember that lists are monads. If I’m passing around a list, there’s not really any confusion as to what it is, no? The same concept applies to any monadic value you pass around.
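
    As a minimal, generic illustration of that, here's the list monad in do-notation; the value being passed around is just an ordinary list, and nothing about it is hidden:

    -- The list monad: do-notation here just means "for each combination".
    pairs :: [(Int, Char)]
    pairs = do
      n <- [1, 2, 3]
      c <- ['a', 'b']
      pure (n, c)
    -- pairs == [(1,'a'),(1,'b'),(2,'a'),(2,'b'),(3,'a'),(3,'b')]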

    Yeah, that makes tons of sense. It sounds like Transaction is doing what a string might in another language, but just way more elegantly,

    I think you might have misunderstood what I was describing. The code we write doesn’t actually change, but the behavior of the code changes due to the particular monad’s semantics. So for example, let’s say I write a query that updates some rows in a table, returning a count of the rows affected. In this Transaction code block, let’s say I execute this query and then send the returned number of rows to an external service. In code, it looks like the API call immediately follows the database call. To give some Haskell pseudocode:

    example :: Transaction ()
    example = do
      affectedRows <- doUpdateQuery  -- runs inside the database transaction
      doApiCall affectedRows         -- collected and deferred until after COMMIT
      return ()
    

    But because of how Transaction is defined, the actual order of operations when example is run becomes this:

    1. Send BEGIN; to the DB
    2. Call doUpdateQuery
    3. Send COMMIT; to the DB
    4. If the transaction was successfully committed, execute doApiCall affectedRows; otherwise, do nothing

    In essence, the idea is to let you colocate your side-effectful code with your database code without worrying about accidentally holding a transaction open longer than necessary (which can be costly) or firing off an API call mistakenly. In fact, you don’t have to worry about managing the transaction at all; it’s all done for you.
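
    To give a rough idea of how something like this can be built (a hedged sketch with invented names like DBConnection, execute, and deferAfterCommit, not our actual implementation), you can model a Transaction as a database action that also accumulates deferred IO actions, and have the runner fire them only after a successful COMMIT:

    import Control.Exception (onException)

    -- Hypothetical low-level DB primitives, stubbed out for the sketch.
    data DBConnection = DBConnection
    execute :: DBConnection -> String -> IO ()
    execute _ _ = pure ()

    -- A Transaction is a DB action that also collects side effects to run
    -- only after the transaction commits successfully.
    newtype Transaction a = Transaction (DBConnection -> IO (a, [IO ()]))

    instance Functor Transaction where
      fmap f (Transaction run) = Transaction $ \conn -> do
        (a, fx) <- run conn
        pure (f a, fx)

    instance Applicative Transaction where
      pure a = Transaction $ \_ -> pure (a, [])
      Transaction runF <*> Transaction runA = Transaction $ \conn -> do
        (f, fx1) <- runF conn
        (a, fx2) <- runA conn
        pure (f a, fx1 ++ fx2)

    instance Monad Transaction where
      Transaction runA >>= k = Transaction $ \conn -> do
        (a, fx1) <- runA conn
        let Transaction runB = k a
        (b, fx2) <- runB conn
        pure (b, fx1 ++ fx2)

    -- Queue an effect to run after a successful commit; dropped on rollback.
    deferAfterCommit :: IO () -> Transaction ()
    deferAfterCommit effect = Transaction $ \_ -> pure ((), [effect])

    -- BEGIN, run the block, COMMIT, then fire the deferred effects.
    -- If anything fails before the COMMIT, roll back and discard them.
    runTransaction :: DBConnection -> Transaction a -> IO a
    runTransaction conn (Transaction run) = do
      execute conn "BEGIN;"
      (a, fx) <- runAndCommit `onException` execute conn "ROLLBACK;"
      sequence_ fx
      pure a
      where
        runAndCommit = do
          r <- run conn
          execute conn "COMMIT;"
          pure r

    In a setup like this, something like doApiCall would presumably be written in terms of deferAfterCommit, which is how the API call ends up running after the COMMIT instead of in the middle of the transaction.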

    which fits into the data generation kind of application. I have no idea how you’d code a game or embedded real-time system in a non-ugly way, though.

    I mean, you’re most likely not going to be using an SQL database for either of those applications (I realize I assumed that was obvious when talking about transactions, but perhaps that was a mistake to assume), so it’s not really applicable.

    I also get the general impression that you think Haskell occupies some special, amorphous data-processing niche and doesn’t really get used the way other languages do, and if that’s the case, I’d certainly like to dispel that notion. As I mentioned above, we have a pretty sizeable backend codebase written in Haskell, serving up HTTP JSON APIs for a SaaS product in production. Our APIs drive all (well, most) user interaction with the app. It’s a very good choice for the typical database-driven web and mobile applications businesses build.

    Ironically, I probably wouldn’t use Haskell for heavy data processing tasks, mainly because Python has such an immense ecosystem for it (whether or not it should is another matter, but it is what it is)… What Haskell is great at is stuff like domain modeling, application code (particularly web applications where correctness matters a lot, like fintech, healthcare, cybersecurity, etc.), compilers/parsers/DSLs, CLI tools, and so on.


  • I’m not sure what you mean by “locality of reference”. I assume you mean something other than the traditional meaning regarding how processors access memory?

    Anyway, it’s often been said (half-jokingly) that Haskell is a nicer imperative language than imperative languages. Haskell gives you control over what executing an “imperative” program actually means in a way that imperative languages don’t.

    To give a concrete example: we have a custom monad type at work that I’m simply going to call Transaction (it has a different name in reality). It lets you execute database calls inside the same transaction, and it can be arbitrarily composed with other code blocks of type Transaction while still being guaranteed to run inside that same transaction. Any other side effects you write inside the Transaction code block are collected and deferred until after the transaction successfully commits, and are discarded otherwise. Very useful, and not something that’s easy to implement in imperative languages. In Haskell, it’s maybe a dozen lines of code and a few small helper functions.

    It also has a type system that is far, far more powerful than what mainstream imperative programming languages are capable of. For example, our API specifications are described entirely using types (using the servant library), which allows us to do things like statically generate API docs, type-check our API implementation against the specification (so our API handlers are statically guaranteed to return the response types they say they do), automatically generate type-safe API clients, and more.
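
    For a flavor of what that looks like (a small illustrative sketch with a made-up endpoint and payload type, not our actual API), the specification is literally a type, and the handlers are checked against it:

    {-# LANGUAGE DataKinds #-}
    {-# LANGUAGE DeriveGeneric #-}
    {-# LANGUAGE TypeOperators #-}

    import Data.Aeson (FromJSON, ToJSON)
    import GHC.Generics (Generic)
    import Servant

    -- Hypothetical payload type, purely for illustration.
    data User = User { userId :: Int, userName :: String }
      deriving (Generic)

    instance ToJSON User
    instance FromJSON User

    -- The whole API is a type: GET /users/:userId and POST /users.
    type UserAPI =
           "users" :> Capture "userId" Int :> Get '[JSON] User
      :<|> "users" :> ReqBody '[JSON] User :> Post '[JSON] User

    -- Handlers are checked against the type above; returning the wrong
    -- shape is a compile error, not a runtime surprise.
    server :: Server UserAPI
    server = getUser :<|> createUser
      where
        getUser :: Int -> Handler User
        getUser uid = pure (User uid "example")

        createUser :: User -> Handler User
        createUser = pure

    That same API type is what the documentation and client generation tooling works from.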

    We have about half a million lines of Haskell in production serving as a web API backend powering our entire platform, including a mobile app, web app, and integrations with many third parties. It’s served us very well.


  • Speaking as a senior engineer who writes Haskell professionally: this just isn’t really true. We push side effects to the boundaries of the system and do as much logic and computation as possible in pure functions.

    It’s basically just about minimizing external touch points and making your code easier to test and reason about. Which, incidentally, is also good design in non-FP languages. FP programmers are just generally more principled about it.
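
    A tiny, made-up illustration of that shape: the logic is a pure function you can test with plain values, and the only IO sits at the edge of the program.

    -- All of the "business logic" is pure and trivially testable...
    applyDiscount :: Double -> Double -> Double
    applyDiscount rate price = price * (1 - rate)

    -- ...and the side effects (reading input, printing) live at the boundary.
    main :: IO ()
    main = do
      line <- getLine
      case reads line of
        [(price, "")] -> print (applyDiscount 0.1 price)
        _             -> putStrLn "that wasn't a number"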


  • Because they are universally incapable of coming anywhere close to the full power of git.

    I can’t tell you how many times I’ve had GUI-only people ask me to unfuck their repo (fortunately not at my current job, because everyone uses the CLI and actually knows what they’re doing). It’s an impediment to actually learning the tool.

    Ultimately any GUI is a poor, leaky abstraction over git that restricts many of the things you can do for little actual benefit.


  • The point of the joke is not that the Python interpreter will change types mid-program on its own, but that you don’t have any real way of knowing if you’re going to get the type you expect.

    Programs are messy and complicated, and data might flow through many different systems before finally being used for output. It can and often does happen that one of those systems doesn’t behave as expected, and you get bugs where one type is expected but another actually shows up.

    Yes, most likely what would happen in Python is a TypeError, not actual output, but it was pretty clearly minor hyperbole for the sake of the joke.