

Yup. It’s insanity that this is not immediately obvious to every software engineer. I think we have some implicit tendency to assume we can make any tool work for us, no matter how bad.
Sometimes, the tool is simply bad and not worth using.
Naturally, vim is still acceptable.


Except it’s not seamless, and never has been. ORMs of all kinds routinely end up with N+1 queries littered all over the place, and developers using ORMs understand neither the queries being performed nor the optimal indexing strategy. And even if they did know what the performance issue was, they couldn’t even fix it!
Beyond that, because of the fundamental mismatch between the relational model and the data model of application programming languages, you necessarily induce a lot of unneeded complexity with the ORM trying to overcome this impedance mismatch.
A much better way is to simply write SQL queries (sanitizing inputs, ofc), and for each query you write, deserialize the result into whatever data type you want to use in the programming language. It is not difficult, and greatly reduces complexity by allowing you to write queries suited to the task at hand. But developers seemingly want to do everything in their power to avoid properly learning SQL, resulting in a huge mess as the abstractions of the ORM inevitably fall apart.
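E.g. in Haskell with postgresql-simple, it looks something like this (table/column names are made up; the ? placeholder handles the input sanitizing for you):
{-# LANGUAGE OverloadedStrings #-}
import Database.PostgreSQL.Simple
import Database.PostgreSQL.Simple.FromRow

data User = User { userId :: Int, userEmail :: String }

-- column order here matches the SELECT list below
instance FromRow User where
  fromRow = User <$> field <*> field

-- one small function per query, returning exactly the type you want
fetchTeam :: Connection -> Int -> IO [User]
fetchTeam conn teamId =
  query conn "SELECT id, email FROM users WHERE team_id = ?" (Only teamId)
No magic, no hidden queries, and the query is exactly suited to the task at hand.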


Access modifiers are definitely something I despise about OOP languages, though I understand that OOP’s nature makes them necessary.


The encryption thing is definitely weird/crazy and storing the SQL in XML is kinda janky, but sending SQL to a DB server is literally how all SQL implementations work (well, except for SQLite, heh).
ORMs are straight trash and shouldn’t be used. Developers should write SQL or something equivalent and learn how to properly use databases. eDSLs in a programming language are fine as long as you still have complete control over the queries and all queries are expressible. ORMs are how you get shit performance and developers who don’t have the first clue how databases work (because of leaky/bad abstractions trying to pretend databases don’t require a fundamentally different way of thinking from application programming).


I think the point is not that it’s a MacBook, but that the senior is using a single laptop instead of a full multi-monitor setup.
Personally as a senior, I use 4 monitors. My eyes are too shit to stare at a tiny laptop screen all day, and I want slack/browser/terminal windows on their own screens. It’s much more comfortable as well.


Heh yeah that’s pretty straightforward:
SELECT a.*, COALESCE(b.some_col, 'some_default_val') as b_result
FROM a LEFT JOIN b ON (a.id = b.id);
This will produce at least 1 row for every row in a, and if a.id doesn’t match any b.id, the value of b_result will be 'some_default_val'.
Not sure if that’s exactly what you were describing (since it was a little ambiguous), but that’s how I interpreted it.
Ultimately it’s just a matter of investing a little time to learn it. It’s not fundamentally difficult or complex, even though you certainly can write very complex queries.


To be honest, it’s remarkably simple for what it’s doing. There’s a ton of details that are abstracted away. Databases are massively complex things, yet we can write simple queries to interact with them, with semantics that are well-understood and documented. I think, like anything else, it requires a bit of effort to learn (not a lot, though). Once you do, it’s pretty easy to use. I’ve seen many non-technical people learn enough to write one-off queries for their own purposes, which I think is a testament to its simplicity.


It doesn’t arbitrarily double rows or something. For each row in the relation on the left of the join, it produces one output row per row on the right that matches the join condition (and, for a left join, a single NULL-padded row when nothing matches). The output relation of the join may have duplicate rows depending on the contents of each joined relation as well as which columns you project from each.
If you want to remove duplicates, that’s what DISTINCT is for.
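For example (made-up tables), a customer with three orders shows up three times if you only project customer columns:
SELECT c.name
FROM customers c JOIN orders o ON (o.customer_id = c.id);
whereas adding DISTINCT collapses those repeats:
SELECT DISTINCT c.name
FROM customers c JOIN orders o ON (o.customer_id = c.id);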


That’s the whole point of a left join? Anything else wouldn’t be a left join anymore.


“No variables, no functions”
Every major SQL implementation includes both of those things. Of course, they’re rarely needed or desirable if you know how to properly write SQL.
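Postgres, for example (names made up):
CREATE FUNCTION order_total(o_id int) RETURNS numeric AS $$
  SELECT sum(price * quantity) FROM order_items WHERE order_id = o_id
$$ LANGUAGE sql;
And variables, in a DO block:
DO $$
DECLARE cutoff date := '2024-01-01';
BEGIN
  DELETE FROM order_items WHERE created_at < cutoff;
END $$;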
“So why can’t you do that with expressions?”
You can alias expressions.
“And then you try put a MAX in a where and it won’t let you because you gotta pull all the maxes out in their own query, make a table, join them in, and use them like a filter…”
Wtf are you talking about? For one, filtering by the output of an aggregate is what the HAVING clause is for. But even if that didn’t exist, you could just use a subquery instead. You don’t need to make a table…
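E.g. (made-up table):
SELECT customer_id, MAX(amount) AS biggest
FROM orders
GROUP BY customer_id
HAVING MAX(amount) > 100;
Or the subquery route, no extra table anywhere:
SELECT * FROM orders
WHERE amount = (SELECT MAX(amount) FROM orders);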
Tbh it just sounds like you don’t know SQL very well. Which is fine, but doesn’t make for a very compelling criticism. SQL does have warts (even though it’s great overall), but none of what you described are real problems.


Yep. Postgres rocks. No idea why anyone would bother with anything else. They all suck in comparison.


That’s what happens when you start using LLMs for all of your software development. Garbage code all day long.


Because they vibe code the shit out of everything now. Insane shit is bound to happen.


I’ve done that. It sucks.


If your experience with FP is as limited as you say, then, respectfully, you lack the requisite experience to compare the two. It’s an entire paradigm shift that requires you to completely change how you think if you’re accustomed to OOP, and it really requires programming in a language designed for it. The features that OOP languages have cribbed from FP languages are very surface-level and not at all representative of what it actually is (and I’d say they largely miss the point of FP).
I have been writing Haskell professionally for the last 5 years over the course of 2 jobs developing database-backed web services for web and mobile apps, and spent about 5 years before that developing in OOP/imperative languages. I have a good deal of experience with both paradigms.
OOP is more useful as an abstraction than a programming paradigm.
It’s a very poor (and leaky) abstraction, and you can achieve much more powerful abstractions with FP languages (especially Haskell, which has a type system far more powerful than any OOP language is capable of). This is evidenced by the fact that a whole slew of design patterns exist to solve problems created by OOP itself. FP languages, on the other hand, have little need for design patterns, because useful patterns are easy to abstract over and turn into libraries.
Real, human, non-computer programming is object-oriented, and so people find it a natural way of organizing things.
I have no idea what this even means. Programming is taking input and producing output. That, at its core, is a function. Pure and simple. Many useful ideas are quite difficult to express as a noun, which is how you end up with a whole array of super awkwardly named classes that try to “noun-ify” verbs (think classes named like Serializer, Resolver, Initializer, and so on).
It makes more sense to say “for each dog, dog, dog.bark()” instead of “map( bark, dogs)”.
This entirely misses the point of FP. What data is actually being transformed here? It’s a nonsensical example.
A good use case for OOP is machine learning. Despite the industry’s best effort to use functional programming for it, Object oriented just makes more sense.
Not really sure what you’re talking about. No one uses functional programming for machine learning outside of research. To be clear, Python is not functional programming. The reasons for that have nothing to do with the paradigm and everything to do with social reasons and inertia. Python was already popular with academics, and its ecosystem has only grown since. Had history gone differently, functional programming absolutely would have excelled at it and would have been a much better fit, because machine learning is fundamentally a pipeline of transformations on data.
You want a set of parameters, unique to each function applied to the input. This allows you to use each function without referencing the parameters every single time. You can write “function(input)” instead of “function(input, parameters)”.
This is trivial to do in functional programming languages in a variety of ways. Commonly it’s done with the Reader monad, or even simple partial application of functions (which gives you a closure over the parameters).
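The partial application version, sketched in Haskell (toy model, made-up numbers):
-- parameters come first, input last
predict :: [Double] -> [Double] -> Double
predict params input = sum (zipWith (*) params input)

-- apply the parameters once...
model :: [Double] -> Double
model = predict [0.2, 0.5, 0.3]

-- ...and from then on it's just function(input) everywhere
main :: IO ()
main = print (model [1.0, 2.0, 3.0])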


I mean, I have an OOP background. I found FP as a result of my dissatisfaction with OOP. In fact, I used to teach OOP languages to new students and saw the same mistakes over and over again, mistakes that are simply not possible in FP. It’s a very similar story for everyone I work with, too. We all had jobs in various OOP languages before we managed to get jobs writing Haskell.
Oh, and I’m currently teaching Haskell to someone at work who has a CS degree and has only done OOP languages (and C), and while it’s different from what he’s used to, he’s still picking it up very quickly (working towards making him a junior engineer, which I think shouldn’t take too much longer). In fact, just the other day we pair programmed on a bug ticket I have, and he was not only following along with the code, he spotted issues I hadn’t seen yet. Part of it is certainly that he’s smart (which is why I’m doing this in the first place), but part of it is also that, with a bit of familiarity, FP languages are incredibly easy to read and follow.
The primary difference is that FP does everything explicitly, whereas OOP encourages a lot of implicit (and hidden) behavior. When you organize code around functions, there are necessarily more explicit arguments and explicit return values, which makes it far, far easier to follow the flow of logic (and test!). Recently I was trying to read through our Kotlin codebase at work (for our Android app), and it was so much harder because so much is implicit.


I was required to take an ethics class, but it was a complete joke. The guy just wasted a bunch of time on surface-level Philosophy 101 stuff, like talking about who Aristotle was. I’m not sure we even had homework, actually. Real ethics were nowhere to be found.


FP has been around a long time, and yes, it is outright better than OOP. Sorry. I write Haskell professionally, and never have I ever felt like something would be easier done in an OOP language. Quite the opposite.
Unrestricted mutation makes programming really hard, and OOP and mutability go hand-in-hand. If you try to make immutable objects, then you’re just doing FP with worse ergonomics.


Wtf are you talking about? It doesn’t have a fucked up name origin at all. It was named “master” as in “master recording”, like in music production. Proof: https://x.com/xpasky/status/1271477451756056577.
Master/slave concepts were never a thing in git. The whole renaming thing was really fucking stupid. Caused plenty of breakage of scripts and tools for absolutely no good reason whatsoever.