• 1 Post
  • 45 Comments
Joined 2 years ago
Cake day: June 18th, 2023


  • First time I’m seeing Uiua, and I like it. It’s kind of cute, even though I know I’ll probably never use it.

    However, seeing one of their goals being “code that is as short as possible while remaining readable” is kind of ironic, given how it looks and reads. But I don’t mind, it’s still pretty adorable.

    It looks like it’s hell to learn and write. It’s possible that once you learn all the glyphs (which IMO adds unnecessary complexity that goes against their goal of being readable), it might be easier to parse. I’m probably not the target audience, though.


  • I was doing cybersecurity for a few years before I moved to gamedev, and I vaguely remember that at least the older versions of GUID were definitely not safe, and could be “easily” guessed.

    I had to look it up, in case anyone’s interested. From a quick glance at the UUID RFC, it depends on the version used, but if I’m reading it right, 6 bits out of the 128 are used for version and variant identification, and then, depending on the version, the rest is some kind of timestamp (either from UTC time or derived from a namespace; I didn’t really read through the details) plus a clock sequence, which makes it a lot more guessable. I wonder how different the odds would be for the different UUID versions, but I’m too tired to actually understand the spec well enough to tell.

    However, for GUID version 4, both the timestamp and the clock sequence should instead be randomly generated, which gives you 122 bits of entropy. It of course depends on the implementation and what kind of random generator was used, but I’d say that may be good enough for some uses.

    The spec also says that you specifically should not use it for auth tokens and the like, so there’s that.
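    To illustrate the v4 layout (this uses Python’s standard uuid module; the bit arithmetic is mine, based on the RFC’s field sizes):

    ```python
    import uuid

    u = uuid.uuid4()
    print(u)

    # In a v4 UUID, 4 bits are fixed to the version number (4) and
    # 2 bits to the RFC 4122 variant, so of the 128 bits only
    # 128 - 4 - 2 = 122 are actually random.
    assert u.version == 4
    assert u.variant == uuid.RFC_4122

    random_bits = 128 - 4 - 2
    print(f"{random_bits} bits of entropy")  # 122
    ```

    (CPython’s uuid4 draws from os.urandom, so the quality of those 122 bits is the OS’s CSPRNG — other implementations may differ.)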





  • Aren’t neural networks AI by definition, if we take the academic definition into account?

    I know that a thermostat is an AI, because it reacts to a stimulus (current temperature) and takes an action (starts heating) based on its state. Which is the formal AI definition.

    Wait. That actually means transformers are not AI by definition. Hmm, I need to look into it some more.

    EDIT: I was confusing things, that’s the definition of AI Agent. I’ll go research the AI definition some more :D
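    For what it’s worth, the reflex-agent definition I was mixing up fits in a few lines (a toy sketch; the class and names are made up):

    ```python
    class Thermostat:
        """A minimal reflex agent: perceives a stimulus (temperature)
        and picks an action based on it and its internal state (setpoint)."""

        def __init__(self, setpoint: float):
            self.setpoint = setpoint  # internal state
            self.heating = False

        def perceive_and_act(self, current_temp: float) -> str:
            # React to the stimulus: heat below the setpoint, idle above it
            self.heating = current_temp < self.setpoint
            return "heat" if self.heating else "idle"


    t = Thermostat(setpoint=21.0)
    print(t.perceive_and_act(18.5))  # below setpoint -> "heat"
    print(t.perceive_and_act(22.0))  # above setpoint -> "idle"
    ```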


  • Definitely, but the issue is that even the security companies that actually do the assessments also seem to be heavily transitioning towards AI.

    To be fair, in some cases ML is actually really good (e.g. in EDRs; bypassing an ML-trained EDR is really annoying, since you can’t easily see what it was that triggered the detection, and that’s good), and that will carry most of the prevention and compensate for the vulnerable and buggy software. A good EDR and WAF can stop a lot. That is, assuming you can afford such an EDR; AV won’t do shit. But unless we get another WannaCry, no-one cares that a few dozen people got hacked through a random game/app, “it’s probably their fault for installing random crap anyway”.

    I’ve also already seen a lot of people either writing reports with, or building whole tools that run, “agentic penetration tests”. So, instead of a Nessus scan, or an actual Red Teamer building a scenario themselves, you get an LLM to decide on and execute a random course of action, and they just trust the results.

    Most of the cybersecurity SaaS corporates didn’t care about the quality of the work before, just like the companies actually buying the services didn’t care (but had to tick a checkbox). There’s not really an incentive for them to start now. Worst case, you get into a finger-pointing scenario (“We did have it pentested” -> “But our contract says we can’t find 100% of everything, and this wasn’t found because XYZ… Here’s a report with our methodology showing we did everything right”), or the modern equivalent, “It was the AI’s fault”, and maybe a slap on the wrist. So I think the field will not get more important, just way, way more depressing than it already was three years ago.

    I’d estimate it will take around a decade of unusable software and dozens of extremely major security breaches before any of the large corporations (on either side) concedes that AI was a really, really stupid idea. And by that time they’ll probably also realize that they can just get away with shipping buggy, vulnerable software and not care, since breaches will be pretty commonplace and probably won’t affect larger companies with good (and expensive) frontline mitigation tools.


  • I worked as a pentester and eventually a Red Team lead before leaving for gamedev, and oh god, this is so horrifying to read.

    The state of the industry was already extremely depressing, which is why I left. Even without all of this AI craze, the fact that I was able to get from junior to Red Team Lead in a corporation with hundreds of employees in a span of 4 years is already fucked up, solely because Red Teaming was starting to be a buzzword and I had passion for the field and for Shadowrun, while also being good at presentations that customers liked.

    When I got into the team, the “in-house custom malware” was a web server with a script that polls it for commands to run with cmd.exe. It had a pretty involved custom obfuscation, but it took me like two engagements (and the guy responsible for it leaving) before I even found out, during my own research, that WinAPI is a thing, and that you actually should run stuff from memory and why. And I was just a junior at the time, and this “revelation” eventually got me an unofficial RT Lead position, with 2 MDs per month for learning and internal development; the rest had to be on engagements.

    And even then, we were able to do kind of OK in engagements, because the customers didn’t know and also didn’t care. I was always able to come up with “lessons learned”, and we always found some glaring security policy issues, even with limited tools, but the thing is: they still did not care. We reported something, and two years later they still had the same brute-forceable Kerberos tickets. It already felt like the industry was just a scam done for appearances, and if it’s now just AIs talking to AIs then, well, I don’t think much would change.

    But it sucks. I love offensive security, it was a really interesting few years of my career, but it was so sad to do, if you wanted to do it well :(



  • For me, the issue isn’t so much that they are forcing the data collection (on some users, the free ones, to be clear).

    I have issues with the way they are spending their development money, the money I give them for the product. I don’t care about the AI hype slop, which apparently can’t even get good results (they outright admit this in the blogpost); I’d rather they actually made the core features of the editor better. Everyone knows at this point it’s a hype bubble that will never be usable, and they are grasping at straws.

    I don’t want to pay $200 a year only for them to add a dumb chatbot and data collection to my IDE, or make the code completion dumber and random instead of actually being deterministic. So I don’t: I canceled my subscription and am sticking to the perpetual license while slowly switching to nvim. But I can still make fun of them about it. I have been recommending JetBrains products for most of my life, and they have disappointed me with the direction they are going, so I’ll make sure to un-recommend them.


  • The context is that they made a blogpost that’s written in, at least in my opinion, an extremely pleading tone. They are basically crying that they can’t make a good AI with public data, and asking if you could please turn on their new AI data collection that would steal all your code. I’ve seen a few “we will use your data for AI” posts, and this one was just unsettling in the tone it was written in.

    I can’t really say why, but I find this style of communication pretty unsettling. It has exactly the same vibe as the picture in the post.

    So, if you pay for their IDEs, nothing changes, but you can opt in to them using your data for AI training, and they are pleading with you to do so. If you use the free version, it’s opt-out and turned on by default.


  • I don’t think it’s misleading, or at least the point was not to imply that they are forcing the data collection (which they are, for free users, though it is opt-out). The point is that they are downright emotionally manipulative in the blogpost. The post in which they announce it is, at least in my opinion, written in exactly the same tone as the picture. They are basically crying that they can’t make a good AI without your private data, pleading with you to turn it on.

    I’ve seen a few similar posts of products announcing AI data collection, and this one was the most unsettling, hence the meme.


  • This was one of my biggest issues too, but I did manage to successfully switch to nvim a few months ago, by installing IdeaVim into Rider and vscode-vim into VS Code (so I can’t easily escape it when I get lazy), but most importantly by setting LazyVim as my default editor, which has been a lifesaver.

    It has a pretty good LazyExtras interface for easily installing a ton of plugins, for almost every language. You just open the LazyVim menu, select a language you want, and it installs the LSPs, debuggers and whatnot you may need for it. It’s probably using the nvim-lspconfig mentioned in other comments under the hood, but it has been pretty seamless.

    But any other pre-made nvim config will work, this one is just more approachable than someone’s random plugin list.


  • That’s exactly what I did, switching from Rider. LazyVim helped with getting a usable setup (LSPs especially are a pain to set up without it), https://www.vim-hero.com/ taught me the absolute basics of navigation, and then I simply installed IdeaVim into Rider to force myself to use it, and switched my default editor to LazyVim.

    It has already been a few months, and I’m pretty used to it. I still fumble here and there, and I still have to stop and think when doing more involved operations, but for basic editing I wouldn’t go back.

    The most important observation I have is that it does not make me more efficient at editing text; the fumbles and mistakes usually offset any gains from the many navigation/jump/repeat keys, and reaching for the mouse would be quicker, but -

    It’s super fun. Learning new motions is satisfying, you can see progress, and slowly adding a new motion and then getting it into your muscle memory is simply fun. And there’s always something to learn, a new motion to add or make more efficient. It’s basically gamified text editing, and if you like mastering things in the muscle-memory sense, it’s awesome. I’d absolutely recommend everyone make the switch, but not for the “being faster/more efficient at text editing” reason, because if you want that, learning every single keybind of your current IDE will get you there faster.

    Also, it’s surprisingly comfortable not having to reach for a mouse. It has only been a few months, and I’m getting slightly annoyed whenever a program doesn’t have a hotkey for proper navigation and I have to touch my mouse, hah.






  • I’m more of a fan of https://www.vim-hero.com/.

    Also, one thing I was surprised by when I switched to the LazyVim/IdeaVim/vscode-vim setup a few months ago: it’s a lot of fun. Learning vim properly is like the Dark Souls of typing. Sure, you probably won’t be as efficient for the first few years, but learning new motion combos is pretty fun, to the point where the minor loss in efficiency doesn’t really bother me. Blasting out combos you’ve been practicing to do that one move efficiently, or discovering another cool way to do something, is a continuous and fun process. It’s basically gamifying typing.

    So, if you want a boost in efficiency, just learn all the keybinds your current text editor has (jump to next param/function, multi-line editing, go to definition without using the mouse, etc.), and start using them. You’ll probably master all of them in a few weeks and be much more efficient.

    If, however, you enjoy slowly mastering something, vim will give you years of stuff to learn and master. Is it worth it? Probably not, but it’s surprisingly satisfying!