

Is there really a technology difference between Windows 7, Windows 8, Windows 10 and Windows 11?
https://en.wikipedia.org/wiki/Features_new_to_Windows_8
Off-and-on trying out an account over at @tal@oleo.cafe due to scraping bots bogging down lemmy.today to the point of near-unusability.




Mexican drug lord.
One thing you might do that would address the broader issue is submit news that isn’t about Trump. I mean, so far, this is the only post you’ve submitted. You want to see news about something else, someone has to submit it. It won’t submit itself.
Sure. Why wouldn’t they?
I mean, some research isn’t very amenable to near-term use.
https://en.wikipedia.org/wiki/Blue_skies_research
Blue skies research, also called blue sky science, is scientific research in domains where “real-world” applications are not immediately apparent. It has been defined as “research without a clear goal”[1] and “curiosity-driven science”.
That’s harder to justify, hard to make a return on. For very long-term research, maybe you have an easier time with governments doing research.
But if you can produce valuable intellectual property that they can use, sure, businesses will hire you to produce stuff for their business. I guarantee you that businesses are going to be funding a whole lot of AI research right now, for example: what breakthroughs happen there will have enormous impact on things like whether or not OpenAI’s investments to get an early lead in hardware and datacenters pay off.
My own experience in private-sector research is that there’s a fuzzier line between research and development than you might think. That is, a company might want to have people in their labs directly facilitate research turning into product that can come to market.
But if you go out and look at, say, patent applications, you’ll find immense numbers filed by companies.
searches
Some numbers to support the above:
https://www.nsf.gov/nsb/updates/new-report-shows-business-rd-funding-dominates-us-rd
The United States is the largest performer of research and experimental development (R&D), with $806 billion in gross domestic expenditures on R&D in 2021, followed by China, with $668 billion. While overall funding of R&D in the U.S. continues to rise rapidly, the share of basic research funded by the federal government has fallen in the previous decade. Business funding of U.S. R&D surpassed federal funding in the 1980s and now dominates the U.S. R&D enterprise.
The business sector is by far the largest performer of U.S. R&D, performing an estimated $693 billion in 2022, or 78% of U.S. R&D. Nearly 80% of business investment is in experimental development – the stage when the promise of near-term commercial benefit is real.
If databases are involved they usually offer some method of dumping all data to some kind of text file. Usually relying on their binary data is not recommended.
It’s not so much text or binary. It’s because a normal backup program that just treats a live database file as a file to back up is liable to have the DBMS software write to the database while it’s being backed up, resulting in a backed-up file that’s a mix of old and new versions, and may be corrupt.
Either:
1. use the DBMS's own dump tool to write a consistent copy of the database to an ordinary file, and back that file up,
or:
2. take an atomic filesystem snapshot and run the backup against the snapshot, so the database file can't change partway through the backup.
In general, if this is a concern, I’d tend to favor #2 as an option, because it’s an all-in-one solution that deals with all of the problems of files changing while being backed up: DBMSes are just a particularly thorny example of that.
Full disclosure: I mostly use ext4 myself, rather than btrfs. But I also don’t run live DBMSes.
EDIT: Plus, #2 also provides consistency across different files on the filesystem, though that’s usually less-critical. Like, you won’t run into a situation where you have software on your computer update File A, then does a sync(), then updates File B, but your backup program grabs the new version of File B but then the old version of File A. Absent help from the filesystem, your backup program won’t know where write barriers spanning different files are happening.
In practice, that’s not usually a huge issue, since fewer software packages are gonna be impacted by this than write ordering internal to a single file, but it is permissible for a program, under Unix filesystem semantics, to expect that the write order persists there and kerplode if it doesn’t…and a traditional backup won’t preserve it the way that a backup with help from the filesystem can.
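To make the "ask the DBMS for a consistent copy" option concrete, here's a minimal sketch using Python's stdlib `sqlite3` purely as a stand-in DBMS (the function name and file paths are hypothetical). The point is that the copy is made through the database engine, which coordinates with concurrent writers, rather than by naively copying the live file:

```python
import sqlite3

# Sketch: instead of copying a live database file (which can yield a torn
# mix of old and new pages), ask the DBMS itself for a point-in-time copy.
# SQLite is used here only as an example; other DBMSes ship their own dump
# or backup tools that serve the same purpose.
def consistent_copy(src_path: str, dst_path: str) -> None:
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dst_path)
    try:
        # sqlite3.Connection.backup() copies the database safely with
        # respect to concurrent writers, so the copy is never half-updated.
        src.backup(dst)
    finally:
        dst.close()
        src.close()
```

The resulting `dst_path` file is an ordinary, internally consistent database file that a normal backup program can then pick up without any of the torn-write concerns above.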


I suspect that it wildly varies based on user.
If you use video streaming from, say, YouTube and TikTok heavily, and you do so outside of WiFi networks, I bet that you can burn through a lot of data pretty quickly.
Another user could be sitting on Lemmy or whatever all day every day and just not use much data.


@dumbass@lemmy.world @dumbass@sh.itjust.works @dumbass@sopuli.xyz
Lot of that going around, apparently.


https://en.wikipedia.org/wiki/Multi-user_dungeon
For a list of still-active MUDs:


I think that the problem will be if software comes out that doesn't target home PCs. That's not impossible. I mean, that happens today with Web services. Closed-weight AI models aren't going to be released to run on your home computer. I don't use Office 365, but I understand that at least some of that is a cloud service.
Like, say the developer of Video Game X says “I don’t want to target a ton of different pieces of hardware. I want to tune for a single one. I don’t want to target multiple OSes. I’m tired of people pirating my software. I can reduce cheating. I’m just going to release for a single cloud platform.”
Nobody is going to take your hardware away. And you can probably keep running Linux or whatever. But…not all the new software you want to use may be something that you can run locally, if it isn’t released for your platform. Maybe you’ll use some kind of thin-client software — think telnet, ssh, RDP, VNC, etc for past iterations of this — to use that software remotely on your Thinkpad. But…can’t run it yourself.
If it happens, I think that that’s what you’d see. More and more software would just be available only to run remotely. Phones and PCs would still exist, but they’d increasingly run a thin client, not run software locally. Same way a lot of software migrated to web services that we use with a Web browser, but with a protocol and software more aimed at low-latency, high-bandwidth use. Nobody would ban existing local software, but a lot of it would stagnate. A lot of new and exciting stuff would only be available as an online service. More and more people would buy computers that are only really suitable for use as a thin client — fewer resources, closer to a smartphone than what we conventionally think of as a computer.
EDIT: I’d add that this is basically the scenario that the AGPL is aimed at dealing with. The concern was that people would just run open-source software as a service. They could build on that base, make their own improvements. They’d never release binaries to end users, so they wouldn’t hit the traditional GPL’s obligation to release source to anyone who gets the binary. The AGPL requires source distribution to people who even just use the software.


I will say that, realistically, in terms purely of physical distance, a lot of the world’s population is in a city and probably isn’t too far from a datacenter.
https://calculatorshub.net/computing/fiber-latency-calculator/
It’s about five microseconds of latency per kilometer down fiber optics, so about ten microseconds per kilometer for a round trip.
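The arithmetic behind that figure is straightforward, assuming light in glass travels at roughly two-thirds the vacuum speed of light (about 200,000 km/s); the distances here are just illustrative:

```python
# Back-of-the-envelope fiber propagation delay.
# Assumption: signal speed in fiber is ~2/3 of c, i.e. about 200,000 km/s.
C_FIBER_KM_PER_S = 200_000

def one_way_latency_us(distance_km: float) -> float:
    """One-way propagation delay down fiber, in microseconds."""
    return distance_km / C_FIBER_KM_PER_S * 1e6

print(one_way_latency_us(1))        # ~5 us per km, as above
print(2 * one_way_latency_us(100))  # round trip to a datacenter 100 km out
```

So even a datacenter 100 km away only adds about a millisecond of round-trip propagation delay, which is why raw distance isn't the binding constraint for most people.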
I think a larger issue might be bandwidth for some applications. Like, if you want to unicast uncompressed video to every computer user, say, you’re going to need an ungodly amount of bandwidth.
DisplayPort looks like it’s currently up to 80 Gb/sec. Okay, not everyone is currently saturating that, but if you want comparable capability, that’s what you’re going to have to be moving from a datacenter to every user. For video alone. And that’s assuming that they don’t have multiple monitors or something.
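To see where numbers like that come from, uncompressed video bandwidth is just pixels times bits-per-pixel times frame rate; the resolution and refresh rate below are illustrative (24 bpp is plain 8-bit RGB, with no audio or protocol overhead counted):

```python
# Rough uncompressed video bandwidth: width * height * bits-per-pixel * fps.
def video_gbps(width: int, height: int, bpp: int, fps: int) -> float:
    """Raw video bandwidth in gigabits per second (no overhead)."""
    return width * height * bpp * fps / 1e9

print(video_gbps(3840, 2160, 24, 60))  # a single 4K 60 Hz stream: ~11.9 Gb/s
```

And that's one stream at a fairly ordinary resolution; higher refresh rates, HDR bit depths, and multiple monitors multiply it from there.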
I can believe that it is cheaper to have many computers in a datacenter. I am not sold that any gains will more than offset the cost of the staggering fiber rollout that this would require.
EDIT: There are situations where it is completely reasonable to use (relatively) thin clients. That’s, well, what a lot of the Web is — browser thin clients accessing software running on remote computers. I’m typing this comment into Eternity before it gets sent to a Lemmy instance on a server in Oregon, much further away than the closest datacenter to me. That works fine.
But “do a lot of stuff in a browser” isn’t the same thing as “eliminate the PC entirely”.


“Write code without learning it!” I get it. I’ve struggled learning to program for 10 years. But every time I hear a programmer talk about AIGen code, it’s never good, and my job’s software has gotten less stable as AIGen code has been added in.
I’m similarly dubious about using LLMs to do code. I’m certainly not opposed to automation — software development has seen massive amounts of automation over the decades. But software is not very tolerant of errors.
If you’re using an LLM to generate text for human consumption, then an error here or there often isn’t a huge deal. We get cued by text; “approximately right” is often pretty good for the way we process language. Same thing with images. It’s why, say, an oil painting works; it’s not a perfect depiction of the world, but it’s enough to cue our brain.
There are situations where “approximately right” might be more-reasonable in software development. There are some where it might even be pretty good — instead of manually-writing commit messages, which are for human consumption, maybe we could have LLMs describe what code changes do, and as LLMs get better, the descriptions improve too.
This doesn’t mean that I think that AI and writing code can’t work. I’m sure that it’s possible to build an AGI that does fantastic things. I’m just not very impressed by using a straight LLM, and I think that the limitations are pretty fundamental.
I’m not completely willing to say that it’s impossible. Maybe we could develop, oh, some kind of very-strongly-typed programming language aimed specifically at this job, where LLMs are a good heuristic to come up with solutions, and the typing system is aimed at checking that work. That might not be possible, but right now, we’re trying to work with programming languages designed for humans.
Maybe LLMs will pave the way to getting systems in place that have computers do software engineering, and then later we can just slip in more-sophisticated AI.
But I don’t think that the current approach will wind up being the solution.
“Summarize a book!” I am doing this for fun, why would I want to?
Summarizing text — probably not primarily books — is one area that I think might be more useful. It is a task that many people do spend time doing. Maybe it’s combining multiple reports from subordinates, say, and then pushing a summary upwards.
“Generate any image!” I get the desire, but I can’t ignore the broader context of how we treat artists. Also the images don’t look that great anyway.
I think that in general, quality issues are not fundamental.
There are some things that we want to do that I don’t think that the current approaches will do well, like producing consistent representations of characters. There are people working on it. Will they work? Maybe. I think that for, say, editorial illustration for a magazine, it can be a pretty decent tool today.
I’ve also been fairly impressed with voice synth done via genAI, though it’s one area that I haven’t dug into deeply.
I think that there’s a solid use case for voice query and response on smartphones. On a desktop, I can generally sit down and browse webpages, even if an LLM might combine information more quickly than I can manually. Someone, say, driving a car or walking somewhere can ask a question and have an LLM spit out an answer.
I think that image tagging can be a pretty useful case. It doesn’t have to be perfect — just a lot cheaper and more universal than it would be to have humans doing it.
Some of what we’re doing now, both on the part of implementers and on the R&D people working on the core technologies, is understanding what the fundamental roadblocks are, and quantifying strengths and weaknesses. That’s part of the process for anything you do. I can see an argument that more-limited resources should be put on implementation, but a company is going to have to go out and try something and then say “okay, this is what does and doesn’t work for us” in order to know what to require in the next iteration. And that’s not new. Take, oh, the Macintosh. Apple tried to put out the Lisa. It wasn’t a market success. But taking what did work and correcting what didn’t was a lot of what led to the Macintosh, which was a much larger success and closer to what the market wanted. It’s going to be an iterative process.
I also think that some of that is laying the groundwork for more-sophisticated AI systems to be dropped in. Like, if you think of, say, an LLM now as a placeholder for a more-sophisticated system down the line, the interfaces are being built into other software to make use of more-sophisticated systems. You just change out the backend. So some of that is going to be positioning not just for the current crop, but tomorrow’s crop of systems.
If you remember the Web around the late 1990s, the companies that did have websites were often pretty amateurish-looking. They were often not very useful. The teams that made them didn’t have a lot of resources. The tools to work with websites were still limited, and best practices not developed.
https://www.webdesignmuseum.org/gallery/year-1997
But what they did was get a website up, start people using them, and start building the infrastructure for what, some years later, was a much-more-important part of the company’s interface and operations.
I think that that’s where we are now regarding use of AI. Some people are doing things that won’t wind up ultimately working (e.g. the way Web portals never really took over, for the Web). Some important things, like widespread encryption, weren’t yet deployed. The languages and toolkits for doing development didn’t really yet exist. Stuff like Web search, which today is a lot more approachable and something that we simply consider pretty fundamental to use of the Web, wasn’t all that great. If you looked at the Web in 1997, it had a lot of deficiencies compared to brick-and-mortar companies. But…that also wasn’t where things stayed.
Today, we’re making dramatic changes to how models work, like the rise of MoEs. I don’t think that there’s much of a consensus on what hardware we’ll wind up using. Training is computationally expensive. Just using models on a computer yourself still involves a fair amount of technical knowledge, much the way the MS-DOS era prevented a lot of people from being able to do a lot with personal computers. There are efficiency issues, and basic techniques for doing things like condensing knowledge are still being developed. LLMs people are building today have very little “mutable” memory — you’re taking a snapshot of information at training time and making something that can do very little learning at runtime. But if I had to make a guess, a lot of those things will be worked out.
I am pretty bullish on AI in the long term. I think that we’re going to figure out general intelligence, and make things that can increasingly do human-level things. I don’t think that that’s going to be a hundred years in the future. I think that it’ll be sooner.
But I don’t know whether any one company doing something today is going to be a massive success, especially in the next, say, five years. I don’t know whether we will fundamentally change some of the approaches we used. We worked on self-driving cars for a long time. I remember watching video of early self-driving cars in the mid-1980s. It’s 2026 now. That was a long time. I can get in a robotaxi and be taken down the freeway and around my metro area. It’s still not a complete drop-in replacement for human drivers. But we’re getting pretty close to being able to use the things in most of the same ways that we do human drivers. If you’d have asked me in 2000 whether we would make self-driving cars, I would say basically what I do about advanced AI today — I’m quite bullish on the long-term outcome, but I couldn’t tell you exactly when it’ll happen. And I think that that advanced AI will be extremely impactful.


Now my question is, who’s making that one query that leaks my domain name? Is it Apache on startup?
If you’re wanting a list of DNS queries from your system, assuming that it’s DNS and not DoH, maybe:
# tcpdump port domain
Then go start Apache or whatever.
Okay, this is unfortunately DIY, but if you’re willing to spend time:
Get a plywood box and put it in there.
If you hear vibrations, put sorbothane between the NAS and the box.
If you need more sound absorption, put acoustic foam on the inside.
If you need more cooling, drill two holes, mount a case fan, and run the air through some kind of baffle. Like, maybe attach insulated flex-duct, like this:
https://www.amazon.com/Hon-Guan-Silencer-Reducer-Ventilation/dp/B07HC8CXQG
I think that it’s going to be hard to provide a meaningful answer. There are a wide range of fields that use the scientific process, the stuff that you’d call “science”.
Some of those, no doubt, produce a strong return on investment. You could say, in purely financial terms, that research there makes a lot of sense. Producing, say, the integrated circuit is something that transformed the world.
I am sure that if you looked, you could find some areas that don’t do that.
In some of these latter cases — say, cosmology — I doubt that there are likely direct financial returns, but if we want to understand where the universe has been and where it’s going, we have to place some kind of value on that and fund it to that value.
But…science isn’t a single entity that you fund or don’t fund to a given amount. It’s people working in a wide range of fields. It’s like saying “should we fund sysadmins more” or “should we fund human resource departments more”. The answer is almost certainly going to be “it depends on the specific case”.
I tried a fully fledged consumer NAS (QNAP with Seagate 12 TB NAS drives) but the noise of the platters was not acceptable.
If you have a NAS, then you can put it as far away as your network reaches. Just put it somewhere where you can’t hear the thing.


“What this tomato needs is some tomato sauce.”


Dice 'em both, cook 'em together and eat them both.
Tomato and potato work together, so I figure that totato and pomato should too.


I wouldn’t.
Depressions aren’t a zombie invasion or a nuclear war. They’re a reduction in economic activity. Some percentage of people get laid off, are out of work for a time.
If you get laid off, you’re probably going to want a financial buffer, and buying stuff ahead of time is most likely not a great idea. Better to have the buffer.
If you don’t get laid off, not likely a lot you can do to prepare.
I’d also add that while depressions affect a large area, it’s entirely possible for particular areas to see economic decline even if the country as a whole is seeing rising growth. Like, say a major employer in a small town goes out of business.


If you think that the post shouldn’t be up, contact the moderators or admins — they are the ones to make a call on that. It’s not the role of individual users to play moderator.
Well, you could want to not dig into swap.