Executives today:
This means if we put AI somewhere in our decision making, we can no longer be held accountable.
That’s why board executives and businesses are so excited about it.
They can finally get rid of McKinsey and blame it all on the cheaper and faster trendy butthole logo of the month.
Why would they get rid of McKinsey? That would make dinner at the club super awkward!
It’s not my fault
I was just following orders
It’s just company policy
It’s just a misstep in the algorithm
I’m sorry the computer said layoffs so… Get fucked.
AI says you’re not a citizen. Deported.
Isn’t that exactly why they do use them for management decisions?
Yup!
“I’m sorry, but your contract is terminated because our management software designated your position as redundant and unnecessary. It wasn’t our decision to let you go, but it was our decision to begin using that software and it was our decision to program it to try to fire as many employees as possible, but it’s not our decision and therefore we can’t be held responsible. Goodbye.”
The same argument works for cartels. “We didn’t all increase our prices to the exact same amount, we just paid a consulting company to tell us which price we should use. Of course our competitors used the exact same company, but that’s just a coincidence.”
Since when are managers held accountable? Is this new?
You know “accountability”, it’s when an executive fucks up and gets to retire early with a multimillion dollar golden parachute.
That’s the neat thing, you can deny accountability by blaming the computer’s decision
A COMPUTER CAN NEVER BE HELD ACCOUNTABLE
THEREFORE A COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION
deleted by creator
I understand that there is always a fall guy. Even before AI was shoved everywhere, those really responsible for the problems they created were not held accountable and put the blame on a fall guy.
A complete one-eighty nowadays…“As a highly paid “business” exec I have no ideas…computer, tell me what to do.”
TBF Management can barely make any management decisions either…
…and are rarely held accountable.
As a US citizen, I think this logic needs to be applied to corporations. The C_Os make all the decisions for the company, so the company should not be held responsible for the shitty actions of its board. The board should be held accountable for the company’s actions, with the penalty served by all the C_Os. And when I say served, I mean fines and prison time in all cases, since a fine is paid personally by the person and time is served personally by the person too.
I know fines are often just “legal, for a price,” so fines should be made to actually hurt them: retirement accounts taken, future earnings taken, income from salary+bonus at the time of the infraction taken, and the offshore account loopholes closed.
Agreed, except you’d better not touch my extremely meager retirement account for some shit the CEO did. I will go full Unabomber.
That’s where the legislation can put in the lawyer talk to address that: it targets the personal accounts of the C_Os.
“No networked computers!” - Colonial Fleet high command standing orders
Cylons hate this little trick.
One of many reasons why I love BSG. As a retro-computing enthusiast, the idea that antique systems are naturally impervious to conventional digital attacks just felt so validating.
Sure, our navigation system is based on a Commodore-64, but good luck getting it to divulge mission-critical information over bluetooth. Or any information for that matter.
The computer can’t be held accountable, but the programmer and operator can.
I could go on a whole thing about mission rules and command decisions here, but I’m sick of typing for the day.
So when is Musk getting held accountable for making a literal US-funded Nazi waifu bot?
When the humans win the class war against the lizards.
Lol what? I’m so out of the loop
I generally agree.
Imagine, however, that a machine objectively makes better decisions than any person. Should we then still trust the human’s decision just to have someone who is accountable?
What is the worth of having someone who is accountable anyway? Isn’t accountability just an incentive for humans to not just fuck things up? It’s also nice for pointing fingers if things go bad - but is there actually any value in that?
Additionally: there is always a person who either made the machine or deployed the machine. IMO the people who deploy a machine and decide that this machine will now be making decisions should be accountable for those actions.
deleted by creator
Tbf that leads to the problem of:
Company/individual makes a program that is in no way meant for making management decisions.
Someone else comes and deploys that program to make management decisions.
The ones that made that program couldn’t stop the ones that deployed it from deploying it.
Even if the maker aimed to make a decision-making program and marketed it as such, whoever deployed it is ultimately the one responsible for it. As long as the maker doesn’t fake tests or certifications, of course; I’m sure that would violate many laws.
The premise is that a computer must never make a management decision. Making a program capable of management decisions already fails that premise. The deployment and use of that program to that end is built on top of that failure.
I believe those who deploy the machines should be responsible in the first place. The corporations who make/sell those machines should be accountable if they deceptively and intentionally program those machines to act maliciously or in somebody else’s interest.
Imagine however, that a machine objectively makes the better decisions than any person.
You can’t know if a decision is good or bad without a person to evaluate it. The situation you’re describing isn’t possible.
the people who deploy a machine […] should be accountable for those actions.
How is this meaningfully different from just having them make the decisions in the first place? Are they too stupid?
You can evaluate effectiveness by company profits. One program might manage a business well enough to steadily increase profit; another may make a sharp profit before profits crash (maybe by firing important workers). Investors will demand the best CEObots.
Edit to add: of course any CEObot will be more sociopathic than any human CEO. They won’t care about literally anything unless a score is attached to it
This… requires a person to look at the profit numbers. To care about them, even. I’m not really sure what you’re getting at.
I think you’re saying that computers can be very good at chess, but we are the ones who decide what the rules to chess are.
Well, I might get disliked for this opinion, but in some cases it’s perfectly fine for a computer to make a management decision. However, this should also mean that the person in charge of said computer, or the one putting the computer’s decision into actual action, should be the one held responsible. It should also be questioned how responsible it even is to consider a computer’s management decisions in a given field. What I’m saying is that there’s no black-and-white answer here.
The directors are not going to like this.
I asked computer if I should read the article, it said no. Am I in an abusive relationship?
That is ridiculous, clearly. I’ll use a mainstream search engine, tailor-made to my needs, to make sure it cannot happen.
But a computer works for “free” so “not being held accountable” is even better!!
Midrange might, but mainframe users pay ongoing amounts to IBM for however much compute they use for the life of the machine.