It’s not generative AI at all, it’s degenerate AI • The Register


Episode 13 “Yes, but you can’t do it like that!” exclaims the boss.

“Of course we can!” says the PFY.

“And we did it!” I add.

“You have no right to do this. You are trying to change the terms of employees’ contracts.”

“Not really. Anyway, they agreed.”

“They only agreed because they couldn’t connect to their devices without it.”

“Their work devices,” I point out.

“It doesn’t matter that they’re work devices – you can’t just tell people that everything they type is going to train a generative AI model.”

“We didn’t say that. We said it was to train a degenerate AI model.”

“You have to read the fine print,” says the PFY.

“NOBODY could read the fine print!” exclaims the boss.

“It got smaller and smaller until it became unreadable.”

“They still accepted it,” I emphasize again.

“They accepted it because the OK button had invisible text on it saying they accepted.”

“It wasn’t invisible. It was a 2-point font,” protests the PFY.

“IT LOOKED LIKE AN UNDERLINE. Either way, it caused problems. People don’t want their professional activity used to train AI models.”

“What if we just used a small portion of it – say 5% – to train the AI models?” I suggest. “They probably wouldn’t mind. We could leave out 85% of their activity – you know, when they’re staring at the screen or searching online for places to eat, drink, stay or buy stuff? Not to mention when they’re on social media, checking their bank balance or showing their colleagues what they did over the weekend.”

“And we could save another 10% by not training the AI to complain.”

“Not to complain?” asks the boss.

“Yes, the daily complaints about the room temperature, the cafeteria food, the uncomfortable chairs, or how not being able to vape indoors is a violation of the Geneva Convention.”

“See, if you were going to train an AI to do a job properly,” I explain patiently to the boss, “you wouldn’t use a human being from this workplace.”

“Or even one of the non-human beings in this place,” the PFY says.

“It doesn’t matter, you can’t do it,” the boss insists.

“But we did it.”

“Yes, and there was an outcry. We have to prepare a statement.”

“A statement?”

“Yes, to reassure our people that we are not going to replace them with AI.”

“It would probably be better to clarify that we don’t use their data to train AI.”

“What is the difference?”

“Well, saying we don’t do it has a temporal context to it. It means we aren’t doing it now. But we did, back then. And if we say we don’t use their data to train AI, that still leaves the door open for them to be replaced by AI in the future – with the training we did back then.”

“But we are not going to replace them with AI in the future,” says the boss.

“In any case,” I continue before the boss can pursue the subject further, “when that first statement doesn’t work, we simply pretend that people misunderstood the wording, and then we clarify it – by issuing a completely different wording with a completely different meaning.”

“No one will believe this,” the boss says, shaking his head.

“Could we tell them that we’re using AI to observe what they’re doing in order to improve the workplace?” suggests the PFY.

“Yes, and then get rid of all the complainers!” I say.

“No, it’s a very bad idea. People are afraid their jobs are at stake.”

“Oh, well, that’s easy. We’ll have a few sycophants from around the company do the rounds of the water coolers, explaining that it was all a misunderstanding and we’re all friends now.”

“What, you think people would do that?”

“People will do surprising things for money – or to ensure they’re not replaced by AI,” the PFY responds.

“Servile obsequiousness is a skill set that AI has yet to convincingly master,” I note.

“No one will be replaced by AI,” insists the boss.

“And yet,” I say, handing the boss a plain envelope.

“What is this?” he asks as he opens it. “What?!”

“Yes, I’m afraid that in the two days between you clicking the OK button and complaining about the conditions you’d OK’d, the degenerate AI model learned everything it needed to from you.”

“What? No, that’s ridiculous. It couldn’t have learned everything about me in a few days.”

“Actually, it took about 10 minutes, because it has a catalog of all your interactions across email, calendar, notes, tasks, and documents. It was just waiting for you to click OK to ‘learn’ everything.”

“Well, it can damn well unlearn it!”

“That’s not possible. You’re now part of a complex data set that can no longer be disentangled,” I explained. “You’re now just an infinitesimal part of a large number of weighting algorithms.”

“Your contribution was almost insignificant compared to the weighted average,” the PFY adds.

“I’ll talk to HR about it.”

“This letter is from ‘HR’,” I note.

“Or rather, the new HR AI,” the PFY says. “Or HAIR, as we call it.”

“I’m going to talk to my lawyer!”

“And he or she will talk to our AI lawyer – who, by the way, doesn’t charge anything to send a legal letter – or several hundred letters for that matter.”

“I’ll talk to the director!”

“Of course you can. Do you want to use my keyboard?…”

“But…”

“I know, but the bright side is that you’re making the company a better place for the three people who will still be here…”
