Don't let AI move in

AI telling porkies amid major PR push in Oz

The people flogging AI are on a major PR push in Australia, but there are serious issues with AI in the workplace.

Waleed Aly – Fairfax columnist, author and academic – has pointed out some of them in his article below.

“AI may be the most transformative technology in human history,” wrote the Treasurer this week.

It’s the rarest kind of statement: at once emphatically grand and altogether too modest. Grand, because it has AI outstripping, say, the wheel, electricity or the internet.

Modest, because it puts AI on the same continuum as all this, as though its difference is merely one of scale and speed, rather than something more fundamental. In this way, Jim Chalmers echoes the spirit of the Productivity Commission’s latest interim report, also released this week, examining how AI will change our economy.

It would be fair to say the report is optimistic, seduced by AI’s promise of significantly increased productivity. To that end, it warns the government against overregulating AI, on the basis that this would slow down its productive march.

Chalmers’ version of the same idea has government regulating “as much as necessary to protect Australians, but as little as possible to encourage innovation”. All of this presumes some kind of clear-eyed assessment of AI’s risks. And it’s hereabouts that all this optimism should give us pause.

Not because either the government or the commission denies there are serious risks. But because they characterise these risks – much as they characterise AI – as mere extensions of previous experience. Indeed, the commission’s report could hardly be clearer on this point.

After running through a series of potential problems – including serious ones such as AI making mistakes in high-risk situations such as healthcare or law enforcement – it concludes there is ultimately nothing new to see here: “AI can exacerbate existing risk of harm but does not create wholly new risks where none existed before”.

Meanwhile, Chalmers acknowledges the possibility of significant unemployment, but believes it will not be widespread or structural. To this end, he makes the observation that while technological developments always eliminate jobs, they create more than they destroy. “We’ve seen this play out before,” he affirms.

But this is more an assumption than an argument. It assumes that all technological advancement is some single, undifferentiated phenomenon, such that its history broadly repeats. But this is something the Albanese government must know not to be true. It is, after all, implementing a ban on social media platforms for children under 16, a belated response to a damaging technology we spent years assuming would be as benign as, say, television.

Now we seem to be making a similar assumption: that AI will neatly fit into a benign pattern. That assumption only holds to the extent that AI is analogous with most of what has come before. And in the circumstances, we’d be wise to examine it far more rigorously before settling on it, because there are good reasons to suppose it is a different species altogether, one for which history is a poor guide.

That’s because technology has hitherto been something we build. We determine what it is and what it will do, and design it to achieve those ends. We cannot foresee all of its applications and cannot prevent people appropriating it for nefarious purposes. But however it is used, it is an extension of our various goals. We understand how it works, and we put it to use accordingly. But AI is not something we build. It is instead something we train. That is, we give it the tools to direct itself.

As if to illustrate this starkly, the companies developing AI are training it to code. That is, they are training AI to program itself. And as AI reaches the point of thinking faster and better than any human can, it follows that AI will quickly produce the fastest, most sophisticated computer programmers on the planet. At that point, it will make no sense to talk of AI in the way we talk of other technology. It will no longer be a tool; it will be an agent.

Once it achieves this, it can begin developing itself. What might have taken human coders years might take AI weeks. This development makes it better and faster, which in turn makes it better and faster, all of which amounts to exponential acceleration.

At this point, we’re no longer even talking about artificial intelligence as we understand it. We’re talking about what Daniel Kokotajlo calls “superintelligences”, where AI will become superhuman at everything, and humans will have become largely redundant. At this point, mass job losses will be the least of our concerns.

Kokotajlo should know of what he speaks. He is a former employee of OpenAI – which developed ChatGPT – who left the company because he no longer thought it capable of addressing the risks it was unleashing. He thinks we’ll start seeing superintelligences by 2027 or 2028. Whatever the timeline, the fundamental problem here is one of mastery. Unlike previous technology, even at this crude, early stage, AI is already behaving in ways its inventors don’t understand.

One example Kokotajlo offers is that AI has begun lying. Not making an error or drawing on bad data. But giving answers that are untrue, and which it knows to be untrue. No one knows why this is happening, and its makers are trying to stop it. But if we think of AI as a digital brain that can generate its own thoughts, we should be unsurprised that it starts pursuing its own aims, which have little to do with ours, and which we may not even know. Once that accelerates, any pretence that we are AI’s masters will have evaporated.

Of course, Kokotajlo is offering but one forecast. It is one based on an intimate knowledge of the technology and of the companies producing it; companies he says are well aware of the scenarios he describes and even embrace aspects of them. But even so, prediction is fraught, and his may not come to pass.

All manner of things could intervene in this process. Something like, say, regulation. Kokotajlo thinks this is unlikely because humans tend to be terrible at dealing with risks we haven’t experienced and can’t easily imagine. You might even say he’s assuming history will simply repeat. Perhaps, then, we should resist doing the same.

Put your best foot forward

Malcolm builds expert resumes, cover letters and LinkedIn profiles that make an unbeatable business case for you as a ‘must have’ asset to any employer.