If you don’t work in tech, you have most probably heard of ChatGPT and thought it was amazing. Many who do work in tech share that amazement: a system that mimics humans so well that, many argue, it can pass the Turing test, the gold standard for artificial general intelligence.
But many tech workers who actually used it concluded that it’s not as amazing as people expected it to be. That did not stop those oblivious to the technology’s inner workings from overstating its impact on our society, to the point that people who know exactly where the limits of this technology lie joined them in unison, singing its praises while pretending to warn the masses against the technology falling into “bad hands.”
It appears as if the likes of OpenAI’s CEO are fear-mongering among the masses to pressure governments into passing new regulatory legislation that would give their corporations an unfair advantage. Rather than open-sourcing the technology so it becomes available to researchers, they would control it as a new means of production, accessible only to whoever owns the capital.
Tech workers who actually used the technology years ago, before it even went mainstream, know it is too fickle and unreliable to be used in production, let alone to replace actual workers. They know it’s just another false narrative propagated by Silicon Valley technology determinists to make it look as if this is the future that all Wall Street investors should line up behind.
ChatGPT, as AI, hasn’t replaced workers. And even though its underlying GPT technology was released a couple of years ago, it is only getting the spotlight now because it is accessible to non-tech people, who can talk to and make sense of AI for the first time in history on such a scale.
But you know which AI technologies did replace workers? The kind you can’t talk to and whose decisions you can’t make sense of, better known as machine learning (ML) algorithms, built as an antithesis to classical computer science theory.
You see, in the beginning, computers were machines meant to ingest data and process it into information, providing a human being with knowledge to support their decision-making.
Let’s say, for example, a sales manager is looking at information generated from data gathered from the market in order to decide whether to increase or decrease the price of a certain product. The assumption back then was that this individual’s experience is irreplaceable, and the machine can only support them in making those decisions.
That, however, wasn’t the case with many ML implementations, where decisions were reached directly from massive amounts of data gathered from online users. No information was generated in the process; therefore those models didn’t provide any reasoning behind their decisions. So, in the process, the irreplaceable human was replaced by a machine.
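To make the contrast concrete, here is a minimal sketch in Python of the two approaches described above. It assumes scikit-learn and NumPy are available; the model choice, feature names, and numbers are purely illustrative, not taken from any real retailer’s system.

```python
# Illustrative sketch only: the data and model below are made up for demonstration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Classical pipeline: data is processed into information; a human decides.
weekly_sales = [120, 95, 130, 110]
information = {
    "avg_weekly_sales": sum(weekly_sales) / len(weekly_sales),
    "competitor_price": 9.99,
}
print(information)  # the sales manager reads this summary and sets the price

# ML-style automation: data goes straight to a decision, with no explanation.
rng = np.random.default_rng(0)
X = rng.random((1000, 5))         # stand-in for mass-harvested user/market data
y = rng.random(1000) * 20         # stand-in for historically observed prices
model = GradientBoostingRegressor().fit(X, y)
new_price = model.predict(rng.random((1, 5)))[0]  # the machine sets the price
print(new_price)  # no intermediate "information" a human could reason from
```

In the first half, the machine only summarises; the judgement stays with a person. In the second, the model’s output is the decision itself, and there is no human-readable reasoning to inspect.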
Unlike ChatGPT, you can’t talk to these machines, but you can see them in action on online retail websites such as Amazon. They might not look as smart as GPT derivatives, but thanks to loopholes in American legislation, carved out by lobbyists to gain an unfair advantage through tax avoidance, these machines were placed in a near-monopoly market position to extract data from online users and turn it into profit, abusing suppliers and destroying competing bricks-and-mortar retailers in the process.
One visit to any bricks-and-mortar retailer and you will realise that they are fighting a losing battle. No wonder many of them have had to either downsize or shut down entirely, despite their futile attempts at digital transformation to compete with Amazon online.
And if this was the case for large retailers, what about small shops? They wouldn’t stand a chance, given that everyone now prefers to shop online.
So where else can these ML algorithms be found? Oh, well, they are ubiquitous. In every industry a few of them are consolidating the market and causing mass unemployment, or near-enslavement working conditions: Uber for transport, Deliveroo for catering, YouTube for entertainment, Airbnb for accommodation, and Facebook for media, to name a few. In each of these industries, they manage the market by gathering data and feeding it into ML models, placing themselves in a monopolistic position, in many cases without even owning the physical assets they are managing.
You see, these are the new means of production. Creating such a new class of assets is better known as Uberisation, which was all the rage in the business world a decade ago.
The Uberisation fad did fade, but it left a legacy behind: corporations entrenched in near-monopolistic market positions, continuously applying competition-stifling manoeuvres to consolidate their markets further, fixing prices or wages or, usually, both! All this while neither owning the actual assets being used nor doing the actual work on the ground; people simply have to pay them rent to use their online platforms.
Obviously a problem on such a scale can’t be resolved on an individual basis, and it’s the same governments that created such a fertile environment for capitalistic parasites that should intervene and drain the swamp, by adopting legislation that forces transparency onto those ML black boxes.
In the meantime, don’t feed the beast! Support small businesses, and stop using those rent-seeking platforms whenever possible. If you are a techie, try to find alternatives that are open-source and decentralised, then spread them in your community. The acid test: the harder it is to find an alternative to a platform, the more you should avoid it, because having little or no alternative means it has already killed the competition.
In my own experience, doing this is most rewarding, whether for health and well-being or for richness of life, not to mention that it serves a good cause.