The new ‘imitation game’

So I’ve been having this internal debate on whether to continue in the line of my two previous articles on dissidence or take a break from that and address ‘the elephant in the room’ for many people around the world: Artificial Intelligence and its practical, immediate social effects.

It’s kind of evident that I decided to go with the second option, and yet, in this complex constellation of potential and conflict, the two topics aren’t entirely divided from each other.

So, the first thing I want to clarify is that this is not a moral discourse on AI: it’s not a condemnation of it, and it’s certainly not proselytism in favor of its adoption and benefits. There are a lot of people doing that already, and I find it counterproductive for the most part, for the following reasons:

  • It’s a logical step in mass production and digitalization of society.
  • Stemming from the above, it’s completely supported by the “big money” because it’s cost-effective and highly efficient for business.
  • That’s it: it follows a direction that modern civilization has progressively taken since the Industrial Revolution (hence it’s extremely difficult to stop), and it’s effectively making rich people richer.

This doesn’t mean I’m a cynic or that I’m OK with anything that has mainstream support. Not at all; there are huge efforts and sacrifices that could improve society, and they should be made!

But I’m a person who believes in “choosing your battles wisely”, and after much thinking about the pros, cons, and impact of AI, I simply believe it’s something that will only move forward (the specifics may be a bit out of the scope of this article).

No software is an asshole

People are.

We’re not at a point in time and research where Skynet is suddenly gonna realize humans need to be exterminated, thankfully.

Yeah, I know ChatGPT is sooooo capable, right?

It is, as a matter of fact; as a technologist, I can’t help but admire the amazing work OpenAI has put into developing it. Having said that, I use it quite often, and more than once I’ve ended up proving it wrong with a little bit of math and a little bit of programming, and I’m no genius. Several times it apologized to me after I proved that its answers were inaccurate, even after I had stated all the needed data/requirements clearly. More than once.

So, no, neither ChatGPT nor Sophia the creepy robot is organizing a human massacre to take over. Have a drink and celebrate that we’re not there yet, LOL.

The truth is, most AI-related functionality currently implemented is highly specialized, due to the immense complexity involved in recreating human-like consciousness. Even ChatGPT, which has extremely broad knowledge in many fields, is a very sophisticated expert conversation emulator; it certainly doesn’t think on its own, let alone have a personality.

So yeah, the media will make a big deal of things, and well, they capitalize on generating feelings and reactions in you and me; take it with a “grain of salt” when they go way too “Terminator-mode” about AI topics.

So, not a single piece of software has ill intent or is an asshole, but a lot of people are, and that is the key concern I’d like to address in this article.

A human dilemma

AI is a divisive topic; every significant change with massive repercussions is. And in our globalized landscape, the effects of something as disruptive as these technologies are broad and not always easy to fully understand.

Given this is such a broad subject, I will focus on one of the key concerns: the impact on employment.

If you think this topic is future tense, I’m glad you’re reading this article so I can tell you that you really need to catch up and take this as seriously as you can. Earlier this year I was consulting for a company that eliminated an entire team supporting a business function: they replaced that function with AI at a fraction of the cost and got a dramatic boost in productivity. So yeah, the fact that there’s no “rebellion of the machines” doesn’t mean there’s no danger in it.

What I was able to see was just one example, but it’s happening, and I believe some of the noticeable post-COVID negative changes in certain job markets are tightly related to AI technologies becoming more proficient and capable of replacing humans. No point burying our heads in the sand on this one.

But I think we still have time to enter a dialogue as a society on how to manage the changes in the workforce, and that’s where I believe we need to make a serious effort not to be assholes. Let me illustrate this with some bullet points, since they help me focus and stay on topic:

  • Inequality is reality: Everyone’s different and no one recipe fits all. Part of the sad reality of the “AI business function takeover” is that it begins with the most basic positions, the ones with repetitive, relatively ‘low-skilled’ tasks. So from the get-go, it affects a huge population segment if you think about it.
  • Before you assume you’re safe: Plans for AI development are ambitious and we’re just scratching the surface. There’s a lot of talk about AI replacing programmer jobs, and being a programmer requires quite a bit of study, just to give one profession as an example. The reality is that properly trained expert systems can reduce the workload in medicine, law, accounting, teaching and mentoring, and many more professions. This is global impact, friends.
  • The indirect impact is still impact: Have you heard about the butterfly effect? It’s essentially a systemic principle, and there’s no denying it: most of us humans live in an ever-evolving system. Let’s not be assholes; a fast food employee or a customer support agent losing their job does have an impact on us all, and that impact is amplified when it’s not one person but a massive group of people suffering these undesirable changes in their realities.

So we need to start giving this some serious thought: what are we going to do with the people currently performing the “AI-replaceable” job functions? If there’s a roadmap for the development of the technology, and if there’s a roadmap for the shareholders to be optimistic about the use of the technology and its savings, there also has to be a roadmap for the people who are being suddenly ejected from the system.

I’m happy to listen to ideas; I have some of my own, but those will come in a different article.
