
Will AI Replace Your Job?

AI will make two kinds of jobs obsolete. But it won’t displace a third kind. Here’s how you can vault to safety.

AI is a fascinating technology that’s here to stay. That’s why you must first understand what AI does, so you can realize what it doesn’t. Only then can you thrive in the AI era. Many vested interests, often peppering their creations with the word “AI”, misdirect your attention toward an AI crisis to line their own pockets. (Read more about pretences in my essay: Ordinary is the Extraordinary – How to Succeed Simple and Not Naïve)

For our journey, we’ll start with two important concepts: representation and learning.

Researchers have explored machine intelligence since the early 1940s. AI or not, computers solve computable problems by operating on mathematical symbols. Symbols represent real-world things numerically: in “Let x = 3 be the number of apples,” x quantifies the apples without being the apples themselves.

You transform a real-world problem into numbers to make it computable.

For example, to know if something fits your budget, you assign “p = $50” as the item’s price and “n = 5” as the quantity. The computer responds with the cost: “n × p = 5 × $50 = $250.” You then check if you can afford it. It’s a simple calculation.

However, you could also ask the computer for the quantity that you can get for $210. The computer would respond with “n = 4.” This is an example of optimization, which is computing under constraints toward an objective.
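The two interactions above can be sketched in a few lines of Python (a toy illustration, not any particular tool):

```python
def cost(price, quantity):
    """Plain calculation: total cost of the items."""
    return price * quantity

def max_quantity(price, budget):
    """Optimization under a constraint: the largest whole quantity
    whose cost stays within the budget."""
    return budget // price

print(cost(50, 5))            # 250, so it fits a $250 budget exactly
print(max_quantity(50, 210))  # 4, the most you can get for $210
```

Both functions take numbers standing in for real-world things, compute, and hand the numerical answer back to you to act on.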

In both cases, you take real-world objects, get numerical values for symbols like price and quantity, perform calculations, and use the numerical results back in the real world.

Similarly, computer scientists and software engineers devise ways of distilling the real world into numbers, which is an art in itself. But you don’t see this in action if you’re constantly misdirected by the computer, never looking at the computer scientist. Observing, experiencing, and taking action in the real world is a huge part of being human. Computing merely gives us a shortcut: like a thought experiment, it tells us what would happen without our having to do things in the real world.

In the past decade, language saw several breakthroughs in getting distilled into numbers, making it more computable. Word2Vec converted words into lists of numbers (vectors). It didn’t encode the meaning of words, only the relationships between them: it could tell that “good” and “bad” are opposites without knowing what either means on its own. GPTs made the representation richer by accounting for the flow of language rather than just words in isolation.
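To make this concrete, here’s a toy sketch of word vectors in Python. The three-dimensional vectors are invented for illustration (real Word2Vec vectors have hundreds of dimensions and are learnt from data); the point is that relationships between words become geometry:

```python
from math import sqrt

# Invented toy "word vectors" (not real Word2Vec output).
vectors = {
    "good":  [ 0.9,  0.1, 0.3],
    "great": [ 0.8,  0.2, 0.4],
    "bad":   [-0.9, -0.1, 0.3],
}

def cosine(a, b):
    """Similarity of direction: near 1 for related words,
    negative for opposing ones."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

print(round(cosine(vectors["good"], vectors["great"]), 2))  # high, ~0.98
print(round(cosine(vectors["good"], vectors["bad"]), 2))    # negative, ~-0.8
```

The numbers say nothing about what “good” means; they only place it near “great” and away from “bad”.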

GPTs learn numerical representations by emulating language. They feed on volumes of text and learn to predict the next word given the previous series of words (called the context). This concept, called generative modelling, has existed for quite a while, but has only recently become feasible at scale. The trained model can turn text into numbers, which can then be used for task-specific computations.
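A drastically simplified sketch of the same objective: count which word follows which in a tiny corpus, then predict the most frequent successor. Real GPTs use neural networks over long contexts, but the training goal, predicting the next word, is the same:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# For each word, count the words that follow it.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    """Most frequent word seen after `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": seen twice after "the", vs "mat" once
```

A model like this only knows its training text; feed it different text and it predicts differently, which is exactly why the training corpus matters so much.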

Now that we’ve understood representation, let’s attend to learning.

GPTs are further fine-tuned (trained) on specific tasks by leveraging the rich numerical representations they have learnt. Tasks are usually textual input/output pairs. For example, the input could be natural language and the output SQL, teaching the model to write queries requested in English.
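Such a fine-tuning dataset is often just a list of input/output pairs serialized one JSON object per line. The field names and examples below are illustrative, not any specific vendor’s format:

```python
import json

# Illustrative natural-language-to-SQL training pairs.
pairs = [
    {"input": "List all customers from Paris.",
     "output": "SELECT * FROM customers WHERE city = 'Paris';"},
    {"input": "How many orders were placed in 2023?",
     "output": "SELECT COUNT(*) FROM orders WHERE year = 2023;"},
]

# One JSON object per line: the common "JSONL" layout.
jsonl = "\n".join(json.dumps(p) for p in pairs)
print(jsonl.splitlines()[0])
```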

GPTs can do complex operations. For example, to send out personalized invites to 1,000 people, you might use a template with placeholders. But anyone comparing two invitations would notice the boilerplate. GPTs, however, can generate different sentences for each invitee, making it appear as if everyone got a novel invitation. There’s still a pattern, but it’s not as noticeable. All of this is possible because GPTs can “understand” inputs as a whole instead of evaluating words individually.
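The template approach described above looks like this in Python; the names and wording are made up. Comparing any two outputs makes the shared skeleton obvious, which is exactly what GPT-generated variation hides:

```python
# Classic placeholder template: every invite shares the same boilerplate.
template = "Dear {name}, join us on {date} to celebrate {occasion}!"

guests = [
    {"name": "Asha", "date": "May 3", "occasion": "our launch"},
    {"name": "Ben",  "date": "May 3", "occasion": "our launch"},
]

invites = [template.format(**g) for g in guests]
for invite in invites:
    print(invite)
```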

In addition to their trained tasks, GPTs can be instructed, through prompt engineering, to do tasks they were not trained on. A fundamental idea that makes this possible is transfer learning. Transfer learning works like an analogy: A → B as C → D. If you’ve taught a model to map natural language to Python code, then with some prompt engineering it can map English to SQL too. That’s because the two mappings share similarities the model can exploit, just like humans would. The model “pretends” it’s doing Task A → B while actually doing Task C → D, accounting for the differences. Thus, it’d be incorrect to say the AI has learnt to do something “new” that we didn’t train it on. It’s just applying old patterns to new tasks. Many times, this works out well. Other times, it wreaks havoc.

That’s our peek “under the hood” of what makes AI tick. Let’s apply that knowledge to understand what AI does and does not do.

The primary differences between humans and AI boil down to creativity, even in highly technical jobs.

However, creativity is often misunderstood. Pessimists argue that today’s ideas are mere recombinations of yesterday’s. They are projecting their own lack of originality onto the world. Thomas Edison said, “To invent, you need a good imagination and a pile of junk.” The pessimists would attribute inventions to the junk.

In truth, however, creativity enters at all levels. Humanity keeps adding value by treating products as the next raw materials. But building each of those products needs imagination as an ingredient, not just piles of junk. Unfortunately, whatever imagination produces then becomes an “obvious fact,” and thus loses admiration.

Creativity is you originating something from nowhere. You need creativity to solve a previously unseen math problem without cheating. That’s how the very first person would have had to solve it. It’s the essence of fluid intelligence.

AI, however, merely recombines what it already knows. The only creative input it gets is human prompts. It happily randomizes over its vast knowledge base, making you mistake the unfamiliar for the creative. Also, remember that AI can synthesize knowledge it was never explicitly taught, but that’s because of transfer learning, not creativity. Pretending that one problem is the same as another works in a lot of areas, but it’s not creativity. Rather, the invention of transfer learning itself was creativity.

In machine learning, it’s also important to understand the exploration and exploitation trade-off. Exploration is finding out new things. Exploitation is using what you’ve learnt. GPTs exploit while relying on “training data” for exploration. If everyone on Earth used only GPT to write, without originating fresh content, we’d get trapped in a complex echo chamber.
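The trade-off is classically illustrated with an epsilon-greedy “bandit” strategy (a standard textbook example, not how GPTs themselves train): mostly exploit the best-known option, but occasionally explore a random one.

```python
import random

random.seed(42)
true_payouts = [0.3, 0.8, 0.5]   # hidden quality of three options
estimates = [0.0, 0.0, 0.0]      # what we've learnt so far
counts = [0, 0, 0]
epsilon = 0.1                    # fraction of time spent exploring

for _ in range(1000):
    if random.random() < epsilon:
        choice = random.randrange(3)              # explore: try anything
    else:
        choice = estimates.index(max(estimates))  # exploit: best so far
    reward = 1 if random.random() < true_payouts[choice] else 0
    counts[choice] += 1
    # Incremental average of the rewards seen for this option.
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print(counts)
```

Set epsilon to zero and the strategy can lock onto a mediocre option forever; that is the echo chamber the paragraph above warns about.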

It is humans who come up with ideas. Machines operate on representations of ideas. Even the operations themselves are human ideas. However, hustlers today claim that “ideas are cheap.” I counter, “only cheap ideas are cheap.”

Here’s how humans can leverage creativity in the era of AI.

There are two kinds of work that are pure mechanistic labour devoid of ideas. One is about the WHAT, the second is about the HOW.

First are tasks you’d love to avoid. Even in software engineering, programmers would happily skip the 80% of “boilerplate” code they have to write to get to the 20% that requires creative thinking. In graphic design, one would rather realize a vision than get tangled in the arbitrary layout of a graphics tool. In short, any work has a small creative component followed by an enormous quantity of labour, just as a building’s creative architecture is followed by the tedious work of laying bricks. This covers the WHAT.

Second are tasks that are creative, but for which people take mechanistic shortcuts. You read to learn something new, yet “content marketers” spam websites with ChatGPT nonsense devoid of a single original idea. “Novice programmers” let GPT write shabby code that they don’t understand. Companies that “value your business” provide immediate assistance through unhelpful chatbots. You expect humanity in these tasks, but are bombarded with mechanistic laziness. This covers the HOW.

Both kinds of tasks deserve to be automated. It frees up humans for creative things and exposes the crowd delivering substandard work. Anything you’re bored of doing, especially because you see a “pattern”, is ripe for automation, precisely because there’s a pattern to automate.

But the one thing that won’t get replaced is creative thinking; anything requiring you to think. Thinking is not a job in itself, but an important aspect of all jobs. It’s in the HOW rather than the WHAT. Anything that’s creative cannot be automated because by definition it does not have a pattern.

Of course, researchers claim that AI creates “art”, assuming that art is merely a combination of colours on a screen. Art is about the message, not the medium. Yet all that AI automates is the medium, while hardly bothering about the message. That’s not art, but a feeble attempt at it.

In short, any job with a creative component that’s ethically done is safe.

For example, when building a product, you actually talk to customers, understand them, and create something valuable. You do not take the shortcut of “building a random product through this 1-click AI tool” and “forcing it down customers’ throats.”

Or, when writing code, you understand the design, where the product is going, and how it might evolve. You do not “generate” spaghetti code that somehow solves the immediate problem so that you can go home early.

Or, when writing content, you understand the audience, walk in their shoes, and then write to communicate a powerful idea. Ideas that will matter to someone, make them understand, bring a smile to their face. You do not try to merely generate words on the screen and call that abysmal act “writing.”

In short, the landscape of automation looks like:

  1. What AI can do are things you don’t want to do. The repetitive kind.
  2. What AI can do are things you shouldn’t do. The shortcut kind.
  3. What AI can’t do are things you want to do. The creative kind.

As long as you put humanity into every work you do, don’t skip steps, and truly think through things yourself, you’re safe. It’s not the kind of job that matters, but how it’s done. The entire luxury industry is built on the premise that value depends on how things are built rather than on what is built. Why wouldn’t that apply to you?

Did you know?

I wrote 2,400 words of prewriting, a single draft of 2,300 words, and edited it 3 times to get to 1,750 words and clearer language.