AI explained, simply

Laypersons trying to wrap their heads around Artificial Intelligence may enjoy this explanation of the direction in which AI technology is advancing.

The first thing you should understand is that the basic principles of AI design have not changed in a long time: the AI is a collection of artificial neurons arranged into three or more layers, with the first layer taking the input and the last layer producing the output. The neurons are joined by weighted connections, and a man-made learning algorithm adjusts those weights; the learning ability of the AI is limited by this algorithm.
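
For readers who want to see the shape of that design, here is a minimal sketch in Python. The layer sizes, the sigmoid activation, and the use of NumPy are illustrative assumptions, not part of any real system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three layers: 4 input values -> 5 hidden neurons -> 2 output values.
# The "connections" are the numbers in W1 and W2; learning means
# adjusting these numbers, not rewiring the layers themselves.
W1 = rng.normal(size=(4, 5))
W2 = rng.normal(size=(5, 2))

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def forward(inputs):
    hidden = sigmoid(inputs @ W1)   # input layer feeds the hidden layer
    return sigmoid(hidden @ W2)     # hidden layer feeds the output layer

print(forward(np.array([0.2, 0.5, 0.1, 0.9])))
```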

Absolutely everyone uses this design, from early hobbyists to Google’s highest-paid programmers.

The learning algorithm works in one of two ways:

  1. Input-output matching:
    When the AI needs a human to tell it whether it did well (e.g. identifying an image), it is given thousands of inputs along with the correct outputs for those inputs. The algorithm finds connection weights that make the AI produce the correct outputs for those inputs, with the result that a new input similar to a known one produces a similar output.
  2. Trial and error:
    When the AI can tell on its own whether it did well (e.g. winning or losing a game), without input from a human, the algorithm tests the AI, modifies it a little, and tests it again, countless times. If a modification makes it do worse, it’s undone; if it makes it do better, it’s kept. (Both modes are sketched in toy code after this list.)
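
Here is a toy sketch of both modes under heavily simplified assumptions: a bare weight vector stands in for the whole network, and the data and the score function are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# 1. Input-output matching: thousands of inputs paired with the correct
#    outputs; the weights are nudged until the AI's answers match them.
X = rng.normal(size=(1000, 4))                    # made-up inputs
y = (X[:, :1] + X[:, 1:2] > 0).astype(float)      # the "correct" outputs
w = np.zeros((4, 1))
for _ in range(500):
    pred = sigmoid(X @ w)
    w -= 0.5 * X.T @ (pred - y) / len(X)          # move answers toward y
print("matching accuracy:", ((sigmoid(X @ w) > 0.5) == (y > 0.5)).mean())

# 2. Trial and error: modify the AI a little, test it, and keep the change
#    only if the score improves; no human is needed to judge the result.
def score(weights):
    return -np.sum((weights - 1.0) ** 2)          # stand-in for "did it win?"

w2 = rng.normal(size=6)
best = score(w2)
for _ in range(10_000):                           # "countless times"
    candidate = w2 + rng.normal(scale=0.1, size=w2.shape)   # small tweak
    if score(candidate) > best:                   # better: keep it
        w2, best = candidate, score(candidate)
    # worse: the tweak is simply not kept (undone)
print("trial-and-error score:", best)
```

The trial-and-error loop above is plain hill climbing; real systems use cleverer update rules, but the keep-the-change-only-if-it-scores-better shape is the same.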

So, there are three ways in which AI can advance:

  1. Computers get more powerful, obviously allowing everything to be upscaled
  2. The learning algorithm is improved
  3. A new way to use the current design of AI is discovered

Options 1 and 2 make AIs better at what they already do: vertical advancement. Option 3 gives them new abilities: lateral advancement.
However, every lateral advancement follows the same pattern: if the learning algorithm is input-output matching, the AI just learns to do something humans can already do, only faster and worse. If the learning algorithm is trial and error, the AI is playing a game or optimising something.

So, say that someone attempted to teach an AI to write code based on what a user describes:

  • It would have to be input-output matching, as an AI cannot judge for itself whether it has matched a user’s specification.
  • It would be fairly trivial to teach an AI to write HTML based on simple English or even drawings; all the AI has to do is translate words/drawings into HTML representing words, colours, animations, and shapes (a toy version is sketched just after this list).
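
As a hypothetical illustration of that second bullet (the descriptions, the snippets, and the scikit-learn model are all stand-ins invented for this sketch), input-output matching on text could look like this:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# A handful of (plain-English description, HTML snippet) training pairs.
descriptions = [
    "a big red heading",
    "a small blue heading",
    "a red button",
    "a blue button",
]
snippets = [
    '<h1 style="color:red">...</h1>',
    '<h3 style="color:blue">...</h3>',
    '<button style="color:red">...</button>',
    '<button style="color:blue">...</button>',
]

# Learn which snippet goes with which kind of description.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(descriptions, snippets)

# A similar input gives an output similar to that of a similar input:
print(model.predict(["a blue heading"])[0])   # should pick the blue heading snippet
```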

But the number of input-output pairs required for an AI to understand even half of what a user is talking about in their software requirements would be enormous, and even then it would take a miracle for the AI to translate those requirements into code modules and link the modules together in a way that does anything even close to what the user asked for.

AI can replace the most basic web programmers (people working on purely visual stuff), but AI as we know it cannot handle anything more complex, no matter how much more powerful computers get. Software programmers are in zero danger until this series of events happens:

  • An individual designs a new set of AI rules that is far better than the current set everyone uses, doing something that major companies and generations of researchers have failed to do
  • They get hired by Google, which will inevitably offer said individual as much money as they want
  • Google uses the new AI for everything they can find an excuse to use it for
  • Everyone gradually copies Google over the next decade

Until the Alan Turing of AI shows up and then gets hired by Google, human programmers are in no trouble. Plus, until AI can judge quality for itself (which isn’t likely to happen for a very long time), human programmers will always be useful, since the quality of work from an AI is at best equal to the quality of work from a human.
