If one wants an accurate idea of when AI might become powerful enough to be potentially dangerous, one must first have a sense of how powerful it is now and how fast it is changing. In other words, how intelligent is Artificial Intelligence? Opinions vary widely, with some people pointing to the surprising abilities of modern AI systems as evidence that superintelligence is just around the corner, while others point to AI’s limitations as evidence that it is all just hype. Both lines of reasoning, however, are going about the question all wrong.

Imagine three Chess-playing programs. The first can beat the best human player in the world and works by using a giant lookup table that pairs every possible state of the board with the best possible move from that state—meticulously put together by hundreds of Chess grandmasters over decades. The second program can beat most amateur Chess players but would lose in the first or second round of any major tournament. It works by observing millions of games and recognizing which board positions tend to be better than others (without specifically recording moves and their results in long-term memory), learning how to play Chess for itself without human programmers explicitly telling it anything other than the rules of the game. The third program plays like a beginner and was trained on a variety of tasks, only a few of them being games and none of those being Chess; it then inferred both the rules and the strategy of Chess after observing just a few matches.

Which of these programs is the most impressive? If all you care about is the ability to play Chess, obviously the first program is the best, followed by the second, with the third coming in dead last. But if you are trying to estimate how close AI is to transforming the world, this order is reversed. The first program is obviously useless for anything other than Chess. The second program is also probably limited to Chess, but hints at a process that seems like it could eventually be applied to other things. The third program sounds like something out of science fiction—one might even question whether it is sentient. What matters when measuring an AI’s intelligence is not how well it performs on any specific task; what matters is the process it uses to accomplish the task—and how generalizable that process is.
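To make the contrast in process concrete, here is a minimal sketch of the first two programs (everything below—the helper functions, the features, the weights—is a toy stand-in I have invented for illustration, not real chess code). The first program can only retrieve answers its creators stored; the second scores positions it has never seen using weights learned from games.

```python
# Hypothetical sketch: two very different processes for choosing a move.
# All helpers below are toy stand-ins, not a real chess engine.

def legal_moves(position):
    """Stand-in: a real engine would generate actual legal moves."""
    return ["e2e4", "d2d4", "g1f3"]

def apply_move(position, move):
    """Stand-in: a real engine would return the resulting board state."""
    return position + " " + move

def extract_features(position):
    """Stand-in: real features might count material, mobility, king safety."""
    return [len(position) % 7, position.count("e"), 1.0]

# Program 1: a giant human-built lookup table. Every answer was put
# there directly by its creators; anything not stored is unplayable.
OPENING_BOOK = {"start": "e2e4"}  # imagine millions of grandmaster entries

def program_one(position):
    return OPENING_BOOK[position]  # raises KeyError on any unseen position

# Program 2: a learned evaluation. No position is stored anywhere;
# weights tuned on millions of games score positions never seen before.
WEIGHTS = [0.4, 1.2, -0.3]  # imagine these were learned, not hand-set

def score(position):
    return sum(w * f for w, f in zip(WEIGHTS, extract_features(position)))

def program_two(position):
    return max(legal_moves(position), key=lambda m: score(apply_move(position, m)))

print(program_one("start"))      # works only because "start" is in the table
print(program_two("any board"))  # works on positions it has never seen
```

The point of the sketch is the shape of each process: the first program’s knowledge ends at the edge of its table, while the second’s applies to any position you hand it.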
AI has a history of going through periods of great excitement followed by so-called “AI winters” of disappointment. Actual progress in AI, however, has had a fairly steady relationship to the amount of time, money, and processing power invested in it; what has varied is how much people care about the things AI happened to get good at most recently. ChatGPT is a big deal because it is a useful product…though to be fair, this success is genuinely relevant to AI progress because it is spurring a flood of investment.

So then how general is AI? As far as I am aware, there is no test for generality as such, so I can’t cite a numerical value or show you a nice line graph illustrating how this nonexistent number has been changing over time. And I certainly can’t tell you when that imaginary line crosses the generality of human cognition, because we don’t have a measure for that either. To me, it seems likely that current AI has a fair bit of generality, less than a human’s, but I can’t be any more specific than that.

We can at least gain an intuition for how to think about generality, however, by considering how AI works. We are used to thinking of computers as highly capable but very inflexible. A pocket calculator, for example, is extremely good at arithmetic but can’t tell an original joke or play Starcraft. Programmers can add conditions, variables, and other tricks to make their code more flexible, but these additions make the program bigger and harder to change, and in any case take a lot of time to build. Further, there are some tasks, like classifying images, that are extremely difficult to express as a set of predefined instructions. Modern AI, specifically Machine Learning, blows past these limitations.

With Machine Learning, there is no human-written series of instructions for the computer to follow. Instead, vast quantities of data are fed into a network of artificial neurons, each of which applies a surprisingly simple mathematical operation and passes its output on to the next layer of neurons until a final result emerges. At first, this produces random garbage, but after each attempt, the system sends an error signal back through the network, adjusting all the weights and biases so that the same data will produce a result with less error. As more and more data passes through the network, the neurons eventually settle on a set of weights and biases that lets the system give consistently good results most of the time, even on data it has never seen before.
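As a minimal sketch of that process (a toy network written from scratch; the XOR task, layer sizes, and learning rate are all invented purely for illustration), the loop below repeats forward pass, error signal, and weight adjustment until the outputs stop being garbage:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data, invented for illustration: learn y = XOR of two inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A tiny two-layer network: weights and biases start out random,
# which is why the first outputs are random garbage.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate: how far each error signal moves the weights
for step in range(5000):
    # Forward pass: each "neuron" applies a simple weighted sum plus
    # a squashing function, then hands the result to the next layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Error signal sent backward: compute how much each weight and
    # bias contributed to the error on this data...
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # ...then adjust every weight and bias to shrink that error.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # should be close to [0, 1, 1, 0] after training
```

Nothing in the loop is specific to XOR: swap in different data and the same process learns a different task, which is exactly the flexibility traditional code lacks.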
Large Language Models, like ChatGPT, take this a step further. They are trained on vast quantities of text from all over the Internet to learn the underlying structure of language. The result is a foundation model, which can then be fine-tuned to learn specific tasks relatively quickly. Interestingly, while a model’s raw ability to predict text improves with data and processing power along a fairly predictable curve, many subtasks within communication do not follow it: the model starts out incapable and stays that way for a long time until, at some unpredictable point, it rockets up to superhuman—which may be why ChatGPT can sound so intelligent one moment and stupid the next.
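The shape of that phenomenon can be sketched with invented numbers (nothing below is measured from any real model; the power-law exponent and the twenty-step subtask are assumptions for illustration): overall prediction error falls along a smooth curve, yet a subtask that requires getting many steps right in a row stays near zero for a long time and then shoots upward.

```python
# Invented-for-illustration scaling curve: prediction error falls
# smoothly as a power law in compute. The shape, not the numbers,
# is the point.
def prediction_error(compute):
    return (1.0 / compute) ** 0.25

# A subtask that requires getting 20 steps right in a row: smooth
# per-step improvement compounds into an abrupt jump in task success.
def subtask_success(compute, steps=20):
    per_step = max(0.0, 1.0 - prediction_error(compute))
    return per_step ** steps

for compute in [1e0, 1e2, 1e4, 1e6, 1e8]:
    print(f"compute {compute:>9.0e}  "
          f"error {prediction_error(compute):.3f}  "
          f"subtask {subtask_success(compute):.3f}")
```

Run it, and error falls smoothly from 1.0 toward 0.01 while subtask success sits near zero before jumping past one half: steady improvement in the parts can look like a sudden leap in the whole.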
In the Chess analogy I described at the beginning, traditional code works like the first program, relying entirely on knowledge directly given to it by its creators. Most Machine Learning systems work like the second program, making lots of observations and recognizing patterns. Large Language Models are not quite like the third program, but they are clearly moving in that direction.

The path to superintelligent AI is not that of a dog-like AI becoming child-like, then a college student, then Einstein. Instead, it is a gradual broadening of the range of skills present in a single model. And there is one particular skill that changes everything: AI research. Humans built AI, which means there is some cognitive skill we have that makes this possible. Once an AI learns this critical skill, it can potentially build the next generation of AI, which builds the next, and the next, and so on, taking humans out of the loop so that development can proceed much faster. And while such an autonomous AI might be bottlenecked by access to processing power, architectural innovations that allow it to run more efficiently may be within its reach. Unlike humans, the intelligence of AI is not limited by brain size, the need to conserve energy, the difficulty of communicating ideas between individuals, or the glacially slow pace of biological evolution. AI minds could be expanded to fill entire server farms, consume as much power as a small city, make unlimited copies of themselves, share new understandings almost instantly, and grow exponentially—quickly leaving humans far behind.

There is no way to know when AI will reach AGI or superintelligence. The rate of progress has been rapid and accelerating, but how much improvement is needed for AI to match the full generality of human cognition is unknown, for the simple reason that we don’t fully understand how our own minds work. It seems likely that a few—but only a few—key breakthroughs are needed…but how long does it take to have an insight? With the amount of money and expertise, including help from AI, being thrown at the problem, perhaps not long. The key breakthroughs that make superintelligence a mere question of economics could be months, years, or decades away. Ultimately, however, speculating about such timelines misses the more important question: will we be prepared?