Right, so AI.
Game AI is terrible, and doesn't represent the state of modern AI programming. The goal of a game AI is not to win. The goal of a game AI is to provide a reasonable challenge for the player to overcome, on as tiny a computational budget as possible, while taking as little developer time (and money) as possible to create. This is why early game AIs, especially in strategy games, were allowed to cheat: they don't exist to play optimally or some similarly navel-gazing objective; they exist to give the player a challenging experience. You generally *can* create a game AI that learns and adapts over time until it beats any player. It's just that nobody cares, because that takes time (money) better spent elsewhere, eats computational power better spent elsewhere, and the end result is a game AI that gives players a shitty experience and sends them off to play something else.
This is why game AI uses simple scripts. You're supposed to be able to learn its moves and countermoves, and thus gain mastery over the system and beat it. It also helps that such scripts are highly predictable (easy to debug on a short deadline), can be tweaked by designers via config files, and take very little time to implement.
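To make that concrete, here's a minimal sketch of the kind of scripted state-machine enemy this describes. Every name and threshold is hypothetical, purely to illustrate the "if x do y" style, not any particular game's code:

```python
# A scripted state-machine enemy: cheap, predictable, and easy for a
# designer to tune by editing the thresholds in a config file.
from enum import Enum, auto

class State(Enum):
    PATROL = auto()
    CHASE = auto()
    ATTACK = auto()

class ScriptedEnemy:
    def __init__(self):
        self.state = State.PATROL

    def update(self, distance_to_player: float) -> str:
        # Plain 'if x do y' transitions based on one observable value.
        if distance_to_player < 2.0:
            self.state = State.ATTACK
        elif distance_to_player < 10.0:
            self.state = State.CHASE
        else:
            self.state = State.PATROL

        actions = {
            State.PATROL: "walk waypoint route",
            State.CHASE: "move toward player",
            State.ATTACK: "swing weapon",
        }
        return actions[self.state]

enemy = ScriptedEnemy()
print(enemy.update(distance_to_player=25.0))  # walk waypoint route
print(enemy.update(distance_to_player=5.0))   # move toward player
print(enemy.update(distance_to_player=1.0))   # swing weapon
```

Once a player notices the 10-unit aggro radius and the 2-unit attack range, they own this enemy forever. That's the point.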
History of AI:
But all of those are just ancient techniques from before the first AI winter. Back at the dawn of the field, half a century ago, there was optimism that such state machines, with their 'if x do y' tables, could create powerful AI. Then came notions like 'combinatorial explosion,' which is where most people's modern misconceptions about AI come from (the whole 'they just read scripts' thing seen in this thread). At that point, it became very clear that such techniques could not, in fact, do much of anything, and so came the AI winter of the 1970s: a time when research funding dried up and the field was much more subdued.
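For a sense of what 'combinatorial explosion' means here: a naive game-tree search has to consider roughly branching_factor ** depth positions. A quick back-of-the-envelope, using the commonly cited average branching factor of about 35 for chess:

```python
# Combinatorial explosion: positions a naive search must consider
# grow as branching_factor ** depth. ~35 is the usual chess estimate.
for depth in (2, 4, 6, 8):
    print(depth, 35 ** depth)
# 2 1225
# 4 1500625
# 6 1838265625
# 8 2251875390625   <- trillions of positions just 8 half-moves deep
```

Tables of hand-written rules simply cannot enumerate their way through growth like that, which is what killed the early optimism.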
Then came the real progress that led up to the field of today, as the earlier problems began to be addressed with new techniques. Neural networks rose to prominence: a technique which attempted to solve problems through learning algorithms loosely based on the workings of neurons in a brain, algorithms which could be trained to act on imprecise, noisy, and previously unseen information. Likewise, the idea of 'embodied cognition' came into popularity: the belief that the best path to powerful AI was to simply have it explore the world around it, figuring things out as it went. That was the '80s.
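As a toy illustration of that learning idea (not any historical system), here's a single artificial neuron trained by gradient descent on noisy, made-up data. Real networks stack many such units into layers, but the principle of nudging weights to reduce error on examples is the same:

```python
# One artificial neuron learning to separate noisy 2-D points.
import math, random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Noisy training data: points above the line y = x are labeled 1, below 0.
data = []
for _ in range(200):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    label = 1 if y > x + random.gauss(0, 0.1) else 0  # noisy labels
    data.append(((x, y), label))

w1 = w2 = b = 0.0
lr = 0.5
for epoch in range(100):
    for (x, y), label in data:
        pred = sigmoid(w1 * x + w2 * y + b)
        err = pred - label
        # Gradient step: nudge each weight to reduce the error.
        w1 -= lr * err * x
        w2 -= lr * err * y
        b -= lr * err

# The trained neuron now classifies previously unseen points.
print(round(sigmoid(w1 * -0.5 + w2 * 0.5 + b)))  # expect 1 (above the line)
print(round(sigmoid(w1 * 0.5 + w2 * -0.5 + b)))  # expect 0 (below the line)
```

Notice there's no rule table anywhere: the behavior lives entirely in three learned numbers, and it handles points it has never seen before.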
At this point it must be mentioned that there were two main camps in the AI field, referred to loosely as the 'scruffies' and the 'neats.' The scruffies were all about things like neural networks, essentially believing that powerful AI would be achieved through algorithms which worked, but whose full functioning couldn't really be understood. You can train a neural network to do something, but picking apart how it does it is effectively as difficult as figuring out someone's thoughts by watching their neurons fire: you get some general ideas, but not a comprehensive, detailed picture. The neats, on the other hand, wanted more mathematical rigor, essentially believing that powerful AI would be such a complicated system that a messy approach would end up being too difficult to get anything useful out of. The two went back and forth for a while. The neats pretty much owned the early days, as simple models were more powerful when constrained to '60s-era computational power. The scruffies became more popular after the events and lessons leading up to the AI winter brought the earlier efforts of the neats to naught (well, aside from game AI, where '60s-era AI is still very much in vogue today).
Then came the '90s, when everything changed in a big way, creating the modern field of AI. Statistical and economic theory were brought in, resulting in "The Victory of the Neats." Statistical reasoning, with roots in things like Bayes' Rule for updating beliefs, became central to the field. This gave not only the ability to extract information from a set of data, as the neural networks had done a decade earlier, but to do so in a way which was mathematically rigorous, for which error values could easily be calculated, and which was also far more transparent. Suddenly, you could not only prove that your algorithm worked, you could say exactly why it was working. You now had things like Bayesian networks, which could not only learn like neural networks, but, by virtue of their internal transparency, made it easy to create AI which could reason about its own reasoning. From economic theory came the idea of the utilitarian "intelligent agent," or "ideal rational agent," which gave a firm foundation to the theory of how an AI should act and react to its environment, including unknown factors it had previously been unaware of. Thus, the benefits of the scruffies' techniques were incorporated into modern, comprehensive methods that still fit within the neats' overall framework of mathematical rigor and provability. These systems can learn, reason, understand natural language, and investigate. They can even learn and reason about their own ability to learn and reason!
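For the flavor of Bayes' Rule as belief updating, here's a worked example with made-up numbers: a disease with 1% prevalence, and a test that's 95% sensitive with a 10% false positive rate:

```python
# Bayes' Rule: update a prior belief in light of new evidence.
prior = 0.01          # P(disease)
p_pos_given_d = 0.95  # P(positive | disease)  -- sensitivity
p_pos_given_h = 0.10  # P(positive | healthy)  -- false positive rate

# P(positive) via the law of total probability.
p_pos = p_pos_given_d * prior + p_pos_given_h * (1 - prior)

# P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
posterior = p_pos_given_d * prior / p_pos
print(f"{posterior:.3f}")  # ~0.088: belief rises from 1% to ~9%
```

Every number in that calculation is inspectable, which is exactly the transparency-plus-rigor combination that won the '90s for the neats.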
This has been the paradigm over the past 20 years. The result? Self-driving cars. Search engines which categorize and find information across exabytes of data on the web. An AI which beat the human champions of Jeopardy, then went on to become better than doctors at diagnosing some illnesses and to invent cooking recipes. A stock market increasingly run by financial corporations' AI. A multibillion-dollar industry of information brokers trading anonymized information for companies to plug into their marketing algorithms. And so on, pervading more or less subtly nearly every aspect of our lives.
Not that you would know any of this from the media. Typical descriptions of AI are still your ridiculous '60s-era AI, even in much of sci-fi, which is why things like Watson or self-driving cars have come as such a shock to people. The general public isn't aware of just how far things have come, and how fast they're moving now. And it's actively to their detriment: without knowing where AI is at, they simply can't understand contemporary events like the news about the NSA... or just how utterly intense AI's benefits and dangers are in the modern world.
As for AI consciousness... well, they already think about thinking and reason about reasoning, so there's that. The Watson AI, for example, had hundreds of individual learning algorithms at the lower levels (look up IBM's videos about Watson for details; they're definitely worth watching, especially the 45+ minute ones which go into real detail). Other algorithms kept track of how well each of these 'thought processes' worked for varying categories of questions (the categories themselves had to be figured out too, so it's definitely non-trivial), and adjusted how much Watson trusted each of them for different questions accordingly. So that's not terribly difficult; and by that measure, it is, strictly speaking, conscious. Though, coming back to the embodied cognition idea from earlier, it's certainly a different kind of consciousness, as there is much less there to be aware of. It doesn't have millions of rods and cones, millions of pressure- and heat-sensing nerves, ears, hunger or thirst, or a health and body whose protection it was created to ensure.
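Here's a toy sketch of that arrangement. To be clear, the scorers, categories, and trust-update rule below are entirely hypothetical stand-ins, not Watson's actual architecture; the point is just the shape of it, lower-level answerers plus a meta-level that tracks how much to trust each one per category:

```python
# Weighted ensemble with per-category trust, updated from feedback.
from collections import defaultdict

class WeightedEnsemble:
    def __init__(self):
        # Laplace-smoothed per-(scorer, category) accuracy counts.
        self.hits = defaultdict(lambda: 1)
        self.trials = defaultdict(lambda: 2)

    def trust(self, scorer, category):
        return self.hits[(scorer, category)] / self.trials[(scorer, category)]

    def answer(self, candidate_scores, category):
        # candidate_scores: {scorer: {candidate: score in [0, 1]}}
        totals = defaultdict(float)
        for scorer, scores in candidate_scores.items():
            w = self.trust(scorer, category)
            for candidate, score in scores.items():
                totals[candidate] += w * score
        return max(totals, key=totals.get)

    def feedback(self, scorer, category, was_right):
        # The meta-level 'reasoning about reasoning': adjust trust per category.
        self.trials[(scorer, category)] += 1
        if was_right:
            self.hits[(scorer, category)] += 1

ens = WeightedEnsemble()
ens.feedback("geo_lookup", "GEOGRAPHY", was_right=True)
ens.feedback("date_matcher", "GEOGRAPHY", was_right=False)
pick = ens.answer(
    {"geo_lookup": {"Toronto": 0.9, "Chicago": 0.4},
     "date_matcher": {"Chicago": 0.8, "Toronto": 0.1}},
    category="GEOGRAPHY",
)
print(pick)  # geo_lookup's higher trust in GEOGRAPHY tips it to "Toronto"
```

The system's 'beliefs about its own thought processes' are just those trust numbers, and they're learned the same way everything else is.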
edit: oops, I accidentally a 1300 word essay >_>