
the Google at Delphi

Stevey's Drunken Blog Rants™

Google is building an AI.

I've gone and let the cat out of the bag right up front, because I don't want you to hear my thesis halfway through and feel I've been wasting your time. If you aren't interested in hearing why I think Google is in the process of building the world's first large-scale Artificial Intelligence, please stop now, because that's all this essay is about.

Still here? Cool. Let's get to it, then!

As usual, today's blog entry will be a synthesis of various personal observations and wild, drunken speculation. In other words, take all this with more than a grain of salt. It's just my own personal opinion of the moment, nothing more.

What do I mean by AI?

First things first: When I say "AI", I'm not talking about the creation of HAL 9000, or any equally science-fiction-ish, possibly evil and/or superhuman intelligence. That might well happen someday, but I have no idea (nor even an opinion) as to whether such an eventuality will occur in this century. In short, this isn't intended to be a science-fiction essay. If you're interested in that sort of thing, I might recommend starting with William Gibson's Neuromancer, then perhaps moving on to an Isaac Asimov or Neal Stephenson or Orson Scott Card book. But that's not the kind of AI I'm talking about.

When I say AI, I'm talking about a system that's reasonably self-aware (although perhaps not to the level of a human being or even a dog), and in particular one that can solve interesting problems that will make the "owner" of the AI a whole bunch of money. Whether or not it can pass the Turing Test is irrelevant, as long as it's a system that's smart enough to make useful decisions and predictions for me and you.

Most software developers consider computers to be nothing more than fancy programmable calculators, and most software we write is unintelligent. I mentioned this very briefly in a short essay I wrote a few days ago about a famous book on AI called Gödel, Escher, Bach. We program the computers to operate at what you might call "base level" intelligence, meaning they're just executing routines that we've told them to execute. Mostly fetching data from databases, putting it back into the databases, and transforming it in various ways. Very mechanical. It's useful, and it's enabling people to buy things through our website, but most of our code isn't intelligent in any sense of the word.

At the next level up the intelligence hierarchy, we're trying to build monitoring systems that observe our base-level systems and try to figure out when things are going wrong — for instance, queues backing up. The monitoring systems aren't intelligent either, but the two kinds of systems combined are slightly more intelligent than systems with no monitoring or diagnostics, because in tandem they're somewhat self-aware.

<rant-mode>

I'll now take a momentary detour to make some of the general inflammatory statements that have made my blogs infamous. What can I say... I aim to displease! My asbestos suit is on for the moment, and I hope you've donned yours as well.

It is my studied opinion that our use of C++ is killing us, because systems written in C and C++ are opaque to external diagnostics — unless you go to tremendous lengths to build windows into the systems yourself. Systems written in higher-level languages (starting with Java, and working up the spectrum) are automatically layered, and have more hooks and facilities for reflection and introspection built into the language and virtual-machine frameworks, so you automatically get code that's slightly more "intelligent". Or at least code that's more amenable to having intelligence built into it.
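To make that concrete, here's a tiny illustration (in Python, standing in for the high end of that spectrum; the names are mine, not anything from our codebase): the runtime will hand you a live snapshot of every thread's stack, no custom instrumentation required. Try getting that out of a running C++ server without a core dump and a prayer.

    import sys
    import threading
    import time
    import traceback

    def dump_all_stacks():
        # Print a stack trace for every live thread in this process.
        # In a C++ server you'd need a debugger or hand-rolled hooks to
        # get this view; here the runtime exposes it directly.
        for thread_id, frame in sys._current_frames().items():
            print("--- thread %s ---" % thread_id)
            traceback.print_stack(frame)

    def worker():
        time.sleep(60)  # stand-in for a thread stuck doing real work

    t = threading.Thread(target=worker)
    t.daemon = True     # don't let the toy worker block process exit
    t.start()
    time.sleep(0.1)     # give the worker a moment to get going
    dump_all_stacks()   # a free diagnostic window into the system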

This is not the only reason C++ is hurting us, not by a long shot. The fact that it's a language that makes it inconvenient to build anything other than brutishly unintelligent systems is only the beginning. I'm gradually preparing a paper in which I demonstrate that we've spent well over (company-confidential large number elided) staff-years doing nothing but porting our C++ code from chipset to chipset, from OS to OS, and from compiler version to compiler version. We're in the middle of two such ports right now, and more will come.

Porting the C++ code has been a horrendous resource drain, and it's been destabilizing our systems more and more each year. Worse, each time around it gets harder, owing both to the astounding growth of our C++ code base and to the fact that C++ is as un-parseable as Perl, and hence not amenable to any kind of automated porting support. Porting C++ code constitutes an unacceptably high tax on our ability to innovate, one that I don't think we should be paying.

And that tax is just from porting the code and hoping it will continue to work at least as well as it did before the port. In addition to C++'s lack of introspection facilities and its lack of portability, I can think of half a dozen other areas in which C++ is killing us, including the nearly nonexistent code-sharing support, the incredibly long and unavoidable build times, the increasing difficulty of hiring people who can write good C++ code and who actually wish to do so, the dangerous traps laid by C++'s language syntax and semantics, the never-ending security holes, and others as well. Believe you me, I am on a crusade against the use of C++ in the creation of our servware at Amazon, and I have much war left to wage.

But not today.

</rant-mode>

Today I'll leave this very brief discussion of our C++ systems with the observation that building separate monitoring systems does make the combined system one step smarter, and it's good that we're doing it. But until we reach the collective understanding that it's every engineer's personal responsibility to write self-monitoring code, and that you can't easily do that in an unintelligent language, our systems are never going to be as effective as we need them to be.
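And so "self-monitoring code" doesn't sound like hand-waving, here's a minimal sketch of the kind of thing I mean (Python again, with names I made up; this is not code from our actual systems). The queue watches its own backlog, so the monitoring question becomes "ask the system" rather than "guess from the outside".

    import time
    from collections import deque

    class SelfMonitoringQueue:
        # A work queue that watches its own backlog and age. Toy
        # illustration: a real system would publish these numbers to a
        # metrics service instead of just answering health checks.

        def __init__(self, max_healthy_depth=1000):
            self.items = deque()
            self.max_healthy_depth = max_healthy_depth
            self.enqueued = 0
            self.dequeued = 0

        def put(self, item):
            self.items.append((time.time(), item))
            self.enqueued += 1

        def get(self):
            enqueue_time, item = self.items.popleft()
            self.dequeued += 1
            return item

        def health_report(self):
            # The queue diagnoses itself: depth, age of the oldest
            # item, and a simple "backing up" verdict.
            depth = len(self.items)
            oldest_age = time.time() - self.items[0][0] if self.items else 0.0
            return {
                "depth": depth,
                "oldest_item_age_s": round(oldest_age, 3),
                "backing_up": depth > self.max_healthy_depth,
                "processed": self.dequeued,
            }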

For now, my advice is that if you are a person who loves C++, you should start learning Java right now. You will very quickly (faster than you think) become a better Java programmer than most folks who learned how to program in Java directly, and you will become far more effective than you are today. I recommend Java because its syntax and semantics are reasonably close to those of C++, and because Java is the most widely supported, well-studied, and well-documented language platform out there next to C++. It's an easy step for you. I promise.

Not My Idea

A simple Google search for "is Google building an AI?" turns up all sorts of links to people who've already reached the same conclusion and written about it.

One of the first entries is Google as Artificial Intelligence, where a blogger tests Google's AI abilities by typing in questions like "what is the best time to visit Montreal", and getting some pretty good results. I might try that approach — pretending Google is already an AI and asking random questions — later in this essay. The author of that blog concludes that Google will eventually become conscious enough to pass the Turing Test, but that it's obviously not there yet.

Here's an even better one: The Secret Source of Google's Power, which is another blog essay (of course!) that asks questions like "What are all those OS researchers doing at Google?" The author links to the OS papers published by Googlers. For that matter, take a look at the entire list, which includes a great many papers on AI, machine learning, data mining, and so on.

In fact there's all sorts of speculation as to what Google is building. It's well-known that they're doing some Firefox work. And they're hiring all sorts of people in different disciplines, including compiler design, cognitive science, OS and kernel development, natural language processing, machine learning, and a host of other interesting areas. You could make a compelling argument that they're building an OS, a giant networked computer, a worldwide data store, or just about anything else. And you'd probably be right.

High-Profile Hires

Although Google seems to hire a lot of Ph.D. researchers, they also hire people from industry. In July they hired Adam Bosworth, who left his position as chief architect and senior vice president at BEA to join Google. He's considered one of the top experts in Web Services, and he was part of the original development team for Microsoft's Internet Explorer. They've also snagged Joe Beda, Microsoft's lead architect for the presentation subsystem of Longhorn (the next-generation Windows), and Josh Bloch, a distinguished engineer for Java at Sun and the author of Effective Java.

They also hired Peter Norvig, the author of one of the all-time best books on AI; he's now their Director of Search Quality.

But we have lots of high-profile hires too; I could easily list the names of some really famous people working for Amazon, although I won't embarrass them. (I'll give you a hint, though: at least one of them is featured prominently in various places in Gödel, Escher, Bach.)

But what about down here in the rank-and-file? In the various development groups, are we trying to hire the same kinds of people as Google is hiring? You can bet your b-hind we're not. We hire people who know C++. That in itself is sufficient to get into most groups here. Basic data structures, basic algorithm analysis, and a brain full of useless C++ trivia, and you've likely got yourself an offer.

In fact, I wanted to make an offer to a Ph.D. in cognitive science this summer, and after I was forced to turn him down, I promised I'd make a huge stink about it. I believe I'll do that right now!

This candidate had recently graduated with a Cognitive Science doctorate, although his earlier background was in particle physics. His Ph.D. thesis proposed a model for how the human brain makes associations, and he'd written a bunch of code to simulate the model. He had done stints as a visiting scientist at Los Alamos research laboratories (you know, the place where among other things they invented the atom bomb), and my phone screen with him was one of the strongest screens in my ten-year interviewing history. He was amazing. And he really wanted to come work for me.

This guy's strongest language was Python, which is in heavy use at Google, NASA, and Industrial Light & Magic, among other places. Not surprising, since that's the language he'd used for his research in cog sci and in the particle accelerator labs. It's a good language, a powerful language, and it's used increasingly (as it happens!) for Artificial Intelligence research.

Unfortunately, this candidate's C++ knowledge, while decent, was a bit rusty, so the two other Bar Raisers who happened to be on the loop forced me to turn him down. I was pissed off then, and I'm pissed off now. So are the recruiters who were on the loop.

Stink, stink, stink. I hope I've fulfilled my promise about making a stink. The whole thing stunk. He went off to be a full-time research scientist at Los Alamos, and we lost a valuable hire.

Shame on us.

Build an AI in a Trillion Easy Steps

Everyone knows how you build an AI. It's well-documented, and there are plenty of great books about it. Peter Norvig's are very good. The basic ingredients are: Search, search, and more search. Sound familiar?

I used to be uninterested in search, primarily because I hadn't thought very deeply about it. I thought of search as mostly a problem of screen-scraping and mundane indexing, sort of the super-scale equivalent of doing M-x search-forward-regexp in Emacs.

Was I ever wrong. I'd forgotten, of course, the famous aphorism that "AI is Search", and I didn't bother thinking about what you'd need to do in order to take search a step beyond screen-scraping and mundane indexing.

I started becoming interested in it through a rather circuitous route. Early in the year I started on an "anarchy project" (that's what we used to call "Just Do It" type projects back at Geoworks) to do a sort of taxonomy of everything we interview SDEs for, called the Fundamental Fifty. It's an interesting (and not yet finished) story in itself. The goal was to write down everything we consider important for an SDE-1 or SDE-2 to know that would make them better at their jobs.

In case you've been following the F50, I should mention that it's been in temporal stasis for a few months, in part because I inherited a bunch of new responsibility that needed to be sorted out, and in part because as we were building the site and the content, we realized that some of the skill areas we'd chosen were rather fast-moving targets: things that didn't have a very long shelf life. The Amazon-specific stuff, in particular, was a moving target that didn't seem very "fundamental" after a while, given that some of our Dev Centers weren't using it at all. So we're in the process of separating out the short shelf-life skills and replacing them with things that are a bit more timeless and generally useful, to avoid the whole thing becoming the "Fashionable Fifty".

In any case, Michael H. did me the tremendous favor of taking the proposed F50 to the Dean of the School of Engineering at Princeton (which you may or may not know is the Alma Mater of our very own Jeff Bezos). Dean Maria Klawe was nice enough to take a look at it, and her feedback was: "Why isn't machine learning in the list?"

Oops. Yeah. Why isn't (or more precisely, wasn't) machine learning in the list? The answer is that at the time, while I'd considered it to be important, I thought it was a bit too "hardcore" to put into a skills list that we wanted all of our engineers to be good at. I already had some pretty hefty stuff in there, and if you've been privy to the endless email-list arguments about whether SDEs need to know things like basic big-Oh algorithm analysis, you'll know that my list was already going to strike fear into the hearts of many an engineer here, without Artificial Intelligence being added to the list.

I wrestled with the question for a while, and decided: "Screw it. It's in." And then I set out to repair my super-rusty AI skills, which I hadn't done much with since I took an AI course in college 13 or 14 years ago. I'd done a fair amount of work with heuristic search, and I've even got a pretty snazzy implementation of the A* search algorithm in my online computer game (which I haven't looked at all year, but which continues to live a life of its own, interestingly). But I'd never looked at neural nets, or genetic algorithms, or Bayesian filters, or statistical pattern recognition, or any of that other machine learning stuff.
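Since I brought up A*: it's a nice, compact illustration of what heuristic search actually means. Here's a toy grid-pathfinding version (textbook Python, nothing like my game's implementation): it always expands the node that looks cheapest by cost-so-far plus an optimistic estimate of the cost remaining.

    import heapq

    def a_star(start, goal, walls, width, height):
        # f(n) = g(n) + h(n): actual cost from the start plus a
        # heuristic estimate of the cost remaining. Manhattan distance
        # never overestimates on a grid, which is what makes the
        # resulting path optimal.
        def h(cell):
            return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

        frontier = [(h(start), 0, start, [start])]
        best_g = {start: 0}
        while frontier:
            f, g, cell, path = heapq.heappop(frontier)
            if cell == goal:
                return path
            x, y = cell
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                        and nxt not in walls
                        and g + 1 < best_g.get(nxt, float("inf"))):
                    best_g[nxt] = g + 1
                    heapq.heappush(frontier,
                                   (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
        return None  # no path exists

    # Find a route across a 5x5 grid, around a diagonal line of walls.
    print(a_star((0, 0), (4, 4), {(1, 1), (2, 2), (3, 3)}, 5, 5))

That much, at least, I knew. The statistical side of the fence was another story.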

I was missing out.

Since then, I've spent more of my precious little free time studying AI for myself than working on the F50. Shame on me, I guess — although I'd hate to ask people to be qualified in something that I know virtually nothing about myself, so hopefully you can't fault me too much for the decision. In any case, I've found that not only is Search wonderful in its own right, it's also absolutely essential to just about everything we do in computing, period.

While I studied search, it gradually dawned on me that Google is building an AI. Maybe we are, too: who knows what those A9 guys and gals are up to down there. Taking over the world, I suspect!

The reason I titled this meandering section "Build an AI in a Trillion Easy Steps" is that there's actually one more important ingredient in the Secret Sauce that constitutes Artificial Intelligence, and that's Data. Lots of it. Tons and tons. Billions, maybe trillions of pieces of data. Do we know anyone who has that much data?

AI Winter and Thaw

By way of background, it's important to remember that in the 1980s there was a huge AI boom, perhaps similar to the dot-com bubble, in which investors poured literally billions of dollars into companies doing AI. It came crashing down in roughly 1990/1991, and the period that followed became known as the "AI Winter" (a term Richard Gabriel, the well-known Lisp and AI figure who wrote the famous "Worse is Better" essay, helped popularize). AI was dead.

Well, actually it had just gone into hibernation. The research was still happening, at least in university settings and hidden-away research labs, but at a slower pace. AI was just ahead of its time by several decades. In the early 1990s there was a wave of consolidation, wherein a bunch of AI companies merged or folded, and the rise of the Web pretty much kept everyone's attention for the next decade, give or take.

As far as I can tell, one of the many reasons that AI "failed" in the 1980s is that researchers were mostly trying deterministic approaches: rule-based pattern matching, logic systems, heuristic search and hill climbing, and so on. At least, that's what my AI course was mostly about in 1991. Those are useful techniques, and they've yielded great results in certain domains.

But the term "Artificial Intelligence" sets expectations pretty high. I imagine investors were looking for a program smart enough to, say, answer the phone at work every time their spouse called, while they were off at the golf course. What they got, however, were polynomial solvers. Investors evidently didn't appreciate the enormous value in having all their polynomial-solving needs eradicated forever, and they slashed their AI budgets down to a first-order polynomial with a coefficient of zero.

What's happened to AI since then?

For starters, the Web came along, making lots more data available than ever before. Unfortunately, a web page is effectively just a bag of words; a good search engine (or AI) needs to be able to guess at the meaning of the page somehow, and word counting doesn't cut it. Natural languages like English are ambiguous, and there isn't a big central database of "concepts" anywhere, so it's hard to figure out things like, oh, whether the word "Ford" refers to a car company, a president, a river crossing, or something else. Search engines need a way to figure out the semantics of the pages somehow.
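To see just how little a bag of words gives you, here's a two-minute demonstration (my own toy example, not anything a real search engine does verbatim): two sentences about entirely different Fords boil down to nearly identical word counts.

    from collections import Counter

    def bag_of_words(text):
        # Reduce a document to word counts, throwing away word order
        # (and with it, most of the meaning).
        return Counter(text.lower().split())

    print(bag_of_words("Ford announced a new truck model this year"))
    print(bag_of_words("President Ford announced a new policy this year"))
    # The two bags share most of their words; nothing in the counts
    # says whether "ford" means a car company, a president, or a
    # river crossing.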

Some people think it would be nice if there were something like a "semantic web", in which everyone used some sort of universal metadata schema to mark up their data, imbuing it with meaning. That way the search engines wouldn't have to guess; they'd know (to some level, anyway) what the content was "about". And presumably you'd get better search results.

I, for one, am not looking forward to going back and marking up every document I've ever authored or produced with a bunch of new semantic data. I doubt you are, either. So a lot of people think the Semantic Web will never really take off, because people are lazy.

So deterministic approaches didn't produce AI, and the semantic web probably won't either. What other options are there?

These days, statistical learning approaches (perceptrons, SVMs, kernels, decision trees, and a bunch of others) are all the rage. They started to get really popular in the 90s, but that was after it was too late to get any more funding. Plus, statistical machine-learning approaches need access to enormous training sets. Getting that sort of data together for processing is, needless to say, a nontrivial task. And even with access to the data, the algorithms still require massive number-crunching power.
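For the curious: the perceptron, the humblest member of that list, fits in a dozen lines (a textbook sketch, not production code). It nudges a weight vector toward every example it misclassifies, and if the data is linearly separable it eventually stops making mistakes. Notice, though, that even this toy wants labeled examples, and the interesting versions want mountains of them.

    def train_perceptron(examples, epochs=20):
        # examples: list of (feature_vector, label) pairs, labels +1 or -1.
        # Classic perceptron rule: on every mistake, move the weights
        # toward the misclassified example (scaled by its label).
        n = len(examples[0][0])
        w = [0.0] * (n + 1)            # last slot is the bias term
        for _ in range(epochs):
            for x, label in examples:
                xb = list(x) + [1.0]   # append the bias input
                activation = sum(wi * xi for wi, xi in zip(w, xb))
                if label * activation <= 0:   # wrong (or on the boundary)
                    for i, xi in enumerate(xb):
                        w[i] += label * xi
        return w

    def predict(w, x):
        xb = list(x) + [1.0]
        return 1 if sum(wi * xi for wi, xi in zip(w, xb)) > 0 else -1

    # Toy training set: learn logical OR (+1 if either input is on).
    data = [((0, 0), -1), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    w = train_perceptron(data)
    for x, label in data:
        print(x, predict(w, x), label)   # predictions match the labels

Now scale that inner loop up to millions of features and billions of examples.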

Do we know anyone who has that much computing power?

If you have enough computing power and enough data, AI (by some more realistic definition) starts to come back within reach. It's still not easy, but it's becoming possible. And I think Google is building one. Just a hunch; I know no more than you do. But it sure seems like the kind of thing they'd do.

However, they're not the only ones trying. Now that the dot-com bust is well behind us, and the world financial markets appear to be recovering, people are looking for places to invest their money. AI is gradually coming back into vogue, although people are being more cautious about it. They're not looking so much for "general" intelligence as they are for practical expert systems that can help solve specific, interesting problems.

What kinds of problems would you be interested in, if you had access to an AI?

What Everyone Wants

Everyone wants to be rich — at least wealthy enough to pursue their interests. Even better, everyone wants to live forever. Woody Allen put it best: "I don't want to be immortal through my work. I want to be immortal through not dying." (Source: type "woody allen immortal" into Google). People mostly figure that if they were to live healthy for a long time — hundreds of years, say — then there would be plenty of time to get rich.

So there's a lot of money in the suite of industries surrounding the human body: medical, pharmaceutical, bioengineering, neuroscience, bioinformatics, and so on. For first-hand evidence of this, go visit the campus of the University of Washington up the road, at the south end near the UW Hospital. There are huge, sprawling, luxurious-looking buildings there that weren't around when *I* went to school there, and most of them are dedicated to health-sciences research.

Heck, the new Computer Science building isn't too shabby, either. Hey, and there's an Amazonian, Albert W., in the picture on the right. Hi, Albert!

There will always be a lot of money pumped into the bio-science fields, because, well, once you're really rich, it's a sure bet that at some point, the thought of buying a bio-science miracle will cross your mind.

Well, the human body and brain are reeeeeaallly complex. It takes a lot of neurons to remember all that C++ trivia. It also requires massive computing power to search (hint, hint) through the vast combinatorial possibilities offered by our genotype in an attempt to decipher and influence it. Who's going to solve problems like these? Who has the data-storage capacity, the computing resources, and the Ph.D. researchers and AI experts on staff? I know who my money's on.

I don't think it's entirely unreasonable to assume that online "AIs" will soon become more reliable medical diagnostic tools than your doctor. Doctors make mistakes all the time. Case in point: my brother Dave, who died of aggressive T-cell lymphoma at age 24, six years and six days ago exactly (2 days after I joined Amazon), was twice mis-diagnosed by his family-practice doctor as having bronchitis, when in fact it was pneumonia caused by a tumor the size of his fist growing behind his heart. Dave had most of the classic warning signs of cancer, but none of us suspected that's what they were, and his doctor was, of course, worthless.

If it had happened today, typing his symptoms into Google might have saved his life. I just typed "pain coughing night sweats" into Google, and the 5th and 6th results, both above the fold, say "cancer symptoms".

You want to know what I think an AI is? Anything smart enough to save your brother's life.

Wrap-up

I was going to give you some more examples of things I'd ask the Google at Delphi, to get you thinking about what exactly it means to have a "practical AI", but I'm afraid my last example has knocked the wind out of me a little. So I'll wrap up.

Google already has an AI. If it's smarter than a doctor, then it's smart enough to be called an AI.

Why, then, am I not going to Google? It's a question several people have asked me lately. I'll tell you why. It's because I've met Jeff Bezos, and I've seen him in action, and he's the smartest guy I've ever met. If anyone in the world can match Google, it's Jeff, and if any company can do it, it's Amazon. I'm surrounded by smart people trying to do the right thing. That's enough for me.

Let's get to it!

[Note, 2/24/2006: Well, it's clear the writing was on the wall. Amazon was very cool, but I couldn't resist the pull, and I left 6 months later. I'll give you three guesses where I went, and the first two don't count. -steve]

(Published December 30th, 2004)

Comments

Technology has a weird way of elevating people into a new position where they are both more powerful and more dependent — they can get more done, but only with the enabling technology. Take that technology away and their knees start to knock and they can't function at all...

Can you imagine trying to get something done on the web without a search system tucked under your arm?

Me neither.

Good blog, sir.

Posted by: Todd Stumpf at December 31, 2004 12:32 AM